Thanks for taking a look.
]]>Thank you for reaching out with this well-documented and thorough report. I really appreciate the time and effort you put into it. We’re going to look into this and see what we can do. Do you know approximately when these crashes started to occur? Was it after the latest OS or Unity update, or before? You’re using 6000.0.45f1, so I would think it goes back to April ‘25 at least, but the OS build is 1.10.0 from Oct ‘24, so this would likely have been going on for a while if it was related to the OS. Did you experiment with B3E.251015.01 to see if the issue replicates on the latest Nov ‘25 OS release, 1.12.1?
Could you send the full bug report to [email protected]? It should allow you to upload the whole thing and then I can transfer it from there and we can take a look at it.
]]>We are experiencing frequent, seemingly random native crashes on Magic Leap 2 devices across our fleet, all running OS build B3E.240905.12-R.038. These crashes occur on multiple different devices and persist across Unity editor and package version upgrades.
Our Unity app has shown these issues when built with Unity 6000.0.45f1 and 6000.3.11f1. We also use OpenXR features related to hand tracking, localization, meshing, and spatial anchors; the exact list is shown in the attached screenshot.
The app is crashing with SIGILL and SIGSEGV signals originating from within Magic Leap’s native libraries — primarily libml_perception_session.so and libunity.so. We are also seeing identical crash patterns in the Leap Brush sample app (com.magicleap.leapbrush), which leads us to believe the root cause is deeper than our application code. Here’s a tombstone from the bugreport taken that captures the crash in the LeapBrush app, as well as the log file.
bugreport-demophon_aosp-B3E.240905.12-R.038-2026-03-13-13-34-04.txt (8.6 MB)
leapbrush tombstone_09.txt (1.0 MB)
Here are the descriptions of the crashes we see with our app:
The most common crash. The stack trace shows xrLocateSpace → Session::GetPose → ml_pose_combine_fn hitting an illegal instruction. We have captured multiple identical instances of this crash back-to-back on the same device (see the attached tombstone_08 and tombstone_09).
We captured a crash in the system compositor (magicflinger) where HeadTracking::RePredictPose in libml_perception_head_tracking.so segfaulted during CompositorBackend::UpdatePerceptionState. This crash caused a full device restart.
Captured via Sentry — a SIGILL crash in the OpenXR input provider during xrSyncActions, called from Unity’s XRInputSubsystem::Update in the normal player loop.
Using the Unity Profiler, we captured two instances of the OpenXR runtime stalling on the main thread:
In both cases, there was nothing underneath these calls in the profiler — Unity is waiting on a single native call into the OpenXR runtime that simply does not return for seconds at a time. These stalls cause display loss on the headset.
We also captured a SIGABRT from an uncaught ml::perception::Exception in SpatialAnchor::UpdateTrackedAnchors().
What We Have Tried:
From our application (8 crashes on a single device, Feb 11 – Mar 4):
Entire bugreport zips were too large to upload here, but I can send them if they would be helpful. Any guidance as to how to increase the stability of our app on the headset and to mitigate the multi-second stalls in xrBeginFrame and xrSyncActions would be greatly appreciated. Thanks.
bugreport-demophon_aosp-B3E.240905.12-R.038-2026-03-04-13-44-34.txt (9.2 MB)
tombstone_00.txt (1.1 MB)
tombstone_03.txt (1.2 MB)
tombstone_09.txt (1.2 MB)
tombstone_08.txt (1.2 MB)
tombstone_07.txt (1.1 MB)
tombstone_06.txt (1.3 MB)
tombstone_05.txt (1.3 MB)
tombstone_02.txt (2.1 MB)
]]>I’m going to have to look into this more but could you share with me some images or videos of the example test space you’re using? I’d like to see how you have it set up and see where the drifting happens if possible. Would you be able to share that?
Thank you,
]]>We need to localize within a space that contains multiple zones, where each zone looks identical to the others.
We need to display objects (in our case, squares) at specific positions, and these objects must remain fixed at exactly the points where we placed them, even when we move around inside the zone. This works initially, but because the zones look similar, the ML loses its localization and relocalizes into a different zone. When that happens, the ML adopts that zone’s starting point and draws the objects relative to the new starting point.
To prevent this, we want to support localization using markers (e.g., ArUco tags, large numbers, or large characters) or differences such as different colors on surfaces for each zone.
For this reason, we want to understand whether the ML uses the world camera and visual information from the environment for localization. More specifically: can the ML detect different colors or symbols and use these visual differences to determine the correct position inside a space?
We also tried creating a separate space for each zone, but even then the ML localized to the wrong zone while running the application.
Another approach we tested was skipping room scanning entirely and placing anchors only via markers.
However, even with this method, we observed drifting.
This raises the question of how the ML establishes its coordinate system and what methods it uses to determine distance and position relative to the markers. For example:
– Does a marker need to be visible at all times?
– Can we build a consistent coordinate system without a spatial scan by aligning the coordinate origin to a marker?
Ideally, we would have three markers visible to the camera. Based on the initial view, we would draw the object positions, and whenever enough markers are visible, we could sync.
For this to work, it would be important to know how to set the coordinate system’s origin to a specific marker so that our entire system remains consistent around a fixed point instead of around the ML’s internal starting point.
From the HoloLens, we know that it was able to determine its position relative to three infrared markers. Therefore, we are wondering whether something similar is possible with the ML.
Thank you for your help.
]]>I’ll look into this and see what I can find. Is there a specific use case you’re working on that you could tell me more about or is this more of an academic question about the workings of the technology?
Thank you,
]]>I was having difficulties locating some third-party libraries with CMake. It took me some time to find that the problem arose since I’ve included Magic Leap SDK in my project.
Long story short: the Magic Leap SDK’s find-package module file $ENV{MLSDK}/cmake/FindMagicLeap.cmake incorrectly overwrites the CMAKE_MODULE_PATH variable. Indeed, line 105 (for Magic Leap SDK v1.12.0) reads as follows:
set(CMAKE_MODULE_PATH "${FOUND_MLSDK}/cmake/")
The CMAKE_MODULE_PATH variable is a list of paths, not a single path, and it’s set by the consumer project. Unconditionally setting it, as the Magic Leap SDK does, clears whatever the consumer project has set, which is a problem IMHO, but opinions may vary.
The fix is easy: Add the path of the Magic Leap SDK to the CMAKE_MODULE_PATH variable rather than setting it. Line 105 thus becomes:
list(APPEND CMAKE_MODULE_PATH "${FOUND_MLSDK}/cmake/")
If you think of another way to fix this, please let me know. At the moment, in my consumer project, I’m saving the value of the CMAKE_MODULE_PATH variable before calling find_package(MagicLeap) and then restoring the saved value afterwards.
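For reference, the save/restore workaround in my consumer project looks roughly like this (the variable name is arbitrary):

```cmake
# FindMagicLeap.cmake clobbers CMAKE_MODULE_PATH, so save the list,
# run the find module, then restore what we had.
set(_saved_module_path "${CMAKE_MODULE_PATH}")
find_package(MagicLeap REQUIRED)
set(CMAKE_MODULE_PATH "${_saved_module_path}")
```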
Just so I’m understanding correctly, you made a Unity project that is trying to use the ML Spectator feature and are now having issues connecting the spectator phone app to your Unity app?
Best,
Corey
Thank you for the great feedback!
I’ll relay this to the proper team and see what we can do.
Best,
Corey
I’ve been looking into this a bit more for you.
You can try using an older WebRTC package if that’s viable for your project to see if you can force the encoder to be hardware driven.
Also, just out of curiosity, could you try an older version of Unity, like 2022.3 LTS to see if you get the same behavior?
If not, that’s totally fine, just trying to pin down the root cause here.
Best,
Corey
It would be very helpful if the MarkerData class were to have a field like UTCTimestamp containing a DateTime of when the picture used to calculate the pose was taken. This would be huge for my team. We are looking into computer vision alternatives just so we can generate that feature ourselves.
]]>At the same time, I have run the ML Spectator app on my Android phone. Both the phone and the ML2 are connected to the same wifi. In the ML Spectator app on my phone, I can see the IP address of the ML_Spectator Unity app, but when I try to connect to that IP, it fails.
]]>Has there been someone who managed to get the Hardware accelerated H264 encoder to work with Unity WebRTC?
]]>Do you recommend using an older WebRTC package?
]]>Correct, it should not be defaulting to the mesa encoder, to me that seems like the main issue here.
This may be coming from the Unity WebRTC Package.
If possible, please try and force Unity WebRTC to initialize with the hardware encoder:
WebRTC.Initialize(EncoderType.Hardware);
Please let me know if this helps at all!
Best,
Corey
webCameraName = "Camera 2";
We are currently aiming for lower than that and using the same camera (which is the only MR camera, to my knowledge):
Debug ThirdEye-HAL Setting up recording session Width 960 Height 720 Format 35 usage 515 flags 0 Type 3 FPS 30
But the Buffer is allocating a much bigger chunk:
Debug MLMRCamera-Buffers init BufferItemConsumer setting width 2048 height 1536 buffer count 6 format 35
Not sure if that’s intended or not; I’m guessing you just downscale and always allocate the max? I’m not usually the one who works on our calls, but the others are busy, so this is a bit of a new area for me.
I’ve tried VP8 and that seems to fill up even faster than the H264.
Also, from the following log line, I don’t think the device should be defaulting to the mesa encoder:
Info org.webrtc.Logging HardwareVideoEncoder: initEncode name: OMX.mesa.video_encoder.avc type: H264 width: 960 height: 720 framerate_fps: 60 bitrate_kbps: 283 surface mode: false
I’ve tried these diligently, and these don’t work.
]]>Unity Editor version: 6000.0.62f1
ML2 OS version: Version 1.12.1
Build B3E.251015.01
Android API Level 29
MLSDK version: 2.6.0-pre.R15
Error messages from logs:
We had put a pause on our Magic Leap Unity build for the past year. We recently got back into it and are making sure the build is up to date with our new features. We are currently having issues with the calls, specifically the video, which stops sending frames after about 5-10 seconds.
Our calls using Unity WebRTC used to work very well. We did update a lot of things when booting the project back up: Unity 6, WebRTC 3.0.0 and of course the ML OS and SDK.
Looking through the Android logs, I’m seeing a new system that I don’t remember seeing back in the day (MLMRCamera3Hal). I’m getting constant chatter from this system for every frame:
2026-02-23 18:43:32.796 3360 3523 Warn Camera3-Device No cvip timestamp available
2026-02-23 18:43:32.796 3294 9174 Info MLMRCamera3Hal ENTRY : processCaptureRequest
2026-02-23 18:43:32.796 3294 9174 Info MLMRCamera3Hal EXIT : processCaptureRequest
From our web rtc stats I consistently get around ~300 encoded frames before the Encoding Queue fills up:
WebRTC stats: {"t":"2026-02-23T19:29:17.285559Z","lp":"n1","rp":"nl","a":{"snd":{"lvl":0.0,"b":24754},"rcv":{"lvl":0.0,"b":31414}},"v":{"snd":{"enc":291,"df":290,"kf":31,"b":677510}}}
Then the new spam of logs are:
2026-02-23 19:29:17.450 3360 3533 Warn Camera3-Device No cvip timestamp available
2026-02-23 19:29:17.450 3294 3294 Info MLMRCamera3Hal ENTRY : processCaptureRequest
2026-02-23 19:29:17.450 3294 3294 Info MLMRCamera3Hal EXIT : processCaptureRequest
2026-02-23 19:29:17.456 23070 4963 Error org.webrtc.Logging HardwareVideoEncoder: Dropped frame, encoder queue full
2026-02-23 19:29:17.472 23070 4963 Error org.webrtc.Logging HardwareVideoEncoder: Dropped frame, encoder queue full
Some initial Encoder logs if it helps:
org.webrtc.Logging HardwareVideoEncoder: Format: {color-format=21, i-frame-interval=3600, mime=video/avc, width=960, bitrate-mode=2, bitrate=283000, frame-rate=60.0, height=720}
And right after, it updates the format; unsure if this is expected:
org.webrtc.Logging HardwareVideoEncoder: updateInputFormat format: {color-transfer=0, color-format=21, slice-height=720, image-data=java.nio.HeapByteBuffer[pos=0 lim=104 cap=104], mime=video/raw, width=960, stride=960, color-range=0, color-standard=0, height=720} stride: 960 sliceHeight: 720 isSemiPlanar: true frameSizeBytes: 1036800
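For what it’s worth, that frameSizeBytes does line up with a semi-planar YUV 4:2:0 buffer (1.5 bytes per pixel), which matches the isSemiPlanar: true in the log, so the buffer size itself looks sane. Quick arithmetic to check (plain Python, nothing Magic Leap specific):

```python
# Semi-planar YUV 4:2:0 (e.g. NV12): a full-resolution Y plane plus a
# half-size interleaved UV plane, i.e. 1.5 bytes per pixel.
width, height = 960, 720
y_plane = width * height        # luma plane, 1 byte per pixel
uv_plane = width * height // 2  # interleaved chroma at quarter resolution
frame_size = y_plane + uv_plane
print(frame_size)  # 1036800, matching frameSizeBytes in the log
```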
I looked at your forums and saw mention of some Unity WebRTC examples, but going through your examples I don’t see anything relating to WebRTC.
Curious if there is a new way of accessing the camera that I should be doing on the Magic Leap. We have the same Android webrtc stack working on the Argo and Realwear but I’m currently struggling with this one.
Any info would be greatly appreciated!
]]>From what I know, this may be an Android-related issue.
A few things you can try to see if the issue resolves itself:
Keep wifi enabled
Do NOT connect to a network
Plug in the ethernet
Try streaming again
Please let me know if any of these worked or not.
Best,
Corey
Yes, the MLCamera API is deprecated. You can still use it but you may run into issues.
The current recommended way to capture and visualize the cameras is via the Pixel Sensor API.
Here is some documentation with an example script for exactly what you’re looking for:
Also, you can always take a look at the Magic Leap Examples Unity Project from the MLHub. You can find fully built scenes for various features, including accessing and visualizing a camera stream that you can use as reference.
Just in case you still need to download and try the MLHub, here is the download page for it to get you started!
https://ml2-developer.magicleap.com/downloads
Please let me know if you have any more questions! Good luck to you with your project.
Best,
Corey
I’m trying to access the real RGB camera of Magic Leap 2 through Unity. I want to:
Display the camera feed in a RawImage (or similar UI panel).
Capture a photo and save it to disk.
However, whenever I try to use MLCamera, it always says the API is deprecated. I’m not sure how to proceed using the current Unity SDK.
I’ve seen references to MagicLeap.Android.AndroidCamera, but I’m not sure how to implement it correctly for continuous preview and taking pictures in Unity.
Does anyone have a working approach or example for:
Showing the real camera feed in a Unity UI (RawImage or 3D quad)
Capturing a still image and saving it
Using the current, supported APIs (not deprecated)
Any guidance or example scripts would be really appreciated.
Thanks!
]]>We are trying to connect MagicLeap to ethernet as you can see on the image, by using an ethernet and charging compatible USB-C adapter between the MagicLeap and the router. (We are doing this to avoid wifi, because we are planning to use this in a high EMC environment where wifi doesn’t work reliably.)
When we do this, the device stream in the MagicLeap hub can’t open. It says Stream error. All other functions work. Only problem is the stream not starting. The MagicLeap Hub is connected, device files and apps are visible. Our own app can use and send sensor data and frames with this setup. The device charges as well, with its original charger connected through the USB hub.
Note that with this setup, the MAC and IP addresses are actually not the Magic Leap’s but the USB hub’s. Maybe this induces some rarely seen bug that prevents the stream from opening.
I’m asking for help to get this stream going somehow.
Thanks.
]]>This sounds like overload to me. The device might be scanning for networks, but there are so many that the scan hits a timeout or something similar. The ML2 may not be filtering as many networks as a mobile phone does.
You could try manually adding a network if possible instead of scanning.
Another option may be to limit the wifi scan to 5 GHz or 2.4 GHz if possible on the device. This may not be possible, but I will try to confirm on my own personal device once I have the time.
Sorry you encountered this issue! Please let us know if you found a solution or if you need any further assistance.
Best,
Corey
I think what’s going on here is that Cesium is overriding the camera configuration at runtime or during initialization. Cesium’s own camera is managed and certain properties are set at runtime. That’s why setting the property in the inspector doesn’t work, because Cesium overrides it later.
Try to ensure that the camera background is set to Solid Color and the background color is (0,0,0,0) at runtime via a script that sets these properties after Cesium has applied its configuration. You can try doing this in LateUpdate().
If that ends up working for you, you can try to find a better/alternate solution that prevents Cesium from setting these properties at runtime, like creating your own camera setup but using XR Rig as your base camera instead of Cesium’s default Camera.
Best,
Corey
Unity Editor version: 2022.3.21f1
ML2 OS version: 1.5 (using OpenXR)
MLSDK version: 2.5.0
I’m using the Unity ML Examples 1.5 screen capture script to record Mixed Reality (MR) videos in my Unity application on Magic Leap 2.
Recording works correctly in most scenes. However, in one scene that uses Cesium for Unity, the MR portion of the recording (1440×1080 area) shows a black background instead of the real world. The Cesium map renders correctly, but the background behind it is solid black in the recorded video.
Important details:
Cesium uses its own camera.
I attempted to set the camera background alpha to 0 in the Inspector.
After building and deploying, the alpha value resets to 255.
If I use the ML2 Video Recorder tool and manually reduce the opacity below 0, the recording works correctly and the background shows the real world as expected.
My question:
Is there a way to programmatically set the recording camera’s opacity (alpha) below 0 at runtime so the MR capture includes the real-world background correctly?
Or is there a specific configuration required when using Cesium’s camera with Magic Leap’s MR recording pipeline?
Any guidance would be greatly appreciated.
]]>We were able to confirm that the devices were working fine outside of the convention so I’m leaning towards the headsets not being able to understand the huge number of networks around it.
We’re seeing 650+ networks in our area alone across both the 2.4 and 5 GHz bands.
Has anyone else seen this while running OS 1.12.1? We have previously run in similar conditions on older OS versions with no issues.
]]>This question falls a little outside of the scope of the forums as we do not typically provide custom code reviews, but I will help try and point you in the right direction here.
Just glancing over some of your script, this line jumped out at me a bit:
xy = input_pt/self.resolution - 0.5
This assumes that the magic leap intrinsic values are in normalized space (0-1), but the values are actually given in pixel space, which is a much wider range. There may be more things like that dashed throughout the code, so give it a good look over and ensure that you are running calculations in the correct space.
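To illustrate the difference, here is a quick sketch with placeholder intrinsic values (fx, fy, cx, cy below are made up for illustration, not your camera’s actual calibration):

```python
import numpy as np

# Placeholder pixel-space intrinsics: focal lengths and the principal
# point are reported in pixels, not in a 0-1 normalized range.
fx, fy = 365.0, 365.0  # focal lengths in pixels (hypothetical)
cx, cy = 270.0, 238.0  # principal point in pixels (hypothetical)

def pixel_to_camera_ray(u, v):
    # Correct for pixel-space intrinsics: subtract the principal point
    # and divide by the focal length, both expressed in pixels.
    return np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

def pixel_to_camera_ray_wrong(u, v, width=544, height=480):
    # Normalized-space assumption, like the snippet above: dividing the
    # pixel coordinate by the image resolution gives a different ray.
    return np.array([u / width - 0.5, v / height - 0.5, 1.0])

print(pixel_to_camera_ray(300.0, 250.0))        # ray from pixel intrinsics
print(pixel_to_camera_ray_wrong(300.0, 250.0))  # noticeably different ray
```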
If you have any more questions please let me know! Good luck to you with your project!
Best,
Corey
For example, I scanned an ArUco marker without being localized. What exactly does the ML do so that the objects generated from the ArUco marker remain in their position even after I remove the marker or move away from it?
From the code I understand that ML creates its own coordinate system. But what is the starting point (origin) of this system, and is it possible to move the coordinate system’s origin (0,0,0) to the scanned marker?
My last question is: Is it possible to use infrared markers to help the ML orient itself in Spaces that look almost identical, so that it does not switch between them?
More generally, could infrared markers help the headset improve its spatial orientation?
Greetings and thanks for the help!
This is a common issue with the way that ADB pulls many small files from a device.
For example, transferring one 300 MB file is much faster than transferring a 300 MB folder containing many files (hundreds or thousands). The amount of data may be the same, but the performance is very different when dealing with one file versus many files at once.
One suggestion would be to put the many small image files inside a single container like a .zip, .tar, or some other format. You would then need to handle decompression/extraction, but the transfer time should be much shorter.
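If it helps, here is a rough sketch of the packing step (the zipfile approach and the function name are just an illustration, not anything Magic Leap specific):

```python
import os
import zipfile

def pack_frames(src_dir, archive_path):
    """Pack every file in src_dir into one uncompressed zip archive.

    ZIP_STORED skips compression, which is usually fine for already
    compressed image formats (JPEG/PNG) and keeps packing fast.
    """
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_STORED) as zf:
        for name in sorted(os.listdir(src_dir)):
            path = os.path.join(src_dir, name)
            if os.path.isfile(path):
                zf.write(path, arcname=name)
    return archive_path
```

You would then pull the single archive off the device (for example with adb pull) and extract it on the host.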
Let me know if you have any other issues or if you find a solution that works for you!
Best,
Corey
We ran into an issue with the Github project - it seems to be highly dependent on an “ml2irtrackingplugin”, which my co-coder can’t find anywhere in the project. Would it be possible for you to embed it in this repo and, if possible, include the code to compile it myself?
My teammate added an Issue in the github with that question and a few other questions, but I wanted to add this here in case the notification for that Issue did not go through.
Thanks again!
Graeme
]]>You can use the Pixel Sensor API to obtain images from the world cameras, however they will continue to be used for tracking.
]]>I’m using Magic Leap Hub 3 (v3.0) to download recorded data from a Magic Leap 2 headset over USB-C (stock/original cable). The data is generated by our own application and consists of saved image files (e.g., camera frames) stored on the headset. Download sometimes proceeds normally, but occasionally becomes very slow.
Are there known issues or best practices for transferring many small files (images) from ML2 via Hub 3 over USB-C?
Is there a supported faster method for bulk transfer of app data?
Thank you!
Jenny
]]>Noah Schiffman.
]]>ML2 OS version: 1.12.1
I am trying to use the raw and processed depth images streamed to a Python client to track tools with IR-reflective spheres attached to them. Specifically, I use the raw depth image to capture the locations of the spheres in 2D, and then use the processed depth frame to estimate their 3D position. The tracking seems to work only at a specific depth; otherwise, the detected spheres resemble a scaled-up or scaled-down version of my tool. I’m using the following code to undistort the sphere centroids acquired from the raw depth image and to compute the ray that should correspond to the sphere position in local depth-camera space, inspired by this post: Processing the depth frames - Unity Development - Magic Leap 2 Developer Forums.
def undistort(input_pt):
    xy = input_pt / self.resolution - np.double(0.5)
    r2 = np.sum(xy * xy)
    r4 = r2 * r2
    r6 = r4 * r2
    xy_rd = xy * (1 + (self.d.k1 * r2) + (self.d.k2 * r4) + (self.d.k3 * r6))
    xtd = (2 * self.d.p1 * xy[0] * xy[1]) + (self.d.p2 * (r2 + (2 * xy[0] * xy[0])))
    ytd = (2 * self.d.p2 * xy[0] * xy[1]) + (self.d.p1 * (r2 + (2 * xy[1] * xy[1])))
    xy_rd[0] += xtd
    xy_rd[1] += ytd
    xy_rd += np.double(0.5)
    return (xy_rd * resolution - center) / focal
# getting centroids
u = M["m10"] / M["m00"]
v = M["m01"] / M["m00"]
depth = raw_frame.depth[int(v), int(u)] * 1000  # convert to mm
uv = [u, v]
unit_vec = undistort(uv)
ir_tool_centers.extend([
    unit_vec[0],
    unit_vec[1],
    depth
])
# spheres_xyz is a reformatted version of ir_tool_centers
xyz = spheres_xyz[i, :].copy()  # [x_ray, y_ray, depth]
xyz[2] += cur_radius  # z' = depth + radius
temp_vec = np.array([xyz[0], xyz[1], 1])
spheres_xyz[i, :] = temp_vec / np.linalg.norm(temp_vec) * xyz[2]
Is this code correct? If not, what should I do to increase the robustness of the tracking?
]]>I’m trying to use an OptiTrack motion capture system to override the Magic Leap 2’s internal head tracking. The goal is to have the ML2 camera position/rotation controlled entirely by OptiTrack when
tracking data is available, and fall back to ML2’s onboard tracking when OptiTrack is unavailable.
Setup
The Problem:
No matter what I try, the ML2’s internal head tracking keeps interfering. When I move my head, virtual objects wobble/shake because both tracking systems seem to fight for control. When OptiTrack is not sending data, the view is completely frozen (no tracking at all).
What I’ve Tried
Hierarchy
ML Rig (XR Origin)
└── Camera Offset
└── Main Camera (TrackedPoseDriver: Update and Before Render)
Virtual objects are in world space (not parented to camera).
Questions:
How can I completely disable or override the ML2’s head tracking to use external position data from OptiTrack?
Is there a specific API or approach that allows manual control of the camera’s world position without the XR system overriding it?
Any help would be greatly appreciated!
Unity Editor version: 2022.3.67.f2
ML2 OS version: 1.12.0
Unity SDK version: 2.6.0
Host OS: Windows
Error messages from logs (syntax-highlighting is supported via Markdown):
]]>But just to clarify, this issue is caused by changes made by the Unity Engine in version 6 and above. That’s why downgrading your project to a pre-Unity 6 version helped resolve the issue.
Unfortunately this isn’t something that Magic Leap can solve alone since it’s a Unity Engine issue, not Magic Leap specifically.
Please let me know if you need any further assistance and best of luck to you with your project!
Best,
Corey
As part of our evolving journey and in response to changing market dynamics, we are focusing on deeper technology partnerships to create the next-generation of AR solutions and discontinuing sales of Magic Leap 2 globally in March 2026. Please contact your reseller for specific order cutoff dates.
We are deeply grateful for your partnership and the impact we’ve made together with Magic Leap 2.
We are not announcing further product plans at this time.
]]>Two quick questions about eye-image capture and data export on ML2.
1. Is 30 Hz the maximum eye-camera image capture rate, or is there a way to increase it further?
2. Downloading large numbers of eye images from the Hub is slow. Are there features that can accelerate this? Is it possible to export the eye captures as a video?
Thank you very much!
Zipai
]]>We recently received information from our hardware distributor that Magic Leap 2 devices are no longer available for sale in Japan due to a halt in production.
We’ve also heard similar rumors elsewhere, so we wanted to ask the community (or Magic Leap team directly) for clarification. Is this a temporary situation, or has production and sales of Magic Leap 2 been permanently discontinued?
Any official confirmation or additional insight would be greatly appreciated.
Thank you!
]]>It seems to be caused by a performance overload. The app I am working with can cause extreme lag. It’s not a complete app and this is unintended of course, but at some point, this causes the controller to disconnect. The tracking does not show in the app or home menu. I can get home by the wrist shortcut.
It’s almost like there is an overload that causes some form of tracking or controller connection thread or something along those lines to stall or crash.
Sometimes it can come back by simply waiting (too long), but this is very rare. It’s not consistent; sometimes I’ve waited a full 5 minutes to see if waiting is viable. It’s not.
Restarting the controller fixes it 9/10 of times.
That last 1/10 times requires a full system restart.
ArucoLength, the positional prediction is less accurate than the length estimation. Any idea why that might be? ArucoLength is measured in meters, right? And it is measured from the marker’s left-most black edge to its right-most black edge?