Building a Low-Cost DIY AR Headset
A few years back, Lenovo released an AR headset for under $200; you can now find them for under $50 on eBay. The optics are pretty decent: an estimated >60 degree FoV and minimal ghosting. The tracking is not perfect, but for the price it is pretty impressive. The only issue for me was that there was no SDK for Unity, so I couldn't create my own games for it. After playing around with the headset for a bit, I learned about its 'bird bath' optical combiner, which is essentially a spherical mirror in line with the viewing axis. This makes the distortion fairly easy to compute, so I decided to try creating my own SDK for the Lenovo headset. Fortunately, I didn't need to start from scratch, since an open source project called HoloKit does a lot of the work. A few weeks of tinkering later, I had all the right parameters and was able to get pretty close to pixel-accurate stereo vision on my cheap AR headset.
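The core of the per-eye correction is a radial pre-distortion: the spherical mirror adds pincushion distortion, so you render with the inverse (barrel) warp to cancel it. A minimal sketch of the idea in Python, with made-up coefficients (the real values have to be measured or fitted for the actual optics, which is what HoloKit's parameters encode):

```python
import numpy as np

def predistort(uv, k1=-0.15, k2=0.02, center=(0.5, 0.5)):
    """Apply a radial (barrel) pre-distortion to normalized screen UVs.

    The spherical mirror introduces pincushion distortion, so we render
    with the inverse (barrel) distortion to cancel it. k1 and k2 are
    illustrative coefficients only; real values come from the headset's
    optics.
    """
    uv = np.asarray(uv, dtype=float)
    d = uv - center                        # offset from optical center
    r2 = np.sum(d * d, axis=-1, keepdims=True)
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial polynomial
    return center + d * scale

# The optical center maps to itself; points further out are pulled inward.
print(predistort([0.5, 0.5]))   # -> [0.5 0.5]
```

In practice this warp is applied per eye in a shader or baked into a distortion mesh, so it costs almost nothing per frame.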
I still needed tracking, though, and the built-in cameras wouldn't work since they required a proprietary driver. I experimented with streaming an HTC Vive tracker's pose over the network and got that working okay. The main issue is that motion-to-photon latency over a network is pretty poor, over 60ms. To achieve HoloLens-style tracking, I would need to get this under 20ms. I began experimenting with pose prediction and with modifying the render pipeline, similar to the way Oculus does Asynchronous Time Warp. I was able to predict the pose 2-3 frames ahead, which meant an effective latency reduction of up to 16ms × 3 = 48ms. This was not bad, except that during quick head movements the pose prediction fails, objects seem to swim around in the air, and the sense of presence is lost.
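The simplest form of this prediction is a constant-velocity model: extrapolate position along the current velocity and integrate the quaternion by the current angular velocity. A sketch of that idea in Python (names and structure are mine for illustration, not from the Oculus SDK; it also shows why fast head motion breaks it, since velocity is assumed constant over the prediction window):

```python
import numpy as np

def predict_pose(pos, vel, quat, ang_vel, dt):
    """Extrapolate a head pose dt seconds ahead (constant-velocity model).

    pos, vel: 3-vectors (m, m/s). quat: orientation as [w, x, y, z].
    ang_vel: body-frame angular velocity in rad/s (3-vector).
    A 3-frame prediction at 60fps would use dt = 3/60 = 0.05 s.
    """
    new_pos = np.asarray(pos) + np.asarray(vel) * dt
    # First-order quaternion integration: q' = q + 0.5 * dt * (q ⊗ [0, ω]),
    # then renormalize to stay on the unit sphere.
    w, x, y, z = quat
    wx, wy, wz = ang_vel
    dq = 0.5 * dt * np.array([
        -x * wx - y * wy - z * wz,
         w * wx + y * wz - z * wy,
         w * wy - x * wz + z * wx,
         w * wz + x * wy - y * wx,
    ])
    new_quat = np.array(quat, dtype=float) + dq
    new_quat /= np.linalg.norm(new_quat)
    return new_pos, new_quat
```

When the head accelerates sharply, the measured velocity lags the true motion, the extrapolation overshoots or undershoots, and virtual objects appear to swim, exactly the failure mode described above.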
To do better, I would need to develop my own tracking solution. This is not as easy as it sounds. One reason Magic Leap and HoloLens tracking is as good as it is, is that they have multiple sensors and integrate the drivers at a very low level with the hardware, giving tracking a higher priority on the CPU than other processes. Since I would be building my tracking at the application level, I was already at a disadvantage. After pretty extensive research into various SLAM (Simultaneous Localization and Mapping) algorithms, I opted for visual odometry using OpenCV, which I was able to implement in Unity with a little work. This gives me 60fps tracking, which means roughly 16ms motion-to-photon latency before applying pose prediction. With pose prediction on top, the tracking is nearly perfect and I'm able to achieve nearly HoloLens-style tracking on a low-cost (<$100) headset!
What does the final result look like? I recorded a video through the lens, so you are seeing the actual motion-to-photon latency and not a video stream composited in software, which can sometimes hide latency.
Now that I had tracking solved, I wanted to add controllers. I happened to have access to HTC Vive controllers and trackers, so I began experimenting with those. The trick is that they need to share the same world coordinates as the headset, and for that I needed a calibration procedure. I settled on a 4-point calibration, which is better than 2-point; more points would be more accurate still, but also take more time to record. By recording the position and orientation of 4 points in both spaces, I can compute the transformation that converts the Vive's world coordinates into the headset's calibrated world coordinates. I found a library, Math.NET Numerics, which provides SVD (Singular Value Decomposition), the standard technique for solving for the transform parameters. With the world coordinates corrected, everything lines up perfectly: I can use the headset's VO-based head tracking together with the Vive trackers. The head tracking is extremely fast with no perceivable latency, and while controller latency is less critical, the controllers are very functional.
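The SVD solve here is the classic least-squares rigid-alignment (Kabsch) problem: given corresponding points in the two spaces, find the rotation and translation mapping one onto the other. The project does this in C# with Math.NET; the same math in Python/numpy, as a sketch rather than the actual project code:

```python
import numpy as np

def solve_calibration(src, dst):
    """Least-squares rigid fit via SVD (the Kabsch algorithm).

    src, dst: Nx3 arrays of corresponding points (N >= 3; here the
    4 recorded calibration points). Returns (R, t) such that
    dst ~= src @ R.T + t.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Once (R, t) is known, every Vive tracker pose is remapped through it before being handed to the renderer, so controllers and head tracking live in one consistent world space.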
If I get time, I'll package the project up and share it on GitHub or the Asset Store.