
The Alignment Problem: How to Position Cameras for Augmented Reality


To scale or not to scale? When it comes to augmented reality, the right camera alignment and scale are absolutely essential to bridging the gap between real and virtual worlds. And as developers are already experimenting with image passthrough and hybrid reality resources like Image Hands, this is more important than ever.

Based on our ongoing research and testing, we’ve updated our VR Best Practices to recommend one-to-one alignment between the real-world and virtual cameras, and created a demo to let you experience the difference. Here’s why.

The Challenge: IPD vs. ICD

Your brain builds its sense of depth and scale from the distance between your pupils – the interpupillary distance (IPD). The average human IPD ranges from 54 to 68 mm, while our device’s inter-camera distance (ICD) is 40 mm.

As a result, the only way to provide images of the physical world that correspond with perceived scale is to capture those images with a camera separation that matches your own. While modules like Dragonfly will have a 64 mm ICD to match an average human IPD, what’s the best approach for today’s hardware?
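For a rough sense of the numbers: matching a 64 mm average IPD against the 40 mm ICD implies a scale factor of 64 / 40 = 1.6, the figure that comes up in the rescaling approach below. The constants in this sketch are simply the values quoted above; the class name is ours, not part of any SDK.

```csharp
// Back-of-the-envelope numbers only. 64 mm is a commonly quoted average IPD;
// 40 mm is the device's inter-camera distance (ICD) mentioned above.
public static class AlignmentMath
{
    public const float AverageIpdMm = 64f;
    public const float DeviceIcdMm  = 40f;

    // If you tried to fix the mismatch by rescaling the virtual world instead
    // of moving the cameras, this is the factor you would need (the 1.6 below).
    public static float WorldScaleFactor => AverageIpdMm / DeviceIcdMm; // = 1.6
}
```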

To let you see how this works, we’ve created a simple demo that compares camera alignment vs. player/world rescaling. Launch the demo and hit “Enter” to move through the scenes and see how each approach affects the behavior of virtual objects and the image passthrough. (The first scene you’ll see is the correct one, but it’s more fun to skip it for now!)

✗ Can we reduce the 3D scale of the physical world by changing the 2D images? This is not possible, since the perceived scale of the external world is determined by the distance between the viewpoints. Furthermore, rescaling the 2D images actually has the effect of zooming in, which would yield a mismatch in the viewing angle of the real and virtual scene.
✗ How about reducing the scale of the player relative to the virtual world? With this approach, we’d increase the scale of the virtual world by a factor of 1.6 (IPD/ICD), while still keeping the CFS and IPD the same. This might work, but only if the user’s point of view in the virtual world cannot move, or if all virtual objects are expected to move relative to the user. This is what we did with VR Intro to align the hand skeleton with the hand images. However, this approach fails when the objects do not move with the user’s head.
✓ What if we aligned the cameras? So far, we’ve seen that scaling doesn’t work. The final and simplest approach is to match the CFS with the ICD, which will have the effect of changing the perceived scale of the virtual world. Having made this change, when your head moves, the virtual objects remain aligned with physical ones. In effect, the user experiences a subtle shift in their perceived IPD and adapts to the change.
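To make the aligned-cameras approach concrete, here is a minimal Unity-style sketch. It is not the actual Unity Core Assets implementation (the component and field names are ours), but it shows the idea: offset each virtual eye camera by half the 40 mm ICD, so the virtual camera separation matches the physical one.

```csharp
using UnityEngine;

// Illustration only, not the Unity Core Assets code: keeps two virtual eye
// cameras separated by the device's physical inter-camera distance (ICD),
// so virtual and physical viewpoints stay in one-to-one alignment.
public class AlignedStereoRig : MonoBehaviour
{
    [Tooltip("Physical inter-camera distance of the device, in meters.")]
    public float interCameraDistance = 0.040f; // 40 mm

    public Transform leftEyeCamera;   // assign the left-eye virtual camera
    public Transform rightEyeCamera;  // assign the right-eye virtual camera

    void LateUpdate()
    {
        // Place each camera half the ICD to either side of the head transform.
        float half = interCameraDistance * 0.5f;
        leftEyeCamera.localPosition  = new Vector3(-half, 0f, 0f);
        rightEyeCamera.localPosition = new Vector3( half, 0f, 0f);
    }
}
```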

3D perspective is a tricky concept to communicate in 2D form, so be sure to try the demo and see for yourself.

Based on our user testing, we’ve discovered that aligning the cameras is the most seamless solution. Developers can build without worrying about relative scaling, while users’ brains adjust more readily to 1:1 alignment than to scaling changes.

With this insight, which is also reflected in our demo scenes in the Unity Core Assets, you can now build hybrid reality experiences that bring virtual objects and the real world into sync. We’ve seen some really incredible momentum from the community in this space over the last year, and as our platform continues to evolve, we’ll keep you updated about the latest resources and best practices.

Photo credit: jepoycamboy, Flickr


