
Leap Motion Goes Mobile


Working in Silicon Valley, I’ve come to realize that everything that seems futuristic was actually on the drawing board a startling number of years in the past. Technology is like a flash in the darkness – long nights building towards an instant when suddenly everything is different. That’s why I think all of us work in technology, and why we at Leap Motion are always very much living in the future.

I come from the future now to tell you a bit about what’s just over the horizon. Today I’m going to be talking about the Leap Motion Mobile Platform. This is a combination of software and hardware we’ve made specifically for untethered, battery-powered virtual and augmented reality devices, many of which will run off the same types of processors already used in smartphones around the globe.

The challenges of building a tracking platform in this space have been immense. We needed to build a whole new Leap Motion sensor with higher performance and much lower power. We needed to make the most sophisticated hand tracking software in the world run at nearly 10 times the speed, all while making it smoother and more accurate than ever before.


At the same time, we wanted to address the biggest request from the VR community: field of view. While our PC peripheral remains unrivaled at 140×120 degrees, we’ve found that in virtual reality there is reason to go even further. So we’ve built our next-generation system with the absolute maximum field of view that a single sensor can support on a VR headset – 180×180 degrees.

Last, we’ve built a reference system on top of the Gear VR that we’re shipping to headset makers around the world to show how this all comes together into a single cohesive product. Starting this month, we’ll be demoing this system at major VR events with an enhanced version of our Interaction Engine and flagship Blocks demo.


This is the beginning of an important shift towards mobile and ubiquitous wearable displays that will eventually be as easy and casual to use as a pair of glasses. The ultimate result will be the merging of our digital and physical realities.



Designing Physical Interactions for Objects That Don’t Exist


Explorations in VR Design is a journey through the bleeding edge of VR design – from architecting a space, to designing groundbreaking interactions, to making users feel powerful.

Virtual reality is a world of specters, filled with sights and sounds that offer no physical resistance when you reach towards them. This means that any interactive design with hands in VR has to contend with a new version of one of the world’s oldest paradoxes. What happens when an unstoppable force (your hand) meets an imaginary digital object?

In traditional desktop and console gaming, human movement is often arbitrarily limited. Your character can encounter unswimmable water or run up against a wall, and you don’t think twice about it. In VR, however, human movement cannot be arbitrarily limited without a disastrous loss of immersion. The conventional boundaries of traditional game design no longer apply.

In some cases, like walls or tables, we can address this challenge by having the hands simply phase through virtual matter. But that’s not why we want hands in VR. The user’s very presence within the scene creates the expectation that things can be interacted with. Touchless interaction in VR is an unprecedented design challenge that requires a human-centered solution.

This is the philosophy behind the Leap Motion Interaction Engine, which is built to handle low-level physics interactions and make them feel familiar. The Interaction Engine makes it possible for you to reach out and grab an object, and the object responds. When you close your hand around it, your fingers phase through the material, yet it still feels real. You can grab objects of a variety of shapes and textures, as well as multiple objects near each other that would otherwise be ambiguous.
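
To get a feel for why this is harder than it sounds, here is a minimal, naive grab-and-throw sketch in Unity C#. This is emphatically not how the Interaction Engine works internally – it simply carries the nearest rigidbody while a pinch is held and throws it on release. The palm, thumbTip, and indexTip transforms are hypothetical stand-ins for whatever your hand tracking rig provides.

```csharp
using UnityEngine;

// Naive grab-and-throw: NOT the Interaction Engine, just a sketch of the problem space.
// Assumes palm, thumbTip, and indexTip transforms are driven by your hand tracking rig.
public class NaiveGrabber : MonoBehaviour
{
    public Transform palm;
    public Transform thumbTip;
    public Transform indexTip;
    public float pinchThreshold = 0.03f;   // meters between fingertips that counts as a pinch
    public float grabRadius = 0.1f;        // how far from the palm we search for objects

    Rigidbody held;
    Vector3 lastPalmPosition;

    void Update()
    {
        bool pinching = Vector3.Distance(thumbTip.position, indexTip.position) < pinchThreshold;

        if (pinching && held == null)
        {
            // Grab the nearest rigidbody within reach of the palm.
            float best = float.MaxValue;
            foreach (Collider c in Physics.OverlapSphere(palm.position, grabRadius))
            {
                Rigidbody rb = c.attachedRigidbody;
                if (rb == null) continue;
                float d = Vector3.Distance(palm.position, rb.position);
                if (d < best) { best = d; held = rb; }
            }
            if (held != null) held.isKinematic = true;   // carry it; fingers will phase through
        }
        else if (!pinching && held != null)
        {
            // Release and throw with a rough estimate of the palm's velocity this frame.
            held.isKinematic = false;
            held.velocity = (palm.position - lastPalmPosition) / Mathf.Max(Time.deltaTime, 1e-5f);
            held = null;
        }

        if (held != null) held.position = palm.position;
        lastPalmPosition = palm.position;
    }
}
```

Even this toy version runs straight into the ambiguities the Interaction Engine is designed to resolve: which of several nearby objects did the user intend, when exactly does a closing hand become a grab, and how should the object behave while fingers pass through it?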

This kind of experience is deceptively simple. But when these basic physical interactions are done right, they can prove more powerful and compelling than explosive graphics or exquisite world design – because they can disrupt the player’s expectations and take them to surprising new places. In this exploration, we’ll look at the physical and interactive design of objects in VR. Many of these principles also apply to user interface design, which is handled in next week’s exploration.

Building Affordances

In the field of industrial design, “affordances” refers to the physical characteristics of an object that guide the user in using that object. These aspects are related to (but distinct from) the aspects indicating what functions will be performed by the object when operated by the user. Well-designed tools afford their intended operation and negatively afford improper use.

The classic example of an affordance is the handle on a teapot, which is designed to be easy to hold, and exists to prevent people from scorching their fingers. Conversely, the spout is not designed to appear grabbable. In computer interface design, an indentation around a button is an affordance that indicates that it can be moved independently of its surroundings by pressing. The color, shape, and label of a button advertise its function. (Learn more in our post What Do VR Interfaces and Teapots Have in Common?)

Solid affordances are critical in VR interactive design. They ensure that your users understand what they can do, and make it easier for you to anticipate how your demo will be used. The more specific the interaction, the more specific the affordance should appear. In other words, the right affordance can only be used in one way. This effectively “tricks” the user into making the right movements. In turn, this makes the object easier to use, and reduces the chance of error.


For example, a dimensional button indicates to the user that it should be pushed in (just like a real-world button). If the button moves inward as expected when pressed, the user knows they have successfully completed the interaction. If, however, the button does not move when pressed, the user will assume they haven’t completed the interaction.
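
A minimal sketch of that behavior in Unity C#: a button cap is pushed in along its axis by a fingertip, changes color, and plays a click once it crosses a press threshold. The fingertip transform, the travel distances, and the audio source are all assumptions; a production button would also need hover states and a check that the finger is actually over the cap.

```csharp
using UnityEngine;

// Sketch of a "dimensional" button: the cap is pushed in along the button's axis by a fingertip,
// changes color, and plays a click once it crosses the press threshold.
// Assumes `fingertip` is driven by your hand tracking rig and the cap is a child of the button base,
// with the base's forward axis pointing outward toward the user.
public class DepressibleButton : MonoBehaviour
{
    public Transform fingertip;
    public float travel = 0.012f;       // maximum depression in meters
    public float pressDepth = 0.008f;   // depth at which the press registers
    public Color idleColor = Color.white;
    public Color pressedColor = Color.cyan;
    public AudioSource clickSound;      // assumed to hold a short click clip

    Vector3 restLocal;
    Renderer cap;
    bool pressed;

    void Start()
    {
        restLocal = transform.localPosition;
        cap = GetComponent<Renderer>();
        cap.material.color = idleColor;
    }

    void Update()
    {
        Vector3 restWorld = transform.parent.TransformPoint(restLocal);
        Vector3 outward = transform.parent.forward;   // axis the cap pops out along

        // How far past the rest plane has the fingertip pushed? (A real button would also
        // check that the fingertip is laterally over the cap.)
        float depth = Mathf.Clamp(Vector3.Dot(restWorld - fingertip.position, outward), 0f, travel);
        transform.position = restWorld - outward * depth;

        if (!pressed && depth >= pressDepth)
        {
            pressed = true;
            cap.material.color = pressedColor;
            if (clickSound != null) clickSound.Play();    // the "success" cue
        }
        else if (pressed && depth < pressDepth * 0.5f)    // hysteresis so the state doesn't flicker
        {
            pressed = false;
            cap.material.color = idleColor;
        }
    }
}
```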

For inspiration, look for real-world affordances that you can reflect in your own projects. There are affordances everywhere around you, and these have often been appropriated by digital designers in making their creations easier to use. Below are a few examples.

Baseballs and Bowling Balls


In designing Weightless: Remastered, Martin Schubert found that everyone tends to grab basic shapes in different ways – from holding them lightly, to closing their fists through them. He took inspiration from baseballs and bowling balls to suggest how the projectiles in the Training Room should be held.

This led to a much higher rate of users grabbing by placing their fingers in the indents, making it much easier to successfully release the projectiles.

Window Blinds

To close some types of blinds, you reach up and pull them down over the window. In a similar way, the Notification Center on iOS can be summoned by dragging it down from the top of the screen. The similarity of these interactions suggests that the Notification Center is a temporary state that can be drawn out and retracted at will. This would also be a good foundation for a pinch-based interaction.

Doorknobs and Push Bars

Doorknobs fit comfortably in the palm of your hand, while push bars have wide surfaces made for pressing against. Even without thinking, you know to twist a doorknob. In both cases, the objects are firmly attached to the door, suggesting that the interaction does not involve removing them from the door.

Skateboard Prevention Measures

Ever seen small stubs along outdoor railings? These are nearly invisible to anyone who doesn’t want to grind down the rail – but skaters will find them irritating and go elsewhere. You might want to include negative affordances that guide users away from certain interactions.

Mouse Buttons vs. Touch Buttons

Mouse-activated buttons look small enough to be hit with your mouse, while touchscreen buttons are big enough to be hit with your finger. The size of an object or its constituent parts can suggest what kinds of physical interactions will work with it.

Everything Should Be Reactive

People enjoy playing with game physics, pushing the boundaries and seeing what’s possible. We start this as babies, playing with objects to gain an understanding of how real-world physics work. Physics interactions help you build a mental model of the universe and feel that you have agency within it.

With that in mind, every interactive object should respond to any casual movement. This is especially true with any object that the user might pick up and look at, since people love to pick up and play with tools and other objects. As Matthew Hales points out in Hands in VR: The Implications of Grasp, “Like a typical two-year-old, if [the player] can reach it, they will grab it, and when they grab it they are likely to inspect it closely…. Objects within the near zone trigger our innate desire to examine, manipulate, and assess for future utility.”

I Expect You To Die creator Jesse Schell reached a similar conclusion when he wrote on Gamasutra that “you are wiser to create a small game with rich object interactions than a big game with weak ones.” (See also Fast Co. Design’s piece Google’s 3 Rules For Designing Virtual Reality and the Daydream Labs talk at Google I/O 2016 for insights on designing delightful or impossible interactions for hands.)

When done effectively, people will find themselves anticipating tactile experiences as they interact with a scene. However, if an object appears intangible, people have no mental model for it, and will be less able to interact with it reliably.

Taken a step further, you might want to make certain elements of the experience responsive to the user’s gaze. This reduces visual clutter and reinforces which elements of the scene are fully interactive. (We did this with the sci-fi menus in our VR Cockpit demo. See also Jonathan Ravasz’s post Design Practices in Virtual Reality.)
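
As a rough sketch of that kind of gaze responsiveness, assuming the headset drives Camera.main and the element’s material exposes a _Color property, an element can brighten only while it sits within a small angle of the user’s view direction:

```csharp
using UnityEngine;

// Sketch: highlight an element only while the user is looking roughly at it.
// Assumes the VR headset drives Camera.main and the material has a _Color property.
public class GazeHighlight : MonoBehaviour
{
    public float gazeAngle = 10f;        // degrees from view center that counts as "looked at"
    public Color idleColor = Color.gray;
    public Color gazedColor = Color.white;

    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        Transform head = Camera.main.transform;
        Vector3 toObject = (transform.position - head.position).normalized;
        bool gazed = Vector3.Angle(head.forward, toObject) < gazeAngle;

        // Smoothly fade between states so the highlight doesn't pop.
        Color target = gazed ? gazedColor : idleColor;
        rend.material.color = Color.Lerp(rend.material.color, target, 10f * Time.deltaTime);
    }
}
```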

Hand-Occluded Objects

In the real world, people routinely interact with objects that are obscured by their hands. Normally, this is achieved by using touch to provide feedback. In the absence of touch, here are some techniques that you can use:

  • Provide audio cues to indicate when an interaction is taking place.
  • Make the user’s hand semi-transparent when near UI elements (see the sketch after this list).
  • Make objects large enough to be seen around the user’s hand and fingers.
  • Avoid placing objects too high in the scene, as this forces users to raise their hands and block their view.
  • When designing hand interactions, consider the user’s perspective by looking at your own hands in VR.
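
Here is a minimal sketch of the hand-fade technique from the list above, assuming the hand is rendered with a transparent-capable material that exposes a _Color property, and that you can point the script at the hand’s renderer and the nearby UI panel:

```csharp
using UnityEngine;

// Sketch: fade the hand toward semi-transparency as it approaches a UI panel,
// so the user can still see the element they are about to touch.
// Assumes the hand material uses a transparent shader with a _Color property.
public class HandFadeNearUI : MonoBehaviour
{
    public Renderer handRenderer;
    public Transform uiPanel;
    public float fadeStartDistance = 0.25f;  // meters at which fading begins
    public float minAlpha = 0.4f;            // how transparent the hand gets at contact

    void Update()
    {
        float distance = Vector3.Distance(handRenderer.bounds.center, uiPanel.position);
        float t = Mathf.Clamp01(distance / fadeStartDistance);   // 0 at contact, 1 when far away
        float alpha = Mathf.Lerp(minAlpha, 1f, t);

        Color c = handRenderer.material.color;
        c.a = alpha;
        handRenderer.material.color = c;
    }
}
```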

Above all, it’s essential to dream big and look beyond. Unbounded by the laws of gravity, objects in VR can take any form we choose. Design cues can extend far beyond traditional game design, and into the physical world – retail spaces and magazines, origami and physical interfaces. This is industrial design on a digital scale, and it’s bigger than we ever imagined.

It’s time for designers to think beyond skeuomorphs and flat design, and towards essential cues inspired by how humans understand the world. Next week’s exploration dives into user interface design.


Beyond Flatland: User Interface Design for VR


Explorations in VR Design is a journey through the bleeding edge of VR design – from architecting a space, to designing groundbreaking interactions, to making users feel powerful.

In the novel Flatland, a two-dimensional shape’s entire life is disrupted when he encounters a creature from another dimension – a Sphere. The strange newcomer can drop in and out of reality at will. He sees flatland from an unprecedented vantage point. Adding a new dimension changes everything.

In much the same way, VR completely undermines the digital design philosophies that have been relentlessly flattened out over the past few decades. Early GUIs often relied heavily on skeuomorphic 3D elements, like buttons that appeared to compress when clicked. These faded away in favor of color state changes, reflecting a flat design aesthetic.

Many of those old skeuomorphs meant to represent three-dimensionality – the stark shadows, the compressible behaviors – are gaining new life in this new medium. For developers and designers just breaking into VR, the journey out of flatland will be disorienting but exciting.


Windows users in 1992 needed 3D effects on buttons to understand that they were meant to be pressed, just like buttons on other media like radios, televisions, and VCRs. In 2016, active and passive states in the OS are communicated entirely through color states – no more drop shadows. All major operating systems and the modern web are now built with a flat minimalist design language.


But this doesn’t mean that skeuomorphism is the answer – because the flat-skeuomorphic spectrum is just another form of flat thinking. Instead, VR design will converge on essential cues that communicate structure and relationships between different UI elements. “A minimal design in VR will be different from a minimal web or industrial design. It will incorporate the minimum set of cues that fully communicates the key aspects of the environment.”

A common visual language will emerge, much as it did in the early days of the web, and ultimately fade into the background. We won’t even have to think about it.

The interface in Quill by Oculus builds on the physical skeuomorphs of traditional PC design to create a familiar set of cues. As Road to VR’s Ben Lang writes, “the interface looks charmingly like something out of the early days of the first GUI operating systems, but what’s important is the fact that the interface takes known PC affordances and applies them easily and effectively in VR.”

UI Input Module

The design process behind the UI Input Module was driven by many of these insights. In turn, they continue to inform our other bleeding-edge internal projects. The UI Input Module provides a simplified interface for physically interacting with World Space Canvases in Unity’s UI System. This makes it possible for users to reach out and “touch” UI elements to interact with them.


Below is a quick analysis of each UI component included in the UI Input Module. In each case, sound plays a crucial role in the “feel” of the interface.

Button

Each button can easily be distinguished as interactive, with 3D effects such as drop shadows. The size and spacing of the buttons make triggering them easy. When your hand comes close to the interface, a circle appears that changes color as you approach. When you press the button, it compresses and bounces back, with a color state change suggesting that it’s now active. At the same time, a satisfying “click” sound signals that the interaction was a success.

Slider

Much like the button, the slider features a large, approachable design. Changing colors, shadows, sound effects, and a subtle cursor all continuously provide feedback on what the user is doing.

Scroll

With the scroller, users have the ability to move the content directly instead of attempting to target a small, mouse-style scrollbar (though they can if they want to). Naturally, the scrollbar within the widget indicates your position within the accessible content. Sound plays a role here as well.

Interactive Element Targeting

Appropriate scaling. Interactive elements should be scaled appropriately for the expected interaction (e.g. full hand or single finger). A single-finger target should be no smaller than 20 mm in real-world size, and preferably bigger. This ensures the user can accurately hit the target without accidentally triggering the targets next to it.

Limit unintended interactions. Depending on the nature of the interface, the first object of a group to be touched can momentarily lock out all others. Be sure to space out UI elements so that users don’t accidentally trigger multiple elements.

Limit hand interactivity. Make a single element of the hand able to interact with buttons and other UI elements – typically, the tip of the index finger. Conversely, other nearby elements within the scene should not be interactive.
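
One way to sketch the “first element touched locks out its neighbors” idea above is a tiny group manager that each element checks with before accepting a press. The class and method names here are hypothetical – this is not part of the UI Input Module:

```csharp
using UnityEngine;

// Sketch: while one element in a group is engaged, its neighbors ignore touches.
// Hypothetical helper, not part of the UI Input Module.
public class InteractionGroup : MonoBehaviour
{
    Component activeElement;   // whichever button/slider currently owns the group

    // A UI element calls this when a fingertip first touches it;
    // a false return means "ignore the touch, someone else is active".
    public bool TryBegin(Component element)
    {
        if (activeElement != null && activeElement != element) return false;
        activeElement = element;
        return true;
    }

    // Called when the fingertip leaves the element, releasing the group.
    public void End(Component element)
    {
        if (activeElement == element) activeElement = null;
    }
}
```

A button could call TryBegin when a fingertip first crosses its press threshold and End when the fingertip withdraws, which also naturally enforces the single-fingertip guideline above.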

Wearable Interfaces

Fixing the user interface in 3D space is a fast and easy way to create a quick, compelling user experience. Floating buttons and sliders are stable, reliable, and easy for users to understand. However, they can also feel obtrusive, especially when their use is limited.

At Leap Motion, we’ve been experimenting internally with a range of different interfaces that are part of the user. This “wearable device” can be locked to your hand, wrist, or arm, and revealed automatically or through a gesture.

(Interestingly, demos like The Lab, Job Simulator, and Fantastic Contraption use an internalization mechanic – grabbing and “consuming” something in the environment to trigger a change, such as teleporting to a new environment or exiting the game. This is just one of many ways to bring the user’s sense of self deeper into VR.)


A simple form of this interface can be seen in Blocks, which features a three-button menu that allows you to toggle between different shapes. It remains hidden unless your left palm is facing towards you.

These early experiments point towards wearable interfaces where the user always has instant access to notifications and statuses, such as the time of day. More powerful options may be unlocked through a trigger gesture, such as tapping a virtual wristwatch. By combining our Attachments Module and UI Input Module, it’s possible to build a wearable interface in just a few minutes.
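
The palm-facing check behind that kind of menu can be sketched in a few lines, assuming your hand tracking rig exposes a palm transform whose up vector points out of the palm (which axis points out of the palm varies between rigs, so verify it in yours):

```csharp
using UnityEngine;

// Sketch: show a wrist/palm menu only while the palm faces the user's head.
// Assumes `palm.up` points out of the palm; check which axis your hand rig actually uses.
public class PalmMenuToggle : MonoBehaviour
{
    public Transform palm;
    public GameObject menu;
    public float facingThreshold = 0.6f;   // 1 = palm aimed straight at the head

    void Update()
    {
        Vector3 toHead = (Camera.main.transform.position - palm.position).normalized;
        bool facingUser = Vector3.Dot(palm.up, toHead) > facingThreshold;
        if (menu.activeSelf != facingUser) menu.SetActive(facingUser);
    }
}
```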


Zach Kinstner’s Hover UI Kit is another approach to wearable interface design. With it you can quickly create a beautiful, customizable, dynamic UI. Tap buttons on your palm to summon or dismiss the menu, or go back. Select menu items beyond your hand to access and configure options.

The design features dynamic feedback and a finger cursor that continually suggests how the interface can be used, and what it’s currently doing. The Hover UI Kit is available from Zach Kinstner’s GitHub page. Try the basic menu demo from our gallery, or the new Force-Directed Graph to see how you could interact with data in VR.

Experimental UI

The UI Input Module also includes some experimental features that extend beyond the physical metaphor of direct interactions. One of these features is Projective Interaction Mode. By raising your hand, you can summon a cursor over a faraway menu, then interact with it using the pinch gesture. Another mode gives users telekinetic powers so they can interact with objects at a distance.


We describe these features as “experimental” because unlike the buttons and sliders that you can instantly reach out and press, it’s not always obvious to a new user how these more abstract modes work. Once the user understands the basic concept, the interactions tend to be smooth and fluid. But it’s the first step that’s the hardest. For this reason, we strongly encourage including tutorials, text cues, and other guides when developing with these modes.
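
To make the idea concrete, here is a rough sketch of a projective cursor – not the module’s actual implementation. A ray cast from the head through the palm places a cursor on distant geometry, and a pinch acts as the click; the palm, fingertip, and cursor transforms are assumptions supplied by your own rig:

```csharp
using UnityEngine;

// Sketch of a projective cursor: a ray from the head through the palm selects distant objects,
// and a pinch acts as the click. Illustrative only, not the UI Input Module's implementation.
public class ProjectiveCursor : MonoBehaviour
{
    public Transform palm;
    public Transform thumbTip;
    public Transform indexTip;
    public Transform cursor;              // a small marker object placed at the hit point
    public float pinchThreshold = 0.03f;  // fingertip distance that counts as a pinch
    public float maxRange = 10f;

    bool wasPinching;

    void Update()
    {
        Transform head = Camera.main.transform;
        Ray ray = new Ray(head.position, (palm.position - head.position).normalized);

        if (Physics.Raycast(ray, out RaycastHit hit, maxRange))
        {
            cursor.gameObject.SetActive(true);
            cursor.position = hit.point;

            bool pinching = Vector3.Distance(thumbTip.position, indexTip.position) < pinchThreshold;
            if (pinching && !wasPinching)
            {
                // "Click" whatever was hit; here we just send a message the target can listen for.
                hit.collider.SendMessage("OnProjectiveClick", SendMessageOptions.DontRequireReceiver);
            }
            wasPinching = pinching;
        }
        else
        {
            cursor.gameObject.SetActive(false);
        }
    }
}
```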

All design is a form of storytelling. To take your users out of flatland, you need the right narrative to drive their interactions and help them make sense of their new universe. Next week: Storytelling and Narrative in VR.


The Art of Storytelling and Narrative in VR


Explorations in VR Design is a journey through the bleeding edge of VR design – from architecting a space and designing groundbreaking interactions, to making users feel powerful.

Stories are how we make sense of the world. One of the most effective ways to draw a user into any experience you build is to provide a story. Just ask yourself three questions: (1) What is the user doing? (2) Why are they doing it? (3) And how will they discover what “matters,” i.e. what’s worth doing?

Much of the narrative behind an experience will be told in the first few seconds, as the user becomes accustomed to the scene. The world and sound design should all reinforce a core narrative – whether you want to give your user the powers of a god, make them feel tiny, or build an emotional connection with a lonely hedgehog.

Learning and Exploration

As we mentioned in Object Interaction Design, reaching out and playing with what we see is how we learn about the world. If there’s something immediately in front of the user, it will be the first thing they try to grab. From there, everything is a learning process.

Tutorials and instructions can offer essential guidance to first-time VR users. This builds and reinforces the core interaction loop and draws users into the world. Leap Motion’s Blocks, Weightless, and VR Cockpit all incorporate their tutorials into the narrative itself. We’ve already mentioned how this works with Blocks’ tutorial robot, so let’s look at the narrative development in Weightless and VR Cockpit.

In the case of Weightless, Martin Schubert started with the idea of weightless objects floating around in space. He then took it a step further by adding the core interaction — sorting space debris into two categories. Why? Because you are a commander aboard a salvaging space station. How do you know this? Because the environment sets the scene and the station’s AI guides you.


The demo starts with limited interaction. After you dash the letters WEIGHTLESS into space, you are transported to a darkened space station. While you’re rooted in one spot, the station AI talks you through the interactions and your mission. First you can press buttons, then the shutters open to reveal the world outside, then you can fly through the station.

Another example of how content developers can integrate tutorial guidance with story is VR Cockpit. The interfaces in this experience include many buttons that are reactive to user gaze.


To ensure that new users can navigate the experience, a helper robot trains new pilots based on what happens in real time.

Recognizing that helper NPCs can be irritating, we also made it possible for you to smack the robot across the room. (Don’t worry – his circuits are well-insulated.)

In all three cases, the tutorial characters make sense inside the narrative, while also driving it forward. They respond to your actions. Most significantly, they all communicate through voice – the most reliable way to learn new interactions and concepts.

Player Attention

Directing the viewer’s attention is a broad challenge in VR, where 360° of freedom can lead to navel-gazing while the drama unfolds elsewhere. World design, sound design, and a wide range of other cues will guide most users’ attention through the space. Gaze itself can be a powerful mechanic – driving the story forward or illuminating new possibilities.

Gaze is the key interaction in Sightline: The Chair. Look away, and the world changes.

The film technique of storyboarding can help in building compelling narratives that capture attention within the VR field of view. Vincent McCurley’s post Storyboarding in Virtual Reality covers several techniques informed by Mike Alger’s work on VR interface design. Many developers are also using Oculus Medium to rapidly prototype scenes before committing to more robust scene designs.

Character Presence

Hand presence brings you into VR in a fundamental way. The player’s hands might reinforce presence within the scene, enable basic controls, or drive the story by affecting objects and events. It all depends on what kind of story you want to tell. But since the player now has a physical presence within the scene, you need to identify what that means from a story perspective.

Storytelling in VR tends to take one of four forms, based on whether or not the player exists within the story (embodied character vs. omniscient ghost), and whether they have any impact on the story as it unfolds.


Although it’s one of the defining features of VR, presence needs to be reinforced and sustained by letting the user feel that their actions have an impact on the world. At the same time, too much freedom or impact can disrupt the story you’re trying to tell. Oculus Story Studio writes about navigating this tightrope in their post The Swayze Effect:

“The inclusion of Henry’s ‘look at’ behavior (Henry glancing over and locking eyes with you during emotional moments) wound up being one of the more popular aspects of the experience….

Despite this, there was still contention among team members of whether the ‘look at’ behavior was the right move to make. There was still a dissonance between presence and story: why is Henry so lonely if I’m sitting right here with him? For this reason, many members of the team still consider Henry a flawed narrative.”

For many people, Henry is the first character they ever met in VR. The user’s presence within Henry’s house is a little ambiguous, as the lonely hedgehog occasionally breaks the fourth wall.

Just as the position of the camera within traditional film frames your understanding of the scene, the user’s position within the scene can dramatically shift how they think about the story as it unfolds. Position and visual scope within the virtual space defines the role or “mask” you assume. It also defines how much narrative significance you assign to everything you see – from small objects, to characters, to the overall mood and lighting.

Whether your user is an explorer, an invited guest, a voyeur, or a carefully defined role within a story, the hand design can serve as an essential cue before the plot starts progressing. At the very least, it should not run counter to the narrative.

Interface Design

Just like the player character, VR user interfaces can be diegetic, existing within the game world and within the narrative, or non-diegetic, existing outside the game world. (This post on diegesis theory has a good overview of the topic in desktop game design.)

Your project may include either or both (e.g. classic first-person shooters often combined guns with HUDs). If they exist within the game, they should fit the world you’ve created. If they exist outside it, then your users will probably want to be able to dismiss or summon them at will.

Since hands are a part of ourselves, we also tend to think of them as distinct from the rest of the environment. For narrative purposes, you may want to preserve this distinction – with a wearable interface changing your own state, and an environmental interface changing the environmental state. (Alternatively, you can subvert this rule to serve your narrative!) This is the distinction we used in Blocks, where the hand interface changed the shapes that spawn from your hands…


…while a broad sweeping motion turns off gravity – a force that exists outside yourself and “in the world.”


Emotional Connections

Henry’s emotionally weighty glances captured the hearts of his audience. They also earned Oculus Story Studio a well-deserved Emmy.


Another way to design for emotional connection with characters is through hand interactions. In the real world, we use our hands to convey meaning, from a simple wave to an angry finger. New media designer Jeff Chang has experimented with therapeutic virtual animals that respond to users’ hand movements.


In Notice Me Senpai and Itadakimasu, his aim is for every motion and animation to elicit an emotional response. In the latter, the animals are highlighted by the user’s gaze and respond in different ways to different gestures.

Crafting the Journey

Earlier this week in an interview on the ResearchVR podcast, Astrid Kahmke pointed out that the fall of the fourth wall in VR makes users vulnerable in ways that are unprecedented. With VR, we are not empathizing with another character who acts as our deputy in the world – we are now the characters inhabiting the space.

As creators, we are responsible to our users in ways that can be therapeutic, awe-inspiring, or horrifying. We must also come to grips, Kahmke says, with a new approach to narrative. Storytelling has a beginning, middle, and end, but virtual reality involves a shift from time-based narration to spatial narration. From storytelling to world-building. Narrator to creator. Linear to nonlinear.

At the same time, we can lean into deeper patterns that we’re already familiar with – the monomyth that resonates everywhere in fiction. This familiar structure can ground your users in new places and situations, building on powerful examples including Star Wars, The Matrix, Lord of the Rings, and even games like Undertale. From Christopher Vogler’s The Writer’s Journey:

  1. The Ordinary World. The hero, uneasy, uncomfortable or unaware, is introduced sympathetically so the audience can identify with the situation or dilemma. The hero is shown against a background of environment, heredity, and personal history. Some kind of polarity in the hero’s life is pulling in different directions and causing stress.
  2. The Call to Adventure. Something shakes up the situation, either from external pressures or from something rising up from deep within, so the hero must face the beginnings of change. (According to Kahmke, people choosing to enter VR are naturally at this stage. VR is an adventure that takes us beyond the ordinary.)
  3. Refusal of the Call. The hero feels the fear of the unknown and tries to turn away from the adventure, however briefly. Alternately, another character may express the uncertainty and danger ahead.
  4. Meeting with the Mentor. The hero comes across a seasoned traveler of the worlds who gives him or her training, equipment, or advice that will help on the journey. Or the hero reaches within to a source of courage and wisdom.
  5. Crossing the Threshold. At the end of Act One, the hero commits to leaving the Ordinary World and entering a new region or condition with unfamiliar rules and values.
  6. Tests, Allies, and Enemies. The hero is tested and sorts out allegiances in the Special World.
  7. Approach. The hero and newfound allies prepare for the major challenge in the Special World.
  8. The Ordeal. Near the middle of the story, the hero enters a central space in the Special World and confronts death or faces his or her greatest fear. Out of the moment of death comes a new life.
  9. The Reward. The hero takes possession of the treasure won by facing death. There may be celebration, but there is also danger of losing the treasure again.
  10. The Road Back. About three-fourths of the way through the story, the hero is driven to complete the adventure, leaving the Special World to be sure the treasure is brought home. Often a chase scene signals the urgency and danger of the mission.
  11. The Resurrection. At the climax, the hero is severely tested once more on the threshold of home. He or she is purified by a last sacrifice, another moment of death and rebirth, but on a higher and more complete level. By the hero’s action, the polarities that were in conflict at the beginning are finally resolved.
  12. Return with the Elixir. The hero returns home or continues the journey, bearing some element of the treasure that has the power to transform the world as the hero has been transformed.

As a new medium of expression, VR has the power to unlock new kinds of narratives that put the audience in the midst of the action – creative artists, silent witnesses, new friends, explorers on the edge of a new reality. What story will you tell?

So far we’ve focused a lot on the worlds that our users will inhabit and how they relate to each other. But what about the users themselves and the new forms they inhabit in their new reality? Next week: Avatar Design.


From Orion to Mobile VR: Leap Motion in 2016


2016 was a landmark year for virtual reality, but 2017 will be nothing short of surreal. As we look to CES and beyond, it’s also a good time to look back. Here are the top 10 stories from our blog in 2016.


The future of #VR is mobile. This week @LeapMotion will be at @CES w/ their next-gen reference design



Technology is like a flash in the darkness – long nights building towards an instant when suddenly everything is different.


Leap Motion goes mobile

DEC. 5: The next generation of mobile VR headsets will feature new sensors with higher performance, lower power, and 180×180 degrees of tracking.

Our team will be at CES January 5-8 with our Leap Motion Mobile Platform reference design. You can join us on the showfloor or follow @LeapMotion to experience the future of VR/AR.
 

Orion: next-generation hand tracking for VR

FEB. 17: The rise of VR means that our dreams of interacting with digital content on a physical level are coming to life. But to make that happen, you need a more natural interface. You need the power and complexity of the human hand.


Interacting with digital content on a physical level starts with the human hand.


This is why we created the Orion software, which is built from the ground up to tackle the unique challenges of hand tracking for VR. The reaction from our community was incredible:
  • The Orion beta makes it feel like new hardware. The Blocks demo is mind-blowing. (Reddit)
  • Never before has the expression shut up and take my money felt so apt. (Inverse)
  • THE HYPE. IS. REAL. (Reddit)
  • Thank god I was alone at home because I was straight scream-giggling for fifteen minutes until there were tears in my eyes. (Reddit)
  • I spent an hour in this last night. An hour. Playing with blocks. It was amazing. (Imgur)

Five releases later, we continue to improve our hand tracking technology for the next generation of headsets.
 

Redesigning our Unity Core Assets

MARCH 2: To match the new capabilities of Leap Motion Orion with the performance demands of VR, we gave our Unity toolset an overhaul from the ground up.
 


How an indie #LeapMotion project became part of #UE4: http://bit.ly/1pSVeoQ


Leap Motion VR support directly integrated in Unreal Engine

MARCH 31: Originally an independent project, getnamo’s plugin got an Epic seal of approval with an official integration in Unreal Engine 4.11. Stay tuned for more updates in 2017.
 


How to autorig #LeapMotion #VR hand assets in less than two minutes: http://bit.ly/2irhaId


New hands in the Unity Core Assets

JUNE 1: The Hands Module adds a range of example hands to your Unity toolkit. Version 2.0 features an autorigging function so you can bring hand designs to life in two minutes or less.
 

Unity Module for user interface input

JUNE 11: Featuring buttons, toggles, sliders, and experimental interactions, the UI Input Module makes it simple to create tactile user interfaces in VR.
 

Introducing the Interaction Engine


By exploring grey areas between real-world and digital physics, we can build a more human experience.


AUG. 23: Game physics engines were never designed for human hands. But by exploring the grey areas between real-world and digital physics, we can build a more human experience. A virtual world where you can reach out and grab something – a block, a teapot, a planet – and simply pick it up.

However, the Interaction Engine unlocks more than just grabbing interactions. It also takes in the dynamic context of the objects your hands are near. Learn more in Martin Schubert’s post on building with the Interaction Engine.
 

Unity Core Assets 101: How to start building your VR project

SEPT. 2: This rapid overview of our Unity Core Assets and Modules covers everything from custom-designed hands and user interfaces to event triggers. Do you know what each Module can do?
 

VR prototyping for less than $100 with Leap Motion + VRidge

SEPT. 8: Breaking into VR development doesn’t need to break the bank. If you have a newer Android phone and a good gaming computer, it’s possible to prototype, test, and bring your hands into mobile VR.
 


The digital is taking substance in our reality. We are its artists, architects, and storytellers.


Explorations in VR Design

NOV. 17: Explorations in VR Design is a journey through the bleeding edge of VR design – from architecting a space and designing interfaces to making users feel powerful.

Explorations will continue through 2017 with sound design, avatars, locomotion, and more. See you in the new year!


Top 8 Questions from CES 2017

This week we hit the CES showfloor in Las Vegas with two missions: share our Mobile VR Platform with the world and play “spot Leap Motion in the wild.”

From our home base at the MCNEX booth, we’ve heard some great questions about our technology, roadmap, and vision for the future. Here are the 8 most frequently asked questions we’ve heard so far:

1. What is Leap Motion?

We make technology for VR/AR that tracks the movement of your hands and fingers. It features high accuracy, low processing power, and near-zero latency. All three are crucial for hands in VR.

2. Why hands, and not physical controllers?

We’re not philosophically opposed to using controllers for some types of games. But the magic and power of immersive VR comes from a sense of presence and full embodiment. A direct experience, unmediated by plastic.


#MobileVR and #AR shouldn’t involve carrying a pair of physical controllers with you everywhere.


Physical controllers are also poorly suited for mobile VR and AR. Who wants to carry two extra pieces of hardware everywhere – especially with no external sensors to track them?

3. What is the Mobile VR Platform?

A combination of software and hardware we’ve made specifically for untethered, battery-powered VR/AR devices. At its core is a new sensor capable of 180×180 degree hand tracking, and software that runs at nearly 10x the speed of our existing controller.


4. When will it be available?

We’re working with a variety of OEMs to embed our technology directly into their headsets. The timeline depends a lot on when their headsets come to market.

5. Why not just release a faceplate?

We hear this question a lot, and it deserves a full answer. Since we work directly with OEMs, it will take a little longer to get this technology out to the world than if we released a mobile faceplate. The wait is painful for us too, but we think ultimately worthwhile for the VR community as a whole.

We want to make magical experiences possible. That happens when our tech becomes an invisible part of the design. Releasing a new standalone device would run against that mission because it would keep us firmly rooted as an accessory rather than a fundamental input.


Embedded #LeapMotion tech means #MobileVR devs don’t need to worry about input fragmentation.


At the same time, input fragmentation is already a huge problem in VR. This is hard on developers, who are often forced to assume only a fraction of users own a particular input device. With an embedded input solution, developers can know that 100% of users have access to Leap Motion interaction.

With that in mind, we created a reference design for OEMs and VR industry events, so the world can get a sense of what a polished product with Leap Motion input looks and feels like.

6. Why mobile VR? Isn’t it seriously limited?


Every consumer technology hits the input barrier before it can truly transform the world.


Input is a fundamental existential question for virtual reality, and one of its most serious limitations. Every mass consumer technology, from personal computers to mobile phones, hits this barrier at some point. Making mobile VR feel natural and intuitive is the greatest step we can take towards taking it out of early adoption, and towards a transformative technology that can change how millions of people live and work.

It’s also true that there’s a lot of room for mobile VR to grow. Mobile processors have never needed to deal with the intense graphics requirements demanded by VR, and the major mobile VR SDKs continue to evolve. But we expect Moore’s law to hold true; as processing power increases and form factor decreases, we’ll continue to push the edge of what’s possible.

Ultimately we expect our technology will be embedded in a spectrum of form factors – ranging from mobile and all-in-one to PC and console. The Mobile VR Platform is aimed at those first two groups, mostly because they represent the next major wave of headset releases.

7. What’s happening with the Android SDK?


#MobileVR has a lot of room to grow, starting with how we interact with it.


Our focus right now is creating a high-quality product experience. The mobile and AIO VR/AR market is taking longer to develop and mature than many expected, so we’ve had to move alongside the major platforms and build towards a moving target.

For the mobile SDK, our intent is to have it take advantage of these radical new hardware capabilities. Behind the scenes, we’re continuing to build the tools and assets that developers will need once headsets become available. Anyone interested can subscribe for updates at developer.leapmotion.com/android.

8. Why just one sensor?

With today’s technology, multiple sensors would result in a bulky headset, increasing its complexity and power requirements. But sensors are getting smaller, cheaper, and more powerful every year. By the time we hit third-generation VR, your reality glasses will have constellations of tiny sensors with massively enhanced capabilities. And we can’t wait.

Did we miss any? Post your questions in the comments below! For more #CES2017 updates, follow us @LeapMotion. If you’re in Las Vegas this weekend, meet us at booth 36419, LVCC, South Hall 4.


CES 2017: Mixing Realities, Concept Cars, and the #VoiceFirst Revolution


“With our thoughts, we make the world.” –The Buddha

Technology has the power to open up new realities. CES 2017 was about making those realities more intelligent, interactive, and human – from imaginary worlds unfolding before our eyes to objects that can talk to you and each other.

Here’s a quick review of our CES journey and what it means for the months and years ahead.

One of our favorite games at CES was Spot the Leap Motion Controller. Our technology was everywhere, from virtual concept cars by Volkswagen and Hyundai to demos from headset manufacturers and content creators. But the best part was the excited reactions we saw from people trying our Mobile VR Platform for the first time. We also shared some insights on our roadmap and vision for our virtual future.

There was a lot going on at the show floor, but the biggest story from CES was a company that didn’t even have a presence there: Amazon. Their Alexa virtual assistant was a cornerstone in dozens of product experiences designed to make everything in our lives smarter and more responsive, from our homes to our cars.


Technology is becoming invisible, a ubiquitous part of our daily lives, baked into our environment.


The rise of voice recognition is part of a greater movement towards the disappearing interface.

Technology is becoming invisible, a ubiquitous part of our daily lives, baked into our environment. Tiny screens that demand our attention give way to objects and experiences that respond to our humanity. Our faces, our voices, and our hands all have a part in building bridges between human intention and technological power.

Above all, it’s crucial that as VR developers and designers we dare to dream big, take inspiration from the world around us, and create experiences that deliver on the true possibilities of our emergent technologies.

VR is more than glorified film – it’s a groundbreaking storytelling medium where the hero’s journey gives way to the interconnected storyworld, where individual agency and a mosaic of perspectives reign supreme.



Universities Drive VR/AR Research with Leap Motion



Infographic: Universities lead in #VR #AR research, number of headsets is exploding, and 6/10 use #LeapMotion.


Universities are the earliest adopters of virtual and augmented reality. Even in the third age of VR, when the technology is small, inexpensive, and powerful enough that millions of people can own it, universities are still at the bleeding edge of VR research and development.

A recent survey of 553 universities in the VR First Network gives us an exciting look at what’s happening in the space, and what technologies the next generation of graduates are using right now.

Virtual reality was born in universities and survived through multiple waves of innovation, failure, and rebirth.

Highlights: VR/AR in Universities

  • The number of VR headsets at universities has quadrupled in the last 6 months.
  • 71% of universities own augmented reality headsets. 43% of those are the Microsoft HoloLens.
  • There are roughly as many Rift DK1 and DK2 units combined floating around as HTC Vives.
  • 59% of universities have Leap Motion Controllers. That’s more than 5 times as many as any other input device!
  • Gaming and education dominate. However, 25% of universities have projects in psychology, history, healthcare, or cinematic experiences.
  • NVIDIA and Intel dominate GPUs and CPUs, respectively.


Infographic from VR First about universities researching virtual and augmented reality.
Image credit: VR First



Charting the Course for Mobile VR


This week, we’re excited to share our latest milestone on the road to truly compelling mobile VR. We’ve joined forces with Qualcomm Technologies to combine their Snapdragon 835 mobile platform with our embedded hand tracking technology so that people can interact with mobile VR content using their bare hands.


Your hands should be free to reach directly into  #MobileVR. @LeapMotion technology makes that possible #GDC17


In a nutshell, this means that developers will soon be able to tap into an incredible software and hardware ecosystem. It’s also the latest step towards a single input standard for the next generation of mobile VR. Next week, we’ll demo our Mobile VR Platform alongside Qualcomm in brand-new reference designs at GDC, VRDC, and MWC in San Francisco and Barcelona.

In just the last year, the pace of change in mobile VR has been incredible. More powerful processors and sensors make it easier for HMD manufacturers to bring their headsets to the world, while new features like position and hand tracking open up possibilities for developers and users. As UploadVR’s Anshel Sag observed yesterday, the interesting thing about the mobile industry is that “they can iterate so much more quickly than their PC counterparts that we are seeing mobile HMD feature sets leapfrog PC.”

We believe that technology works best when it’s both invisible and ubiquitous. It should feel like part of the world around you – a world that you can reach out and touch. To make this possible, we’re building the fundamental platform that developers, game studios, and OEMs need to make these new worlds truly interactive. All embedded directly within the headset, so it feels like magic.


Technology works best when it’s both invisible and ubiquitous #FreeYourHands #MobileVR


From the Mobile VR Platform that can be embedded into any headset, to our Interaction Engine that makes virtual worlds react in human ways, we’re continuing to push the boundaries of what’s possible in VR/AR. This week’s announcement is just a taste of what we have in development, and we can’t wait to share it with you.

To experience this technology first-hand, join us next week at GDC in San Francisco (booth #2024 in the Moscone Center South Hall) or at Mobile World Congress (MWC) in Barcelona (Qualcomm booth #3E10, Hall 3, Fira Gran Via). You can also follow us on Twitter @LeapMotion or subscribe to our newsletter for the latest mobile VR news.


Discovering the Secrets of the Universe with VR + Embodied Learning


From the moment we’re birthed, we begin to move. We move by virtue of our legs, our bodies, the motion of our arms and hands, the intricate articulation of all of our fingers. Each of these movements correlates with synaptic connections in the brain and provides the rich landscape required for exponential neuronal growth. We as a species are constantly engaged in the process of using our bodies to interface with this world.

Why then in school, beginning in our first years of elementary school, do we allow this integral part of our development to get shut down? Why don’t we leverage the full potential of the human body to engage in discovering, experiencing, and embodying knowledge?

What does moving and gesturing – with ourselves, with the environment, and with other people – do for the learning process? This is a fascinating question. What we’ve seen in the entertainment industry since the advent of the Wii is a wave of breakthrough motion technologies that interface with the movements of the body and engage the player at a whole other level of interaction.

This potential is what lies behind our recent work with GeoMoto and our vision for educational VR. As developers, we must ask how we can leverage motion interactions to actually understand the world around us in a more profound, more immediate, more personal, and more embodied way.

Learning through motion

At GameDesk, we’ve embarked on a variety of different experiments in kinesthetic learning. We’ve even experimented with low-tech interactions where students scatter into an open field and become cosmic dust, simulating how elements in the cosmos accrete to form objects in space and, eventually, planets.

Accretion is a difficult concept to get your head around. Space-time from a human perspective is very slow; simulations, however, allow us to speed up time so that the mind can truly capture and attune to what is actually happening (as best as we can simulate it). When the body is actually moving within the simulation, you become a piece of the story, playing out a larger narrative within this scientific concept.

In this scenario, students aren’t just attuning to the concept through their own motion, but through their motion in relation to other kids moving in the space. They’re having a collective experience because the accretion of particles is a collective experience. All those particles in space interact in a symphony of motion. As people, we can do the same.


Education and motion technology

When it comes to using motion technologies, we’ve worked with and developed a lot of systems. We even designed a mechanical wing apparatus that actuates full bird wing rotation and flapping. This system helps the learner embody through motion what it feels like to become a bird.

Imagine I was to say to you, “Hey, a bird flies by working with all the different forces that act on its body, and those forces are lift, drag, thrust, and gravity. As the bird rotates its wings up and down, it increases or decreases lift by interacting with the air molecules around it.” I’m sitting here explaining this all to you, but you’re not experiencing it. In a game where we leverage embodiment and motion, a student can stretch out their arms and feel what it’s like to be a bird. They feel the air molecules change when they rotate their arms, and they recognize the cause-and-effect relationship between their movement and the molecular environment around them.

When you combine simulation, gameplay, and movement connected to really core concepts, kids get it really fast. Way faster than looking at a whiteboard or a static photo trying to represent a three-dimensional concept. They are in it. They are experiencing it. We see kids within 15 minutes fully articulating an experiential knowledge of forces in motion and bird flight.


Hand controls and plate tectonics

When we decided to work with the Leap Motion Controller, we really wanted to see how hand gestures could help kids understand plate tectonics. Lots of plate tectonic phenomena relate to movement – plates move toward each other, they move apart from each other, they move across each other. With that in mind, we created GeoMoto.

GeoMoto was a real proof of concept to see how quickly kids could remember all these geological concepts through the movement of their hands. What we found with that experience (and the earlier planetary accretion experiment) was that student improvements from pre-test to post-test ranged from an average of 5% all the way up to an average of 25%. For a more detailed breakdown of the improvements, check out our whitepaper Learning Geoscience Concepts through Play & Kinesthetic Tracking.


Where is all this heading?

I believe the blending of the emerging VR space with motion technologies is going to be huge. The VR ship is coming in. Devices are getting pushed out. The experience of VR will be normalized in the next few years. A mixture of motion technologies and VR experiences is going to allow people to completely immerse themselves in complex concepts.

One of the things I really want to tackle is taking the hard sciences and making them intuitive and understandable through these technologies. When you get to 5th and 6th grade, math starts getting abstract, and science starts getting hard to see. This is a critical time, and this is a time when you start losing kids. These emerging technologies are going to be critical for kids that are visually oriented and kinesthetically oriented.

For example, how can we get them into the microverse, inside the roots of a plant? One may argue that there is no reason to work with a virtual plant when you have a real plant in front of you. However, with VR and motion technologies, we can get inside of the physical plant at levels of magnification and time dilations that would be impossible with a physical plant.

The real question to ask becomes, “what can the real or traditional structure not do?” Wherever that gap exists, that’s where these approaches step in. I envision a world where there is nothing you can’t see, visualize, move through, and physically experience through virtual motion-based technologies in the realm of education and information. A massive digital library of immersive and experiential modules that allow the learner to explore complex concepts. It’s the holodeck learning academy delivered in a way that even Star Trek never envisioned.


Space and Perspective in VR


Creating a sense of space is one of the most powerful tools in a VR developer’s arsenal. In our Exploration on World Design, we looked at how to create moods and experiences through imaginary environments. In this Exploration, we’ll cover some key spatial relationships in VR, and how you can build on human expectations to create a sense of depth and distance.

Controller Position and Rotation


To bring Leap Motion tracking into a VR experience, you’ll need a virtual controller within the scene attached to your VR headset. Our Unity Core Assets and the Leap Motion Unreal Engine 4 plugin both handle position and scale out-of-the-box for the Oculus Rift and HTC Vive.

For a virtual reality project, the virtual controller should be placed between the “eyes” of the user within the scene, which are represented by a pair of cameras. From there, a further offset in Z space is necessary, to compensate for the fact that the controller is mounted on the outside of the headset.

This offset varies depending on the headset design. For the Oculus Rift, this offset is 80mm, while on the HTC Vive it’s closer to 90mm. Our Mobile VR Platform is designed to be embedded directly within headsets, and can be even closer to the user’s eyes.
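If you’re wiring this up by hand instead of using the prefabs, the setup boils down to parenting an anchor transform under the head camera and pushing it forward by the mounting offset. Here’s a minimal, hypothetical Unity sketch – the Core Assets already handle this for you, and the class and field names below are ours, not part of the SDK:

using UnityEngine;

// Hypothetical sketch: applies a forward offset to a child "controller anchor"
// so the virtual sensor origin sits where the physical module is mounted.
// The 80mm / 90mm values follow the guidance above; tune for your headset.
public class HeadMountOffset : MonoBehaviour
{
    [Tooltip("Forward offset from the eye cameras to the sensor, in meters.")]
    public float forwardOffsetMeters = 0.08f; // ~80mm for a Rift-style mount

    public Transform controllerAnchor; // child of the head camera transform

    void LateUpdate()
    {
        if (controllerAnchor == null) return;
        // Push the anchor forward along the head's local facing direction.
        controllerAnchor.localPosition = Vector3.forward * forwardOffsetMeters;
        controllerAnchor.localRotation = Quaternion.identity;
    }
}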

Leap Motion Mobile VR Platform

Depth Cues


Whether on a standard monitor or in a VR headset, the depth of nearby objects can be difficult to judge. This is because in the real world your eyes dynamically assess the depth of nearby objects – flexing and changing their lenses depending on how near or far the object is in space. With headsets like the Oculus Rift and HTC Vive, however, the user’s eyes remain focused at a fixed distance set by the optics, no matter how near a virtual object appears to be.

To create and reinforce a sense of depth, you can use the same 3D cinematic tricks as filmmakers:

  • objects in the distance lose contrast
  • distant objects appear fuzzy and blue/gray (or transparent)
  • nearby objects appear sharp, with full color and contrast
  • shadows cast from the hand onto nearby objects, especially drop-shadows
  • reflections of nearby objects on the hand
  • sound that reinforces a sense of depth
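As one hedged illustration of the “distant objects appear fuzzy and blue/gray” cue above, a small script can tint a material toward an atmosphere color based on its distance from the camera. (Unity’s built-in fog achieves a similar effect globally; everything named below is illustrative rather than part of our assets.)

using UnityEngine;

// Illustrative sketch: lerps an object's material color toward an "atmosphere"
// tint as it gets farther from the camera, approximating aerial perspective.
[RequireComponent(typeof(Renderer))]
public class DistanceTint : MonoBehaviour
{
    public Color nearColor = Color.white;
    public Color farColor = new Color(0.6f, 0.7f, 0.8f); // hazy blue/gray
    public float nearDistance = 2f;  // full contrast inside this range (meters)
    public float farDistance = 30f;  // fully hazy beyond this range

    private Material _material;

    void Start()
    {
        // Instantiates a per-object material copy so we don't edit the shared asset.
        _material = GetComponent<Renderer>().material;
    }

    void Update()
    {
        // Assumes the scene camera is tagged MainCamera.
        float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
        float t = Mathf.InverseLerp(nearDistance, farDistance, distance);
        _material.color = Color.Lerp(nearColor, farColor, t);
    }
}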

Rendering Distance

While the distance at which objects should be rendered for extended viewing will depend on the optics of the VR headset being used, it typically lies significantly beyond comfortable reaching distance. Elements that don’t require extended viewing, like quick menus, may be rendered closer without causing eyestrain. You can also play with making interactive objects appear within reach, or responsive to reach at a distance (e.g. see Projective Interaction Mode in the User Interface Design Exploration.)

Virtual Safety Goggles

The goggles! They do nothing!

As human beings, we’ve evolved very strong fear responses to protect ourselves from objects flying at our eyes. Along with rendering interactive objects no closer than the minimum recommended distance, you should ensure that moving objects never get too close to the viewer’s eyes. One way to do this is to give the user a pair of invisible “virtual safety goggles” – a shield around the head that pushes all moveable objects away from the user’s face.
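A sketch of one possible implementation, assuming a simple Unity physics setup: a trigger volume parented to the camera that nudges rigidbodies away before they reach the viewer’s face. The class and parameter names are hypothetical.

using UnityEngine;

// Hypothetical "virtual safety goggles": a trigger volume parented to the camera
// that gently pushes rigidbodies away before they reach the viewer's eyes.
[RequireComponent(typeof(SphereCollider))]
public class SafetyGoggles : MonoBehaviour
{
    public float pushStrength = 2f; // how hard objects are nudged away

    void Reset()
    {
        // Make sure the sphere acts as a trigger, not a solid collider.
        GetComponent<SphereCollider>().isTrigger = true;
    }

    void OnTriggerStay(Collider other)
    {
        Rigidbody body = other.attachedRigidbody;
        if (body == null || body.isKinematic) return;

        // Push the object outward, away from the center of the goggle volume.
        Vector3 away = (other.transform.position - transform.position).normalized;
        body.AddForce(away * pushStrength, ForceMode.Acceleration);
    }
}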

Multiple Frames Of Reference

Within any game engine, interactive elements are typically stationary with respect to some frame of reference. Depending on the context, you have many different options:

World frame of reference. The object is stationary in the world. This is often the best way to position objects, because they exist independently of where you’re viewing them from or which direction you’re looking. This allows an object’s physical interactions, including its velocity and acceleration, to be computed much more easily.

Body frame of reference. The object moves with the user, but does not follow head or hands. It’s important to note that head tracking moves the user’s head, but not the user’s virtual body. For this reason, an object mapped to the body frame of reference will not follow the head movement, and may disappear from the field of view when the user turns around.

Head frame of reference. The object maintains position in the user’s field of view. (For example, imagine a classic HUD.)

Hand frame of reference. The object is held in the user’s hand.

Objects in Blocks pass from the hand frame of reference into the world.

Keep these frames of reference in mind when planning how the scene, different interactive elements, and the user’s hands all relate to each other.
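In Unity terms, switching frames of reference usually comes down to choosing – or changing – an object’s parent transform. Here’s a hedged sketch; the helper and the example transforms are ours, not part of the Unity Core Assets:

using UnityEngine;

// Illustrative helper for moving an object between frames of reference.
// The "frame" argument is assumed to be a transform in your rig (head, body,
// or hand anchor); passing null re-parents the object into the world frame.
public static class FrameOfReference
{
    public static void AttachTo(Transform obj, Transform frame)
    {
        // worldPositionStays = true keeps the object where it currently is in
        // world space at the moment of the switch, so nothing visibly pops.
        obj.SetParent(frame, worldPositionStays: true);
    }
}

// Example usage: when an object is released from the hand, hand it back to the
// world frame so physics (velocity, gravity) can take over cleanly:
//   FrameOfReference.AttachTo(blockTransform, null);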

Parallax, Lighting, and Texture

Real-world perceptual cues are always useful in helping the user orient themselves and navigate their environment. Lighting, shadows, texture, parallax (the way objects appear to move in relation to each other when the user moves), and other visual features are crucial in conveying depth and space.

Of course, you also have to balance this against performance! Always experiment with less resource-intensive solutions, and be willing to sacrifice elements that you love if it proves necessary. This is especially true if you’re aiming for a mobile VR release down the road.

So far we’ve focused a lot on the worlds that our users will inhabit and how they relate to each other. But what about how they move through those worlds? Next week: Locomotion.

The post Space and Perspective in VR appeared first on Leap Motion Blog.

6 Hidden Gems from our Dev Gallery

With such a strong global community, we see a stream of creators building and sharing new content all the time. This week, we wanted to showcase some community projects that you may not have seen yet.

Betatron

Betatron is a 3D immersive puzzle game played by moving beta particles through the environment. Through the use of various devices like lasers, shrink rays, and additional particle systems, each puzzle gets more and more complex. With 360 levels to play, can you beat them all?

Requires: Mac or Windows, Oculus Rift
From: United States

Archena Ancient Baths

Explore a beautiful and detailed recreation of an ancient Roman bath. Archena allows the user to enter a fully interactive 3D environment, where you can reach out and touch the water or explore the surrounding passageways.

Requires: Windows, Oculus Rift
From: Spain

3D Night Vision

Throw out your old-school blue/red 3D glasses with 3D Night Vision. The 3D effect is created by using the Image API to apply separate color filters to each of the controller’s camera feeds.

Requires: Windows
From: Netherlands

Fairy Viewer

Fairies live all around us, but only in this mixed-reality experience will you be able to see them. If you’re lucky, a fairy will land on the palm of your hand.

Requires: Mac or Windows, Oculus Rift
From: United States

TOUCH with WebGL & Leap Motion

In this meditative experience, WebGL, ThreeJS, and a post-processing tool are used to simulate the physical effects of interacting with falling sand.

Requires: Web browser
From: United States

WrenAR VR/UI (on the moon)

WrenAR is a playful VR UI demo with a lunar theme. Throw orbs, play with rays, and type on a VR keyboard as you float across the dark side of the moon.

Requires: Windows, Oculus Rift
From: United States

Have a project you’ve been working on? Submit it here or send an email to developers@leapmotion.com and let us know more about it.

The post 6 Hidden Gems from our Dev Gallery appeared first on Leap Motion Blog.

Sunsetting the Leap Motion App Store

With hands in mobile VR just around the corner, we’ve been working hard to redefine our core user experience. We’ve also needed to make some clear-cut decisions about how people access Leap Motion content.

We launched the Leap Motion App Store in 2013 because it represented a single destination for everyone who wanted to access the latest and greatest content. This made sense at the time because the Leap Motion Controller was a standalone product. Get a controller, download the software, find apps.

But with mobile VR, we’re set to become a fundamental component of a larger experience. OEMs who embed our technology have their own visions for app distribution. At the same time, online distribution platforms like Steam have a large (and growing) collection of VR apps. And in the last two years, the Leap Motion Gallery has expanded as a place to find desktop and VR content. For these reasons, an app store no longer makes sense.

With that in mind, we’ve decided to retire the App Store on June 30, 2017 – and to expand the Leap Motion Gallery as a central showcase for Leap Motion content. We will automatically issue App Store credit refunds to everyone who has used their Leap Motion wallet since January 1, 2017.

While we’re excited for the era of truly compelling mobile VR, today’s decision wasn’t made lightly. The FAQ below should cover most questions you may have. We’re also hosting a Q&A on our community forum.

Questions from Users

What happens to my App Home apps?

When the Store closes at 12pm PT on June 30th, you will no longer be able to download new apps. The apps that you already have in App Home will continue to be available indefinitely, but cannot be moved to a new computer.

Can I get a refund on my App Store credits?

Anyone who purchased credits on or after 12am PT on January 1, 2017 will have their credits automatically refunded to the payment method they used. Anyone who purchased credits before April 2017 and would like a refund should contact us directly at support@leapmotion.com (as their payment method may have expired).

Will you still support the Leap Motion Controller?

Yes. We have no plans to end support for the Leap Motion Controller. The magic of our technology has always been in its software – and even after nearly four years, we’re still exploring the hidden depths of the hardware.

Questions from Developers

What happens to my App Store apps?

When the Store closes on June 30th, your apps will no longer be available to download from the Leap Motion App Store. Users will continue to have access through local installs of App Home, but will not be able to receive updates.

When will final app payments be processed?

We will send final payments on all outstanding app balances on June 30.

Can I publish elsewhere?

Absolutely! We strongly encourage you to publish on your distribution platform of choice (such as Steam, itch.io, or Green Man Gaming). Leap Motion will also provide promotional support, including featured spots on the Leap Motion Gallery and our newsletter. Email community@leapmotion.com for details.

Did we miss any? Let us know in the comments!

The post Sunsetting the Leap Motion App Store appeared first on Leap Motion Blog.

Interaction Engine 1.0: Object Interactions, UI Toolkit, Handheld Controller Support, and More

As humans, we are spatial, physical thinkers. From birth we grow to understand the objects around us by the rules that govern how they move, and how we move them. These rules are so fundamental that we design our digital realities to reflect human expectations about how things work in the real world.

At Leap Motion, our mission is to empower people to interact seamlessly with the digital landscape. This starts with tracking hands and fingers with such speed and precision that the barrier between the digital and physical worlds begins to blur. But hand tracking alone isn’t enough to capture human intention. In the digital world there are no physical constraints. We make the rules. So we asked ourselves: How should virtual objects feel and behave?

We’ve thought deeply about this question, and in the process we’ve created new paradigms for digital-physical interaction. Last year, we released an early access beta of the Leap Motion Interaction Engine, a layer that exists between the Unity game engine and real-world hand physics. Since then, we’ve worked hard to make the Interaction Engine simpler to use – tuning how interactions feel and behave, and creating new tools to make it performant on mobile processors.

Today, we’re excited to release a major upgrade to this tool kit. It contains an update to the engine’s fundamental physics functionality and makes it easy to create the physical user experiences that work best in VR. Because we see the power in extending VR and AR interaction across both hands and tools, we’ve also made it work seamlessly with hands and PC handheld controllers. We’ve heard from many developers about the challenge of supporting multiple inputs, so this feature makes it easier to support hand tracking alongside the Oculus Touch or Vive controllers.

Let’s take a deeper look at some of the new features and functions in the Interaction Engine.

Contact, Grasp, Hover

The fundamental purpose of the Interaction Engine is to handle interactions with digital objects. Some of these are straightforward, others more complex. For example, consider:

  • Contact: What happens when a user passes their hand through an object?
  • Grasping: What does it mean to naturally grab and release a virtual object?
  • Hover: How can I be sure that the object I’m contacting is what I actually want to interact with?

We want users to have consistent experiences in these cases across applications. And we want you as a developer to be able to focus on the content and experience, rather than getting lost in the weeds creating grabbing heuristics.
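For a rough idea of what this looks like from the developer’s side, the Interaction Engine exposes these cases as events on each interactive object (the component is spelled InteractionBehaviour in the Unity Modules). The sketch below assumes Action-style OnHoverBegin/OnGraspBegin events and a Renderer on the same GameObject – treat it as a hedged illustration and check the module version you have installed for the exact fields and signatures:

using UnityEngine;
using Leap.Unity.Interaction; // Interaction Engine namespace in the Unity Modules

// Hedged sketch: reacting to hover and grasp on an interactive object.
// Event names follow the terminology in this post; exact signatures may
// differ between Interaction Engine versions.
[RequireComponent(typeof(InteractionBehaviour))]
public class GlowOnInteraction : MonoBehaviour
{
    private InteractionBehaviour _interaction;
    private Material _material;

    void Awake()
    {
        _interaction = GetComponent<InteractionBehaviour>();
        _material = GetComponent<Renderer>().material; // assumes a Renderer here

        // Assumed Action-style events, as described above.
        _interaction.OnHoverBegin += () => _material.color = Color.cyan;
        _interaction.OnHoverEnd   += () => _material.color = Color.white;
        _interaction.OnGraspBegin += () => _material.color = Color.yellow;
        _interaction.OnGraspEnd   += () => _material.color = Color.white;
    }
}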

Physical User Interfaces

Users anticipate interacting physically with both objects and interfaces. That’s why we’ve built a powerful user interface module into the Interaction Engine so developers can customize and create reliable interfaces that are a delight to use. These are physically inspired, allowing users to understand the system on their first touch.

Widgets and Wearable Interfaces

In addition to traditional user interfaces, we’ve added support for more forward-looking user interfaces like wearables and widgets. For example, you now have the ability to create an interface that is worn on the hand, but expands into freestanding palettes as the user grabs an element off the hand and places it in the world.

Graphic Renderer

Alongside the Interaction Engine, we’re also releasing a beta version of an advanced new tool – the Graphic Renderer. As we push the boundaries of VR hardware, software, and design, we often develop internal tools that would be of great use to the broader VR community.

In building the Interaction Engine, we found it was important to render and interact with curved spaces for human-oriented user interfaces, and wanted to do it in a way that was performant even in very constrained environments. So we created the Graphic Renderer, a general-purpose tool that can curve an entire user interface with ease, and render it all in a single draw call. Designed to unlock new levels of performance in the upcoming generation of mobile and all-in-one headsets, it tightly pairs with the Interaction Engine to bring curved spaces to VR interaction.

Scene built with the Graphic Renderer and rendered in a single draw call.

We see the Interaction Engine and Graphic Renderer as fundamental tools for all of VR that enable you to create natural and compelling interactions in a reliable and performant package. Now it’s in your hands – we can’t wait to see what you build with it.

The post Interaction Engine 1.0: Object Interactions, UI Toolkit, Handheld Controller Support, and More appeared first on Leap Motion Blog.

Meet the Winners of 3D Jam 2.0

The votes are in! Based on community ratings and scores from the Leap Motion team, we’re excited to present the winners of the second annual 3D Jam.

This year’s 3D Jam raised the bar with over $75,000 in cash and prizes, and we’re impressed with the imagination and creativity of everyone who participated. With nearly 2,000 developers registered from over 90 countries, we saw close to 200 final submissions.

Our international 3D Jam Tour swept us to more than 20 cities in 6 countries – and with exciting updates in store for 2016, we can’t wait to see you all again! And now, here are this year’s winners:


AR/VR Track Winners

1st Place: Lyra VR


Lyra is an enormously powerful virtual playground that lets you create music in 3D space. Chain chords, melodies, and instruments together in complex webs, then watch and listen to it unfold.

Prize: $10,000, Unity Suite, 2 OSVR HDKs, NVIDIA GeForce GTX 980 Ti

2nd Place: Warlock VR


Throw spells across the room, shoot plasma balls, and harness cosmic power with Warlock VR – now with a multiplayer mode so you can battle your friends!

Prize: $7,500, Unity Pro, OSVR HDK, NVIDIA GeForce GTX 980 Ti

3rd Place: RPS Island


Ever imagined that rock-paper-scissors could save your life? In RPS Island, you must defeat a never-ending onslaught of enemies by signalling their weakness.

Prize: $5,000, Unity Pro, OSVR HDK, NVIDIA GeForce GTX 980 Ti

4th Place: Vox Rocks Dino Destroyer


Vox Rocks: Dino Destroyer is a shooting-gallery-slash-puzzle game. Reach out and blast apart voxel dinos by manipulating magnetic forces and coordinating colors to clear each level.

Prize: $2,500, Unity Pro, OSVR HDK

5th Place: HellVibe


Solve HellVibe’s demonic puzzle box to set yourself free from an astral prison. Inspired by Hellraiser and Myst, you’ll need all your wit and skill to escape.

Prize: $1,000, Unity Pro, OSVR HDK

Community Favorite #1: PotelRVR, Pottery Maker


Grab a seat, relax, enjoy the sounds of nature, and create pottery in VR with PotelRVR.

Prize: $500, Unity Pro

Community Favorite #2: Virtual Real Meeting


Making real-world distances meaningless is one of the classic visions of VR. Virtual Real Meeting is a collaborative virtual environment that makes it easy to connect and present slideshows, videos, drawings, task lists, and charts.

Prize: $500, Unity Pro

Open Track Winners

1st Place: Spiders of Mars


Fire up your space lasers and shoot down deadly bugs in Spiders of Mars – an endless shooter where you square off against a never-ending horde of robot spiders.

Prize: $10,000, Unity Suite, NVIDIA GeForce GTX 980 Ti

2nd Place: Zombies Shall Not Pass!


Ever wanted to be on the other side of the zombie apocalypse? Groan, thrash, and devour your way through the last survivors of the human race in Zombies Shall Not Pass!

Prize: $7,500, Unity Pro, NVIDIA GeForce GTX 980 Ti

3rd Place: Hand Capture


Motion control is an incredibly powerful tool for 3D animation. With Hand Capture, a new motion capture and animation plugin for Autodesk MotionBuilder 2016, you can bring just about anything to life.

Prize: $5,000, Unity Pro, NVIDIA GeForce GTX 980 Ti

Community Favorite: Universal Accessibility Vehicle (UAV)


Take control of your smart home with a smart wheelchair! The Universal Accessibility Vehicle (formerly known as LEAPing into Accessibility) is an experimental wheelchair interface that lets you navigate and control household appliances.

Prize: $500, Unity Pro

Many thanks to our friends and sponsors who supported this year’s 3D Jam!

The competition might be over, but the real fun is just getting started! Many developers are working to submit their projects to the Developer Gallery, while others are already working on their next giant leap. Watch for further updates as we bring you new assets, resources, demos, and much more.

The post Meet the Winners of 3D Jam 2.0 appeared first on Leap Motion Blog.


How Sound Design Can Add Texture To A Virtual World

Explorations in VR Design is a journey through the bleeding edge of VR design – from architecting a space and designing groundbreaking interactions to making users feel powerful.

Sound is essential for truly immersive VR. It conveys depth and emotion, builds and reinforces interactions, and guides users through alien landscapes. Combined with hand tracking and visual feedback, sound even has the power to create the illusion of tactile sensation.

In this Exploration, we’ll explore the fundamentals of VR sound design, plus take a deep dive into the auditory world of Blocks. Along the way, we’ll break a few laws of physics and uncover the surprising complexity of physical sound effects.

What Can Great Sound Design Achieve in VR?

Presence and Realism in 3D Space


When it comes to depth cues, stereoscopic vision is a massive improvement on traditional monitors. But it’s not perfect. For this reason, sound is more than just an immersive tool – how (and where) objects around you sound has an enormous effect on your understanding of where they are, especially when you’re not looking at them. This applies to everything from background noises to user interfaces.

Engines like Unity and Unreal are constantly getting better at representing sound effects in 3D space – with binaural audio, better reverb modeling, better occlusion and obstruction modeling, and more. The more realistic that zombie right behind you sounds, the more your hair stands on end.

Mood and Atmosphere

Music plays a crucial role in setting the mood for an experience. Blocks has a techno vibe with a deep bass, inspired by ambient artists like Ryuichi Sakamoto. Weightless features soft piano tracks that feel elegant and contemplative. Finally, Land’s End combines a dreamy and surreal quality with hard edges like tape saturation and vinyl noise.

If you imagine shuffling the soundtracks in these three examples, you can understand how it would fundamentally change the experience.

Building and Reinforcing Interactions

Sound communicates the inception, success, failure, and overall nature of interactions and game physics, especially when the user’s eyes are drawn elsewhere. Blocks, for example, is designed with a wide range of sounds – from the high and low electronic notes that signal the block creation interactions, to the echoes of blocks bashing against the floor.

For game developers, this is also a double-edged sword that relies on careful timing, as even being off by a quarter second can disrupt the experience.

Tutorial Audio

It’s sad but true – most users don’t read instructions. Fortunately, while written instructions have to compete with a huge variety of visual stimuli, you have a lot more control over what your user hears.


Using the abstract state capabilities in Unity’s Mecanim system, you can easily build a flow system so that your audio cues are responsive to what’s actually happening. Just make sure that the cues work within the narrative and don’t become repetitive.

Setting Boundaries

Virtual reality is an exciting medium, but for first time users it can take a few minutes to master its limitations. Our hand tracking technology can only track what it can see, so you may want to design interaction sounds that fade out as users approach the edge of the tracking field of view.

Evoking Touch

In the absence of touch feedback, visual and auditory feedback can fill the cognitive gap and reinforce which elements of a scene are interactive, and what happens when the user “touches” them. This is because our brains “continuously bind information obtained through many sensory channels to form solid percepts of objects and events.” Some users even describe phantom sensations in VR, which are almost always associated with compelling sound design. To achieve this level of immersion, sounds must be perfectly timed and feel like they fit with the user’s actions.

Sound Design in Blocks

We’ve already talked about its ambient-inspired soundtrack, but you might be surprised to learn the sound effects in Blocks were one of our biggest development challenges – second only to the physical object interactions, an early prototype of the Leap Motion Interaction Engine.

Magic and Progression

One of the core design ideas behind Blocks was that we never imply a specific device underneath anything. For example, there are no whirring or mechanical noises when the arm HUD appears. It’s just something that magically appears from nowhere. The block creation sounds are also minimal, suggesting a natural progression. This was central to the narrative we wanted to tell – the miraculous power to create things with your bare hands.


This philosophy was also reflected in the physical sound effects, which were designed to suggest the embodiment of the object itself, rather than a specific material. When you grab something, a minimal subtle clicking sound plays. Nothing fancy – just tactile, quick, and precisely timed.

Getting the Right Physical Sound Effects

Here’s where it got challenging. To ensure a natural and immersive experience, the physical sound of block impacts is driven by 33 distinct effects, which are modified by factors like block sizes, collision velocities, and some random variations that give each block its own unique character. This aspect of the design proved nontrivial, but also was a fundamental component of the final product.

Since the blocks don’t have a representative material (such as metal or glass), finding the right sound took time. In creating the Blocks audioscape, sound designer Jack Menhorn experimented with kitty litter, plastic jugs, cardboard boxes, and other household objects. The final sound suite was created by putting synths into cardboard boxes and slamming the boxes into each other.
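We can’t reproduce the Blocks audio code here, but the general pattern is straightforward to sketch in Unity: pick a clip variant on impact and scale volume and pitch by the collision velocity. All of the names below are illustrative, not the Blocks source:

using UnityEngine;

// Illustrative sketch of velocity-driven impact sounds, in the spirit of the
// Blocks approach described above (this is not the Blocks source code).
[RequireComponent(typeof(AudioSource), typeof(Rigidbody))]
public class ImpactSound : MonoBehaviour
{
    public AudioClip[] impactClips;     // pool of impact variations
    public float minImpactSpeed = 0.2f; // ignore tiny scrapes
    public float maxImpactSpeed = 6f;   // clamp for the loudest impacts

    private AudioSource _source;

    void Awake() { _source = GetComponent<AudioSource>(); }

    void OnCollisionEnter(Collision collision)
    {
        float speed = collision.relativeVelocity.magnitude;
        if (speed < minImpactSpeed || impactClips.Length == 0) return;

        // Louder and slightly brighter for harder impacts, with random variation
        // so identical collisions never sound exactly the same.
        float t = Mathf.InverseLerp(minImpactSpeed, maxImpactSpeed, speed);
        _source.pitch = Random.Range(0.95f, 1.05f) + 0.1f * t;
        AudioClip clip = impactClips[Random.Range(0, impactClips.Length)];
        _source.PlayOneShot(clip, Mathf.Lerp(0.2f, 1f, t));
    }
}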

Violating the Laws of Physics

In an abstract environment with simple geometry, sound design is the difference between disbelief and physical presence. Sometimes this involves breaking the laws of physics. When you have to decide between being accurate and delighting your user, always opt for the latter.


In the real world, sound follows the inverse square law – getting quieter as the source gets farther away. The Unity game engine tries to reinforce this real-world falloff. But a block that lands silently after being thrown a long distance isn’t very satisfying. With Blocks, we created a normal falloff over the first several meters, and then the falloff stops. Beyond that point, blocks play at the same volume regardless of how far away they are.

At the same time, the reverb goes up as blocks get farther away – creating an echo effect. In the real world this would be impossible, since there are no walls or anything else in the space to suggest there should be reverb. It’s all just part of setting the rules of a virtual world in ways that feel human, even as they violate the laws of physics – and so far, no one has complained.
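In Unity, this kind of rule-bending can be approximated with a custom rolloff curve on the AudioSource, plus a reverb-mix curve that rises with distance, instead of the default logarithmic falloff. A hedged sketch (the distances and curve shapes are placeholders, not the values used in Blocks):

using UnityEngine;

// Illustrative sketch: clamp distance falloff past a certain range and let the
// reverb mix rise with distance, echoing the Blocks behavior described above.
[RequireComponent(typeof(AudioSource))]
public class StylizedFalloff : MonoBehaviour
{
    public float falloffEndDistance = 8f; // beyond this, volume stops dropping
    public float maxDistance = 50f;

    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f; // fully 3D sound
        source.rolloffMode = AudioRolloffMode.Custom;
        source.maxDistance = maxDistance;

        // Curve x-axis is distance normalized to maxDistance (0..1).
        // Volume falls off normally for a few meters, then holds steady.
        AnimationCurve volumeCurve = new AnimationCurve(
            new Keyframe(0f, 1f),
            new Keyframe(falloffEndDistance / maxDistance, 0.4f),
            new Keyframe(1f, 0.4f));
        source.SetCustomCurve(AudioSourceCurveType.CustomRolloff, volumeCurve);

        // Reverb mix increases with distance to create the long-throw echo.
        AnimationCurve reverbCurve = new AnimationCurve(
            new Keyframe(0f, 0.1f),
            new Keyframe(1f, 1f));
        source.SetCustomCurve(AudioSourceCurveType.ReverbZoneMix, reverbCurve);
    }
}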

The Future of VR Sound Design


Imagine all the different ways that you can interact with a coffee mug, and how each action is reflected in the sound it makes. Pick it up. Slide across the table. Tip it over. Place it down gently, or slam it onto the table. All of these actions create different sounds. Of course, if it breaks, that’s a whole other problem space with different pieces!

This is the root of the biggest challenge on the horizon for sound design in VR – the economy of scale. When you move away from a simple scene with a few objects, to fully realized scenes with many different objects, everything in the scene has to be interactive, and that includes sound. You need to have variations and sensitivity.

This is one of the reasons why we recommend only having a few objects in a scene, and making the interactions for those objects as powerful as possible. As VR experiences grow in size and complexity, these new realities will need richer soundscapes than ever before.

In the absence of an actual Holodeck, moving through virtual environments is a real challenge in VR design. Next time on the Leap Motion blog, some common techniques for locomotion in VR.

This article was originally published on UploadVR.

The post How Sound Design Can Add Texture To A Virtual World appeared first on Leap Motion Blog.

Design Sprints at Leap Motion: Building a Sculpture Prototype with the Graphic Renderer

With the next generation of mobile VR/AR experiences on the horizon, our team is constantly pushing the boundaries of our VR UX developer toolkit. Recently we created a quick VR sculpture prototype that combines the latest and greatest of these tools.


The Leap Motion Interaction Engine lets developers give their virtual objects the ability to be picked up, thrown, nudged, swatted, smooshed, or poked. With our new Graphic Renderer Module, you also have access to weapons-grade performance optimizations for power-hungry desktop VR and power-ravenous mobile VR.

In this post, we’ll walk through a small project built using these tools. This will provide a technical and workflow overview as one example of what’s possible – plus some VR UX design exploration and performance optimizations along the way. For more technical details and tutorials, check out our documentation.

Rapid Prototyping and Development at Leap Motion

In the early life of our tools, we take them through a series of shakedown cruises – micro projects where we attempt to use those tools ourselves. In the spirit of the ice-cream principle, the fastest way to evaluate, stress-test, and inform the feature set of our tools is to taste them ourselves – to build something with them. This philosophy informs everything we do, from VR design sprints to our internal hackathons.


A scene from last month’s Leap Motion internal hackathon. Teams played with concepts that could be augmented with hands in VR – checkers in outer space, becoming a baseball pitcher, or a strange mashup between kittens and zombies. As a result, we learned a lot about how developers build with our tools.


Picking something appropriate for this type of project is a constraints-based challenge. You have to ask what would:
  • Be extremely fast to stand up as a prototype and give instant insights?
  • Give the development team the richest batch of feedback?
  • Stress test the tool well past its performance limits?
  • Unearth the most bugs by finding as many edge cases as possible?
  • Reveal features not anticipated in the initial development?
  • Be highly flexible and accommodating to rapidly shifting needs?
  • Continue to explore the use of hands for VR interaction?
  • Be ready to fail?
  • Make something interesting? (Extra credit!)

This long list of constraints makes for a fascinating and fun (really!) problem space when picking something to make. Sure, it’s challenging, but if you satisfy those criteria, then you have a valuable nugget.

A Living Sculpture

While working on the game Spore, Will Wright spoke about how creative constraints can help us find a maximum “possibility space.” That concept has stuck with me over the years. The deliberately open-ended idea of making a sculpture provides the barest of organizing principles. From there we could begin to define what that sculpture might be by making building blocks from features that need to be tested.

Optimization #1 – Use the Graphic Renderer. The first tool we’ll look at is the Graphic Renderer, which optimizes the runtime performance of large groups of objects by dynamically batching them to reduce draw calls. This can happen even when they have different values for their individual features.

With that in mind, we can create interesting individual objects by varying shape, color, size, morphing transformations, and glow qualities – all while leveraging and testing the module. From there, each object can be made interactive with the Interaction Engine and given custom crafted behaviors for reacting to your hands. Finally, by making and controlling an array of these objects, we have the basis for a richly reactive sculpture that satisfies all our rapid prototyping constraints.

Crafting Shapes to Leverage the Graphic Renderer

Our journey starts in Maya, where we take just a few minutes to polymodel an initial shape. Another constraint appears – the shape needs to have a low vertex count! This is because there will be many of them, and the Graphic Renderer’s dynamic batching limits are bounded by Unity’s dynamic batching limits.

This means that not only does the number of vertices matter – the number of attributes on those vertices also counts toward the dynamic batching limits. A set of vertices is effectively counted several times, once for each of its attributes, such as position, UV, normal, and color. (Unity’s dynamic batching budget is roughly 900 vertex attributes per mesh, so a mesh using position, normal, and a single UV channel can spend at most around 300 vertices.) At the same time, we want our objects to have several attributes for more visual richness and fun.

As groundwork for a reactive morphing behavior, the shape gets copied and has its vertices moved around to create a morph target, with an eye toward what would make an interesting transformation. Then the UVs for the shape are laid out so it can have a texture map to control where it glows. The object is then exported from Maya as an FBX file.

On to Unity! There we start by setting up a Leap Graphic Renderer by adding a LeapGraphicRenderer component to an empty transform. Now we begin adding our shapes as children of the LeapGraphicRenderer object. Typically, we would add these objects by simply dragging in our FBXs. But to create an object for the Graphic Renderer we start with an empty transform and add the LeapMeshGraphic component. This is where we assign the Mesh that came in with our FBX shape.

To see our first object, it needs to be added to a Graphic Renderer Group. A group can be created in the Inspector for the LeapGraphicRenderer component. Then that group can be selected in the Inspector for our object – and our object will appear. For our group, we’re using the Graphic Renderer’s Dynamic rendering method, since we want the user’s hands to change the objects as they approach.

Now we begin to add Graphic Features to our sculpture’s render group. The Graphic Renderer supports a set of features that we’ve been using internally to build performant UIs, but these basic features can be used on most typical 3D objects. After these features are set up, they can be controlled by scripts to create reactive behaviors. As these features are added in the inspector for the Graphic Renderer, corresponding features will appear in the LeapMeshGraphic component for our object.

  • Graphic Feature: Texture – _GlowMask – for controlling where the object glows
  • Graphic Feature: Blend Shape – for adding a morph target using the FBX’s blend shape
  • Graphic Feature: Color Channel – _BackgroundColor – for the main color of the object
  • Graphic Feature: Color Channel – _GlowColor – for controlling the color of the glow. This uses a custom channel that will be paired with corresponding _GlowColor and _GlowMask properties in a custom variant of one of the shaders included with the Graphic Renderer Module.

As these features are added to the GraphicRenderer Group, corresponding features appear in the LeapMeshGraphic for any objects attached to that Group.

After the initial setup of the first object, it can be duplicated to populate the sculpture’s object collection. This brings us to one of several performance optimizations.

Optimization #2 – Create an object pool for the Graphic Renderer. While it’s easier to populate the object array in script using Unity’s GameObject.Instantiate() method, this creates a dramatic slowdown while all those objects are spawned and added to the Graphic Renderer group. Instead, we create an object pool whose members are simply detached from the Graphic Renderer at the start, making them invisible until they’re needed.

Creating this sculpture helped to reveal the need for attaching and detaching objects. The Graphic Renderer’s detach() and TryAttach() methods can come in handy when showing and hiding UIs with many components.
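Here’s a generic sketch of the pooling idea – pre-instantiate the shapes once and hand them out on demand rather than calling GameObject.Instantiate() at runtime. The Leap-specific attach/detach calls mentioned above are deliberately left as a comment, since their exact names vary by module version; the class below is ours:

using UnityEngine;
using System.Collections.Generic;

// Generic sketch of the pooling idea described above: pre-instantiate the shape
// objects once, keep the spares hidden, and hand them out on demand.
// (With the Graphic Renderer you would additionally detach/attach each object's
// LeapGraphic when hiding or showing it, using the methods mentioned above;
// those calls are omitted here because their signatures vary by version.)
public class ShapePool : MonoBehaviour
{
    public GameObject shapePrefab;
    public int poolSize = 300;

    private readonly Queue<GameObject> _inactive = new Queue<GameObject>();

    void Awake()
    {
        for (int i = 0; i < poolSize; i++)
        {
            GameObject shape = Instantiate(shapePrefab, transform);
            shape.SetActive(false); // hidden until needed
            _inactive.Enqueue(shape);
        }
    }

    public GameObject Spawn(Vector3 position)
    {
        if (_inactive.Count == 0) return null; // pool exhausted
        GameObject shape = _inactive.Dequeue();
        shape.transform.position = position;
        shape.SetActive(true);
        return shape;
    }

    public void Despawn(GameObject shape)
    {
        shape.SetActive(false);
        _inactive.Enqueue(shape);
    }
}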

Next, a shader is needed that can work with the Graphic Renderer and be customized to support the main color, glow color, and glow mask features that were added to the LeapMeshGraphic components of the sculpture objects. The Graphic Renderer ships with several shaders that work as starting points. In this case, we started with the included DynamicSurface shader, added the _GlowColor and _GlowMask properties, and blended the glow color with the _BackgroundColor. The resulting DynamicSurfaceGlow.shader gets assigned in each Group in the GraphicRenderer.


One current limitation of the Graphic Renderer is the total vertex and element count that can be batched per render group. Fortunately, this is easily handled by creating more groups in the Graphic Renderer component and giving them the same features and settings. Then you can select subgroups of 100 objects from the sculpture’s object pool and assign them to the new groups. While future versions of the Graphic Renderer won’t have this vertex-per-group limitation (partly because of what we found in this ice-cream project!), the Groups feature allows you to have different feature sets and rendering techniques for different object collections.

Optimization #3 – Automatic texture atlassing. Another handy feature of the Graphic Renderer, Automatic Texture Atlassing creates a single texture map by combining the other individual textures in your scene. While this texture is larger, it allows all objects with the textures it includes to be rendered in a single pass (instead of a separate pass for objects using each texture). So for our sculpture, each object can have a unique glow mask made specifically for that object.

Automatic texture atlassing is set up by adding a texture channel to the Graphic Renderer group Inspector under /RenderingSettings/Atlas/ExtraTextures. Then, since it can take time to combine these textures into the atlas, a manual “Update Atlas” button is provided, which indicates when the atlas needs to be compiled. You won’t need to interact with the atlas directly and can work with your individual textures as in a typical workflow.

While the Graphic Renderer’s performance optimization power requires a different asset workflow and some setup, its rendering acceleration is invaluable in resource-tight VR projects. Closely managing the number of draw calls in your app is critical. As a simple measure, we can see in the Statistics panel of the Game Window that we’re saving a whopping 584 draw calls from batching in our scene. This translates to a significant rendering speed increase.

With our sculpture’s objects set up with Graphic Renderer features, we have a rich foundation to explore some of Leap Motion’s Interaction Engine capabilities and to drive those graphic features with hand motions.

The sculpture is deliberately made to have a dynamically changing number of objects, and show more objects than either the Graphic Renderer or Interaction Engine can handle. This is both to test the limits of these tools and to refine their workflows (and of course to continue to investigate how hands can affect a virtual object). In our next post, we’ll make the sculpture react to our hands in a variety of interesting ways.

The post Design Sprints at Leap Motion: Building a Sculpture Prototype with the Graphic Renderer appeared first on Leap Motion Blog.

Leap Motion Announces Keiichi Matsuda as VP of Design and Global Creative Director to Lead New London Design Research Studio

Today we’re excited to announce the opening of our new design research studio in London with visionary VR/AR filmmaker Keiichi Matsuda, who will lead the new office and assume the role of VP of Design and Global Creative Director.

With this new London office, we’re building on our team of world-class creatives and engineers to further our mission to design robust, believable and honest visions of a world elevated by technology, with human input at the center. Leap Motion’s efforts will seek to set a new standard for interaction design with the digital world, and define the core user experience for the next generation of mobile and all-in-one headsets.


“Virtual and augmented reality are at a critical point in their evolution,” said Michael Buckwald, Leap Motion CEO. “With the rapid adoption of VR/AR over the next few years within industries, and integration into how we live, work, and play — it is essential that we lay the groundwork for a magical user experience through a unified design philosophy.”

“We can’t predict what everyday life will look like in the future. What we do know is that technology will completely transform the world,” said Matsuda. “It is our responsibility to find a path through the dangers and challenges ahead, and construct a positive and inclusive vision of a world that we want to live in. I want to bring my experience in design and world-building to bring about this change.”

Matsuda is a celebrated filmmaker and designer whose research explores how emerging technology will impact future lives. His multi-disciplinary approach fuses video, interaction design, and architecture to create vibrant “hyper-real” environments where the distinctions between physical and virtual start to dissolve. Matsuda’s award-winning work includes provocative shorts, such as HYPER-REALITY and Augmented (Hyper) Reality: Domestic Robocop, and the upcoming short film Merger.

The post Leap Motion Announces Keiichi Matsuda as VP of Design and Global Creative Director to Lead New London Design Research Studio appeared first on Leap Motion Blog.

Design Sprints at Leap Motion: Crafting Reactive Behaviors with the Interaction Engine

Last time, we looked at how an interactive VR sculpture could be created with the Leap Motion Graphic Renderer as part of an experiment in interaction design. With the sculpture’s shapes rendering, we can now craft and code the layout and control of this 3D shape pool and the reactive behaviors of the individual objects.


By adding the Interaction Engine to our scene and InteractionBehavior components to each object, we have the basis for grasping, touching and other interactions. But for our VR sculpture, we can also use the Interaction Engine’s robust and performant awareness of hand proximity. With this foundation, we can experiment quickly with different reactions to hand presence, pinching, and touching specific objects. Let’s dive in!

Getting Started with the Interaction Engine

To get started, we create two main scripts for the sculpture:

  • SculptureLayout: One instance of this script in the scene acts as the sculpture manager, controlling the sculpture and exposing parameters to the user to create variations at runtime. It handles enabling and disabling objects as needed, as well as sending parameter changes to the group via events. 
  • SculptureInteraction: On each object, this script receives messages from the SculptureLayout script and communicates with its LeapMeshGraphic and its Interaction Engine InteractionBehavior components. Using the callbacks provided by the Interaction Engine gives us a path towards our reactivity.

In the Unity scene hierarchy, each sculpture object needs to have a collection of components. Besides the LeapMeshGraphic, the InteractionBehavior component makes an object interactable with hands. In turn, it requires the presence of a Unity physics Rigidbody and a Collider component. Finally, each object receives our SculptureInteraction behavior script. This collection of components allows us to craft a reasonably sophisticated dynamic behavior for each object.

Left column: Scene Hierarchy. Middle column: Sculpture object setup. Right column: Sculpture parent setup

The sculpture’s parent transform holds the Graphic Renderer component and the SculptureLayout script. The sculpture’s object pool is parented to this transform, since Graphic Renderer objects must be parented under a Graphic Renderer component. For Leap Motion rigged hand models, hand tracking, and Interaction Engine setup, it’s quick and easy to begin with one of the scene examples from the Interaction Engine module folder.


The Interaction Engine setup includes an InteractionManager component and InteractionHand assets. The InteractionHands are invisible hands that are constructed at runtime from colliders, rigidbodies, joints, and Leap ContactBone scripts. The InteractionManager provides awareness of interactive objects in our scene that have InteractionBehaviors attached.

Several of the example scenes include simple slider and button widget examples. These can be used to quickly experiment with parameters as they are exposed in code. The widgets have events that are exposed with standard Unity event Inspector UIs. This means you can wire the sliders up to methods and parameters in your scripts without any further coding! Then, when it’s time for the UI work, they also provide ready examples of how the widget hierarchies and components are set up.

VR Sculpture Layout and Control


To perform one of its main purposes, the SculptureLayout script’s ResetLayout() method takes in a number of user-modified parameters – particularly latitude and longitude counts – and builds the sculpture from the object pool. Objects are enabled and disabled as needed (again by adding or removing from the render group) depending on these counts. Its simple layout algorithm loops through the object pool and assigns them a position based on a collection of functions.

The initial function was a spherical layout function. Tinkering with this function easily led to variants that form a bell, a twisting sheet, a mirrored funnel, and others.
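As a hedged illustration, a spherical layout function in this spirit might look something like the snippet below – the math is standard spherical coordinates, and the names and signature are ours rather than the actual SculptureLayout code:

using UnityEngine;

// Illustrative layout function in the spirit of the spherical layout described
// above: positions are generated from latitude/longitude counts, then assigned
// to the pooled objects in order.
public static class SphericalLayout
{
    public static Vector3 GetPosition(int latIndex, int lonIndex,
                                      int latCount, int lonCount, float radius)
    {
        // Latitude sweeps pole-to-pole; longitude sweeps around the equator.
        float lat = Mathf.PI * (latIndex + 0.5f) / latCount; // 0..PI
        float lon = 2f * Mathf.PI * lonIndex / lonCount;     // 0..2PI

        return new Vector3(
            radius * Mathf.Sin(lat) * Mathf.Cos(lon),
            radius * Mathf.Cos(lat),
            radius * Mathf.Sin(lat) * Mathf.Sin(lon));
    }
}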

 

The other purpose of the SculptureLayout script is to expose controls to the user so they can customize the sculpture at runtime. These are simply constructed as public parameters that trigger Unity events from their setter methods. This way it becomes easy to create UIs that only need to change these parameters for the sculpture to update. The SculptureInteraction scripts on the objects subscribe and unsubscribe to these events when they are enabled and disabled, and in turn handle these parameter changes only when they occur.

The SculptureLayout script includes parameters and events for updating a set of the sculpture’s features:

  • Latitude-longitude totals, which define how many objects are in the horizontal and vertical rows of the sphere and in the rows and columns of the other layouts
  • Scale of individual objects
  • Layout function
  • Shape mesh
  • Distortion, which applies a scale modifier based on the object’s height within the sculpture
  • Hover scale, which defines the size to which the objects will scale when a hand approaches
  • Hover aim weight, which defines how far the objects will rotate toward a hand when it approaches
  • Object base color
  • Object glow color

This set of features provides enough combinations to have fun playing with the sculpture at length without exhausting its possibilities. The SculptureLayout script then has methods for saving and loading specific settings for this list of parameters. This collection of presets can be created to instantly transform the sculpture and show interesting configurations.

SculptureInteraction: Using Interaction Engine Callbacks

Crafting the SculptureInteraction behavior class starts with using the callbacks exposed by the Interaction Engine and other methods from the Leap Motion Unity Modules’ bindings to the Leap Motion tracking data. The SculptureInteraction class keeps a reference to the specific InteractionBehavior and LeapMeshGraphic instances that live on its transform – allowing it to use information about the hand to control its visual features.

The Interaction Engine’s OnHover() callback is triggered when an InteractionHand comes within a globally set threshold of the specific InteractionBehavior. By subscribing to this callback, the SculptureInteraction class can run its behavior methods, consisting mostly of simple vector math, when a hand is near. First it calculates the distance from the palm to the object. Then it can use this distance to weight its other math functions:

  • Aim the object’s rotation at the hovering hand
  • Scale the object’s transform as the hand approaches
  • Blend the weight of the LeapMeshGraphic’s _BlendShape value to animate the shape’s morph target
  • Shift the weight of the LeapMeshGraphic’s custom _GlowColor value to make the object glow
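A hedged sketch of that distance-weighted reaction math is below. The palm position is assumed to be handed in by the hover callback, and the field names are ours, not the actual SculptureInteraction source:

using UnityEngine;

// Hedged sketch of the hover reaction math described above. The palm position
// is assumed to be supplied by the Interaction Engine's hover callback.
public class HoverReaction : MonoBehaviour
{
    public float hoverThreshold = 0.3f; // meters; matches the hover radius
    public float hoverAimWeight = 0.5f; // how far to rotate toward the hand
    public float hoverScale = 1.5f;     // size multiplier at closest approach

    private Vector3 _baseScale;
    private Quaternion _restRotation;

    void Start()
    {
        _baseScale = transform.localScale;
        _restRotation = transform.rotation;
    }

    // Call this from the object's hover callback with the hovering palm position.
    public void ReactToHover(Vector3 palmPosition)
    {
        float distance = Vector3.Distance(transform.position, palmPosition);
        // 1 when the palm is touching, fading to 0 at the hover threshold.
        float weight = 1f - Mathf.Clamp01(distance / hoverThreshold);

        // Aim the object's rotation at the hovering hand.
        Quaternion aim = Quaternion.LookRotation(palmPosition - transform.position);
        transform.rotation = Quaternion.Slerp(_restRotation, aim, weight * hoverAimWeight);

        // Scale up as the hand approaches.
        transform.localScale = Vector3.Lerp(_baseScale, _baseScale * hoverScale, weight);
    }
}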

Within the SculptureInteraction’s OnHover() handler method, we can use the PinchStrength property exposed by the Leap Motion C# API to detect whether the hovering hand is also pinching. Then a little more math causes the object to move towards the pinch point. This creates a “taffy pull” behavior, allowing the sculpture to be stretched apart.

For a chain reaction pulse behavior, another script is created – the PulsePoseDetector.cs script, which sits on the InteractionHand transform. It uses the API’s hand.Fingers[x].IsExtended property to see if the index finger and thumb are extended while the others are not. This becomes a simple pose detector that we can use in combination with the Interaction Engine’s Primary Hover callback.
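A minimal sketch of such a pose detector is below, assuming the Leap C# API’s Hand and Finger types with their IsExtended property; the class name and the finger-ordering comment are ours:

using Leap;

// Hedged sketch of a "pointing" pose detector, following the approach described
// above: index finger and thumb extended, all other fingers curled. The Hand
// object is assumed to be supplied by your hand tracking provider each frame.
public static class PointPoseDetector
{
    public static bool IsPointing(Hand hand)
    {
        if (hand == null) return false;

        // Fingers are assumed to be ordered thumb (0) through pinky (4);
        // if in doubt, check each Finger's Type instead of relying on order.
        bool thumb  = hand.Fingers[0].IsExtended;
        bool index  = hand.Fingers[1].IsExtended;
        bool middle = hand.Fingers[2].IsExtended;
        bool ring   = hand.Fingers[3].IsExtended;
        bool pinky  = hand.Fingers[4].IsExtended;

        return thumb && index && !middle && !ring && !pinky;
    }
}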

By first checking to see if our pose has been detected and held for some number of frames, we’ll know if the hand is ready. Then, using the Interaction Engine’s OnContactBegin() callback combined with PrimaryHover, we can call the public NewPulse() method in the touched object’s SculptureInteraction class when the posed index finger touches an object. This calls a Propagate method, which checks for its neighbors and in turn triggers their pulse propagation methods for our chain reaction.

The pulse itself consists of a coroutine which lerps the object’s color through a gradient stored in the SculptureInteraction. This both gives us control over the pulse color and allows us to change that color over the pulse’s lifespan.

The Stress Test and Optimization Loop

As noted in our previous post, this sculpture is deliberately meant to find and exceed the limits of the Graphic Renderer and the Interaction Engine. This helps us zero in on performance bottlenecks and bugs, and to highlight possible use cases. Ramping up the number of visible, active shapes in the sculpture and running Unity’s Profiler provides a lot of rich information on which parts of the code are taking the most time per frame and whether we’re seeing garbage collection.


Additionally, we can begin to identify best practices for using the tools and ways to optimize the project specific code of the sculpture. There are of course endless other optimizations for any VR project – whether desktop or mobile VR – but here are a few sculpture specific optimizations to illustrate some typical performance tuning.

For example, we checked “Ignore Grasping” for each of our individual InteractionBehaviors. Since we’re not grabbing the shapes in the sculpture, we can turn off the calculations for grab detection in all the shapes. (While the math that runs in each SculptureInteraction script on each shape is computationally trivial, we sometimes have hundreds of these shapes being hovered, so this can still add up to significant calculation time per frame.) So in the SculptureInteraction script we optimized by calculating the reactions to the hand every fourth frame. These results are then fed into continuous lerps (linear interpolators) for the position and rotation of the object. While the change to the behavior of the shapes as your hand approaches is nearly imperceptible, this provides yet another small performance optimization.
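Here’s an illustrative version of that throttling pattern – the expensive reaction math runs on a quarter of the frames, while cheap lerps smooth the result every frame. Names are ours, and in practice you’d likely stagger the update offset per object so the work spreads across frames:

using UnityEngine;

// Illustrative version of the "update every fourth frame" optimization above.
public class ThrottledReaction : MonoBehaviour
{
    public int updateInterval = 4; // recompute targets every Nth frame
    public float smoothing = 10f;  // higher = snappier interpolation

    private Vector3 _targetPosition;
    private Quaternion _targetRotation;

    void Update()
    {
        if (Time.frameCount % updateInterval == 0)
        {
            // Placeholder for the expensive hand-reaction math.
            ComputeReactionTargets(out _targetPosition, out _targetRotation);
        }

        // Cheap per-frame smoothing toward the most recent targets.
        float t = Time.deltaTime * smoothing;
        transform.position = Vector3.Lerp(transform.position, _targetPosition, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, _targetRotation, t);
    }

    void ComputeReactionTargets(out Vector3 position, out Quaternion rotation)
    {
        // Stand-in: keep the current pose. In the sculpture, this is where the
        // distance, aim, and scale calculations would run.
        position = transform.position;
        rotation = transform.rotation;
    }
}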

With these behavior examples, we’ve illustrated how a combination of the Leap Motion tracking API and our Unity Modules can enable many forms of interaction explorations with a few lines of code for each. For the next part of the VR sculpture project, we can begin exercising the Interaction Engine’s reactive widgets. At this point, many of the sculpture’s parameters are exposed in code, so they can be easily wired up to be controlled by Interaction Engine UIs. We’ll dive into creating these UIs to make a UX playground in the next post in our series.

The post Design Sprints at Leap Motion: Crafting Reactive Behaviors with the Interaction Engine appeared first on Leap Motion Blog.

Design Sprints at Leap Motion: A Playground of 3D User Interfaces

As mainstream VR/AR input continues to evolve – from the early days of gaze-only input to wand-style controllers and fully articulated hand tracking – so too are the virtual user interfaces we interact with. Slowly but surely we’re moving beyond flat UIs ported over from 2D screens and toward a future filled with spatial interface paradigms that take advantage of depth and volume.

Last week, Barrett Fox described his process in pushing the new Graphic Renderer and the Interaction Engine’s Hover callbacks to their limits by creating a kinetic sculpture with tons of tweakable input parameters. Today I’ll detail my exploration of several ways spatial UIs could be used to control aspects of that sculpture – or any piece of complex content – by creating a playful set of physical-like user interfaces.

From Flat Screens to VR Interfaces


When someone first puts on a Leap Motion-enabled VR headset, it often seems like they’re rediscovering how to use their own hands. In a sense, they are. When we bring our hands into a virtual space, we also bring a lifetime’s worth of physical biases with us. Compelling spatial interfaces complement and build upon these expectations.

With our Interaction Engine Unity package, prototyping these kinds of physically inspired interfaces is easier than ever. The module features prefabs for a range of common 2D UI components, made 3D and physical-like. Buttons that depress in Z space then spring back to their resting position. Sliders that can be activated by a fingertip tap on the side. Even examples of hand-based, wearable UIs and dynamic deployable UIs.

Concept: A Playset, Not A Control Board

When designing an interface, one of the highest priorities is usually efficiency. In this exploration, however, speed of task completion was further down the list. The core things I wanted to focus on when creating these UIs were:

  • Creating a sense of physicality
  • Conveying hand-to-UI-element proximity in 3D through reactivity/feedback
  • Making interface interactions which feel playful

A conceptual mood board featuring interfaces both simple and complex, with a focus on physicality and play. We also explored ideas around form, affordances, and use of color accents.


Since this project was designed to run on mobile VR headsets, we designed knowing that it might be experienced with only 3 degree-of-freedom (3DoF) head tracking. This meant that the user would be seated and all UIs needed to be within arm’s reach.

Designing virtual interactions for human arms, hands, and fingers means digging into the range of motion of shoulders, elbows, and wrists. To begin blocking out the layout of the UI elements, I dove into Gravity Sketch. Using different colors for each joint, I marked out comfortable ranges of motion for moving my arms, pivoting at my shoulder, elbows, and wrists with both my arms extended, and then with my elbows at my sides.


Using Gravity Sketch to mark out ranges of comfortable motion, pivoting at my shoulders, elbows, and wrists.

Blocking out UI areas based on the above ranges.

This constraint – surrounding a seated user with UIs – also meant that I was able to test out another new utility, curved spaces. A feature that can be unlocked by combining the Graphic Renderer and the Interaction Engine, curved spaces allow entire user interfaces to be trivially warped into ergonomic curves around the user while everything still works as you intend.

Almost all of the UI components in this demo are rendered with the Graphic Renderer, making it easy to wrap them around the user by using a Leap Curved Space component.
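The Leap Curved Space component does this warping for you when your UI is built on the Graphic Renderer. Purely to illustrate the underlying idea, not the Leap Motion API, here is a small, self-contained Unity sketch that fans a set of child panels across a cylindrical arc at arm's reach around a seated user. Every class name and number in it is a placeholder of my own.

using UnityEngine;

// Hypothetical helper: positions each child panel along a cylindrical arc
// centered on the user, so every element stays within comfortable reach.
public class CylindricalUILayout : MonoBehaviour
{
    public float radius = 0.55f;         // roughly arm's reach, in meters (assumed)
    public float totalArcDegrees = 120f; // how far the panels wrap around the user
    public float height = -0.15f;        // slightly below eye level

    void Start()
    {
        int count = transform.childCount;
        for (int i = 0; i < count; i++)
        {
            // Spread children evenly across the arc, centered in front of the user.
            float t = (count == 1) ? 0.5f : (float)i / (count - 1);
            float angle = Mathf.Deg2Rad * Mathf.Lerp(-totalArcDegrees * 0.5f,
                                                      totalArcDegrees * 0.5f, t);

            Vector3 localPos = new Vector3(Mathf.Sin(angle) * radius,
                                           height,
                                           Mathf.Cos(angle) * radius);

            Transform child = transform.GetChild(i);
            child.localPosition = localPos;
            // Orient each panel radially; flip the sign on localPos if your
            // panels' visible face points the other way.
            child.localRotation = Quaternion.LookRotation(localPos, Vector3.up);
        }
    }
}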

Once we defined the overall design goals and layout schemes, it was time to design the user interfaces themselves.

Building a Button with the Interaction Engine

Since the iPhone introduced multi-touch input in 2007, we’ve seen 2D touchscreen interaction design evolve into a responsive, motion-filled language. Modern apps respond to any input with visual, audio, and sometimes even subtle haptic feedback. Taps and swipes are met with animated ripples and dynamic element resizing.




In VR, every interactive object should respond to any casual movement. Users don’t always know what to expect, and dynamic feedback helps to build a mental model of how the virtual world works, and what each action achieves. Without dynamic feedback, an interaction can feel unsatisfying and weird.

Beginning with the most fundamental of UI elements – a button – we asked what this sort of reactivity might look like in VR with hands. While touchscreen button interactions are binary (contact vs. non-contact), pushing a button in 3D involves six distinct stages:

  • Approach. Your finger is near the button, which may start to glow or otherwise reflect proximity.
  • Contact. Your finger touches the button, which responds to the touch.
  • Depression. Your finger starts to push the button.
  • Engagement. Success! The button may change its visual state and/or make a sound.
  • Ending contact. Your finger leaves the button.
  • Recession. Your finger moves away.




Conveniently, feedback for all of these stages of interaction is provided by the Interaction Engine. You can simply attach an InteractionBehaviour component to a GameObject with a Rigidbody (in this case, the button) for easy and performant access to all kinds of information about the relationship between that object and a user's hands. Here are those interaction stages again, this time with their specific Interaction Engine callbacks (a small wiring sketch follows the list):
  • Approach [HoverBegin]
  • Contact [ContactBegin]
  • Depression [ContactStay]
  • Engagement [OnPress]
  • Ending contact [ContactEnd]
  • Recession [HoverEnd]
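To make that wiring a little more concrete, here is a minimal sketch of hooking simple feedback to those callbacks. The callback names follow the list above, but the exact event types and signatures vary between versions of the Unity modules, so treat this as an outline to adapt rather than drop-in code; the ring and frame methods are stubs.

using UnityEngine;
using Leap.Unity.Interaction; // Interaction Engine namespace (module must be imported)

// Minimal per-stage button feedback. Assumes the InteractionBehaviour exposes
// these stages as plain C# events; some versions instead expose UnityEvents
// you wire up in the inspector, as the slider screenshot below shows.
[RequireComponent(typeof(InteractionBehaviour))]
public class ButtonStageFeedback : MonoBehaviour
{
    public AudioSource click; // assumed: assigned in the inspector

    void Awake()
    {
        var intObj = GetComponent<InteractionBehaviour>();

        // Approach / recession: raise or lower the proximity ring.
        intObj.OnHoverBegin += () => SetRingVisible(true);
        intObj.OnHoverEnd   += () => SetRingVisible(false);

        // Contact / ending contact: highlight the frame, then play the release click.
        intObj.OnContactBegin += () => SetFrameHighlight(true);
        intObj.OnContactEnd   += () => { SetFrameHighlight(false); click.pitch = 1.1f; click.Play(); };

        // Engagement (OnPress) is raised by the button component itself and is
        // easiest to wire up in the inspector.
    }

    void SetRingVisible(bool visible)   { /* raise or lower the white ring here */ }
    void SetFrameHighlight(bool active) { /* tint the button frame and play the engage click here */ }
}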

When your hand approaches a button in this prototype, a white ring rises up from its base to meet the contact surface. As your finger gets closer, the ring gets closer, until contact is made and the ring reaches the top of the button.

Depressing the button until it engages changes the color of the button frame. Along with an audio click, this confirms the successful completion of the interaction. When contact between finger and button ends, a second slightly higher-pitched click marks the end of the interaction. The white ring recedes as the user moves their hand away.
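Under the hood, this kind of proximity feedback is just a matter of mapping a distance to a height. A minimal Unity sketch, assuming you track a fingertip transform yourself (the Interaction Engine's own hover distance could be substituted here):

using UnityEngine;

// Maps fingertip proximity to the height of the white feedback ring.
// Attach to the button; "fingertip" is assumed to be assigned in the inspector.
public class HoverRing : MonoBehaviour
{
    public Transform fingertip;           // e.g. an index-tip attachment point (assumed)
    public Transform ring;                // the white ring mesh
    public float maxHoverDistance = 0.1f; // distance at which the ring starts rising (meters)
    public float restingHeight = 0f;
    public float contactHeight = 0.02f;   // flush with the button's contact surface

    void Update()
    {
        float d = Vector3.Distance(fingertip.position, transform.position);
        // 1 when touching, 0 at or beyond maxHoverDistance.
        float proximity = 1f - Mathf.Clamp01(d / maxHoverDistance);

        Vector3 p = ring.localPosition;
        p.y = Mathf.Lerp(restingHeight, contactHeight, proximity);
        ring.localPosition = p;
    }
}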

The symbol is a static mesh. Behind that, an interaction button, sliding ring, and button frame.

These buttons are actually toggles (the stubborn cousin of the button), which activate preset configurations of all of the sculpture's parameters at once.

A similar approach with an expanding white inner ring was used on the sliders.

Animation showing the different components of the sliders.


The Hierarchy setup in the Unity Editor for the Radius Slider, showing an InteractionSlider.cs sending events on HorizontalSlide, HoverBegin, and HoverEnd events.

Physical-feeling signage-style text callouts were added to the sliders. These expand when hovered and retract once the user moves their hand away.
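For reference, the listener side of the slider setup shown above can be a single method wired to the HorizontalSlide event in the inspector. This sketch assumes the event passes the slider's value as a float; scaling the sculpture root is just a stand-in for the real radius parameter.

using UnityEngine;

// Receives the Radius slider's value (wired up in the inspector, as in the
// hierarchy screenshot above) and applies it to a placeholder sculpture parameter.
public class RadiusSliderListener : MonoBehaviour
{
    public Transform sculptureRoot;  // placeholder target for the radius parameter
    public float minRadius = 0.1f;
    public float maxRadius = 1.0f;

    // Hook this method to the slider's HorizontalSlide event.
    public void OnRadiusSlide(float sliderValue)
    {
        float radius = Mathf.Lerp(minRadius, maxRadius, Mathf.Clamp01(sliderValue));
        sculptureRoot.localScale = Vector3.one * radius;
    }
}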

Before settling on this rising (or expanding) ring feedback, I also experimented with the base button mesh itself morphing as your finger approached. The idea here was to make the button convex at rest, affording the action of being pushed. As your finger approached, it would morph into a concave shape, as though acknowledging the shape of your finger and providing a matching socket. A kind of lock-and-key metaphor.

Inspired in part by the finger-key from The Fifth Element.

This style took full advantage of the 3D nature of the UI components and felt very interesting. However, having HoverDistance drive this button shape morphing didn't communicate how close the finger was as effectively as the rising ring approach. I would love to delve deeper into this concept at some point, perhaps by having your hand mesh also morph – turning your fingertip into a key shape.
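If you want to experiment with that morphing idea, one approach is to author a concave blend shape on the button cap and drive its weight from the same kind of normalized hover distance. A sketch, assuming a SkinnedMeshRenderer with the concave shape at blend shape index 0 and a fingertip transform you track yourself:

using UnityEngine;

// Morphs a button cap from convex (at rest) to concave (finger nearby) by
// driving a blend shape weight from fingertip proximity.
public class MorphingButtonCap : MonoBehaviour
{
    public SkinnedMeshRenderer buttonMesh; // mesh authored with a "concave" blend shape at index 0
    public Transform fingertip;            // assumed: assigned in the inspector
    public float maxHoverDistance = 0.12f;

    void Update()
    {
        float d = Vector3.Distance(fingertip.position, buttonMesh.transform.position);
        float proximity = 1f - Mathf.Clamp01(d / maxHoverDistance);

        // Blend shape weights run 0-100 in Unity.
        buttonMesh.SetBlendShapeWeight(0, proximity * 100f);
    }
}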

Exploring 3D VR User Interfaces

Beyond adding spatial UI feedback for buttons and sliders, we also began scratching the surface of new UI possibilities afforded by the freedom of three dimensions. How could the reliable grab/release interactions and the soft hand-to-object contact enabled by the Interaction Engine allow for novel 3D user interfaces?

Recreating Real World 3D Manipulation




I was curious to explore whether a virtual recreation of a common mechanical 3D input from the physical world would be compelling to interact with in VR using your hands.

Physical trackballs are a highly tactile input with a wide spectrum of interaction levels. They can be manipulated with a fingertip and dialed in with slow precision or – especially with larger trackballs used in interactive installations – can be spun with force like a basketball balancing on a fingertip or a Kugel Fountain. A trackball seemed like a prime candidate for virtual recreation.

Interacting with just the tip of a thumb on a small trackball, and a whole group spinning a Kugel Fountain.

To start, I attached a Rigidbody component and an InteractionBehaviour to a sphere, locked its XYZ position while allowing XYZ rotation to move freely, and let the Interaction Engine take care of the rest. Mapping the rotation of the sculpture 1:1 to the freeform manipulation of the trackball, combined with audio and visual cues driven by angular velocity, created quite a compelling interaction.
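In Unity terms, that setup boils down to a few Rigidbody constraints plus a script that mirrors the ball's rotation onto the sculpture. A rough sketch, with the audio hookup and numbers as placeholders (the InteractionBehaviour on the same object is what lets hands push and spin the ball):

using UnityEngine;

// Trackball-style input: the sphere's position is frozen, its rotation is free,
// and the sculpture mirrors that rotation 1:1. Feedback intensity tracks spin speed.
[RequireComponent(typeof(Rigidbody))]
public class TrackballControl : MonoBehaviour
{
    public Transform sculpture;         // the object to rotate 1:1 with the ball
    public AudioSource whirSound;       // assumed: a looping sound whose volume tracks spin speed
    public float maxAngularSpeed = 20f; // radians/sec treated as "full volume"

    Rigidbody _body;

    void Awake()
    {
        _body = GetComponent<Rigidbody>();
        _body.useGravity = false;
        // Lock translation, leave all rotation axes free.
        _body.constraints = RigidbodyConstraints.FreezePosition;
        _body.maxAngularVelocity = maxAngularSpeed;
    }

    void Update()
    {
        // 1:1 mapping between trackball and sculpture orientation.
        sculpture.rotation = transform.rotation;

        // Drive audio feedback from angular velocity.
        float spin = _body.angularVelocity.magnitude / maxAngularSpeed;
        whirSound.volume = Mathf.Clamp01(spin);
    }
}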


A trackball-style interface with three-color diffuse texture and a golf-ball-like normal map.

VR Color Picker

Color pickers often try to expose multiple dimensions of color – such as hue, saturation, and lightness, or red, green, and blue levels. There are many ways to visualize these color spaces in 2D. By exploring them with a mouse and keyboard, we can pick exact values plotted on 2D planes very easily, though it's usually quite a deliberate interaction. What might a 3D color picker add to the user's experience of selecting colors?

In our sculpture prototype, there are three colors the user can change: (1) the sculpture’s tint color, (2) the sculpture’s glow color, and (3) the skybox color. Rather than focusing on a UI that excels at fine-tuning color values, we explored a 3D color picker which allowed a user to change all three colors quickly with just the wave of a hand.

Each color is represented by a small cube inside a larger frame, with red mapped to the X axis, green to Y, and blue to Z. Moving the cubes around (by pushing or grabbing them) updates their RGB levels. This allows an exploration of additive RGB color space.
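The mapping itself is simple: normalize the cube's local position inside the frame and read the three axes off as red, green, and blue. A sketch, assuming the cube is parented to a frame that spans -0.5 to +0.5 in local space; setting a renderer's material color stands in for the actual tint, glow, and skybox properties:

using UnityEngine;

// Converts a color cube's position inside its frame into an RGB color.
// Attach to the cube; assumes the parent frame spans -0.5..+0.5 on each local axis.
public class ColorCubePicker : MonoBehaviour
{
    public Renderer targetRenderer; // stand-in for the object whose color this cube controls

    void Update()
    {
        Vector3 p = transform.localPosition;

        // Map local -0.5..0.5 on each axis to 0..1: X -> red, Y -> green, Z -> blue.
        Color c = new Color(Mathf.Clamp01(p.x + 0.5f),
                            Mathf.Clamp01(p.y + 0.5f),
                            Mathf.Clamp01(p.z + 0.5f));

        targetRenderer.material.color = c;
    }
}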


To visually convey as much information about the state of UI elements as possible, each color cube is connected by a line renderer to the object whose color variable it controls.

3D Drag and Drop

Drag and drop is one of the most metaphorically direct interactions a user can do on a 2D screen. Holding a mouse click or a fingertip tap to select an object, and then dragging it to a target location, feels natural and intuitive.

Fortunately, one of the core features of the Interaction Engine is the ability to realistically grab, drag, and drop a virtual object in 3D. As a result, the functional part of this piece of UI was again already mostly complete.

So that users could change the mesh shape that makes up the sculpture, we created a shelf which displays each mesh option in 3D, plus a protruding podium which shows the currently selected shape. To add more feedback, a cubic frame was added around each shape. As the user’s hand approaches, the frame extends until it forms a complete cage. This achieves two things: it tells the user how close they are to any given shape, and creates a uniform cubic volume that’s easy to grab.

Once you pick up a shape, the podium unlocks the current shape and moves it away to make room for the one in your hand. The grabbed shape and the podium also pulse with the current sculpture glow color, indicating a connection. Once the grabbed shape is moved near the podium, it automatically detaches from the hand, orients itself, and locks into the podium.
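That snap-to-podium behavior is essentially a distance check on release: if the dropped shape is close enough, lock it in place, seat it on the podium, and tell the sculpture about it. A rough sketch, where the release hook and the apply-mesh call are placeholders for however your project exposes them:

using UnityEngine;

// Snaps a released shape onto the podium when it is dropped nearby.
// Call OnShapeReleased from your grab/release callback (for example, the
// Interaction Engine's grasp-end event); ApplyShapeToSculpture is a placeholder.
public class PodiumSnap : MonoBehaviour
{
    public Transform podiumSocket;     // where the selected shape should sit
    public float snapDistance = 0.15f; // how close a dropped shape must be (meters)

    public void OnShapeReleased(Rigidbody shape)
    {
        float d = Vector3.Distance(shape.position, podiumSocket.position);
        if (d > snapDistance) return;

        // Lock the shape in place and seat it on the podium.
        shape.isKinematic = true;
        shape.transform.SetPositionAndRotation(podiumSocket.position, podiumSocket.rotation);
        shape.transform.SetParent(podiumSocket, worldPositionStays: true);

        ApplyShapeToSculpture(shape.gameObject);
    }

    void ApplyShapeToSculpture(GameObject shape)
    {
        // Placeholder: swap the sculpture's mesh to the selected shape here.
    }
}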

This playset of spatial user interfaces is just a glimpse of the possibilities afforded by the physics-based foundation of the Interaction Engine. What kinds of 3D UIs would you like to see, touch, and create with these tools? Let us know in the comments below.

An abridged version of this article was originally published on UploadVR. Mood board image credits: Iron Man 2, Endeavor Shuttle Cockpit, Fisher Price Vintage, Matrix Reloaded, Wirtgen Milling Machine, Fisher Price Laugh and Learn, Grooves by Szoraidez, Google Material Design Palette, sketches by Martin

The post Design Sprints at Leap Motion: A Playground of 3D User Interfaces appeared first on Leap Motion Blog.
