
Take Flight and Restore Balance to the Animal Kingdom with The Crow


Over the next several weeks, we’re spotlighting the top 20 3D Jam experiences chosen by the jury and community votes. These spotlights will focus on game design, interaction design, and the big ideas driving our community forward.

Created by Andrew Kostuik and Ed Wisniewski at NORCAT’s Immersive Learning Centre, The Crow made a big impression thanks to its beautiful aesthetic and rich open-world concept. You can download the 3D Jam alpha demo or support further development by purchasing the beta at thecrowgame.com.

thecrow1

As a game developer, which titles inspired you early on, and where do you draw inspiration from now?

We both grew up playing classic video games. Ed grew up in the arcade, frequently feeding the machine, playing games like Pac-Man, Space Invaders, Galaga, and Contra. I was a bit younger and never saw the very early video game boom, but I was exposed to Pong boards and early-generation consoles.

I became obsessed with the abstract visual aspects, while Ed was my antithesis: he studied the logic and structure. Although we had never entirely focused on gaming directly, our experiences transitioned rather effortlessly when we met and began exploring games and simulation development. We both draw additional inspiration from our love for music. As I washed away into the sunset of the high school talent show, Ed performed shows around the world, including the MGM in Vegas.

What is the Immersive Learning Centre, and how are you involved?

The formation of our new venture was a long time coming. Early in Ed’s career, he was a partner at a company that developed interactive applications and games for companies around the world. His company was acquired by NORCAT, which gave him the ability to see his ideas grow under the umbrella of a Technology Center. I’d been working in the advertising industry for 13 years, working at agencies and forming my own.

We finally met and fused our skill sets. Under NORCAT, we’ve expanded into our newly formed Immersive Learning Centre, where we will be focusing on the creation and production of interactive, immersive visual games, training programs, and services drawing on the talent of our entire team.

thecrow2

What was it like constructing the world of The Crow?

The initial world was planned to be akin to our Earth. It’s the start of a picture being painted of a world the Bird Gods once inhabited many eons ago. The early character development focused on the same human-like interactions, but from another perspective. We are still very early into development and will continue to expand, evolve, and reintegrate into our foundation.

During the one-month 3D Jam timeline, I spent the majority of my time riffing on ideas and building location assets, until things were refined enough that we had the start of an expansive world. We can only hope to find a community who wishes to help shape the future of Diorion.

It’s the task of the players to restore balance in the land of Diorion. What’s the origin of this narrative?

The narrative has much to do with the powers of duality and survival. Each individual choice that you make will drive the outcome in one direction or the other. We’re focusing on further solidifying our current foundation, and then we plan to expand Diorion in many directions.

Want to help The Crow get on Steam? Be sure to rate it up on Steam Greenlight!

thecrow4 thecrow3

The post Take Flight and Restore Balance to the Animal Kingdom with The Crow appeared first on Leap Motion Blog.


Press Bird to Play: Pirate Mini-Games and Thieving Seagulls


Over the next several weeks, we’re spotlighting the top 20 3D Jam experiences chosen by the jury and community votes. These spotlights will focus on game design, interaction design, and the big ideas driving our community forward.

From the creator of LICHT little adventure, VR demo Press Bird to Play made a big impression thanks to its evocative atmosphere and engaging mini-games, landing in 10th place. In today’s spotlight, creator Gerald Terveen talks about his old-school gaming inspirations and his upcoming work on a new title called VR Adventure.

PBTP uses a unique scaling mechanic: tiny hands, massive movements. What’s the appeal of this mechanic to you?

The longer you use motion controls, the better you get at them. And just as many gamers like to up the resolution of their gaming mice, I like to reduce hand movements to a minimum. This not only helps prevent “gorilla arm” from having your arms moving through space all the time, it also gives you a much deeper reach into the virtual world.

In PBTP, I always love to see new players getting surprised by the ability to reach objects far away or to touch the ceiling.
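
To make the mechanic concrete, here is a minimal, engine-agnostic sketch in Python. This is not code from PBTP; the vector type, pivot, and gain value are purely illustrative. It shows how a small physical hand offset from a fixed pivot can be amplified into a much larger virtual reach:

```python
# Minimal illustrative sketch (not PBTP's actual code): small real-world hand
# movements are amplified into large virtual reach around a fixed pivot point.

from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def scale(self, k):   return Vec3(self.x * k, self.y * k, self.z * k)

def amplify_reach(tracked: Vec3, pivot: Vec3, gain: float = 4.0) -> Vec3:
    """Map a tracked hand position to a virtual hand position: a small physical
    offset from the pivot becomes a gain-times larger virtual offset."""
    return pivot + (tracked - pivot).scale(gain)

if __name__ == "__main__":
    pivot = Vec3(0.0, 1.4, 0.0)        # roughly shoulder height, in meters
    hand = Vec3(0.1, 1.6, 0.3)         # hand raised slightly and pushed forward
    print(amplify_reach(hand, pivot))  # virtual hand ends up far overhead/ahead
```

Keeping arm travel short while multiplying virtual reach is exactly the trade that helps with the “gorilla arm” fatigue Gerald mentions.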

press-bird-to-play-coins

What’s your artistic process in designing virtual places that people will want to inhabit?

I think I use a different approach from most game designers because I’m not able to create my own models. Instead, I’m working with stock models I purchase. I treat Unity, my development platform, just like a box of Lego – where all the models I already bought get reassembled into the world I create. When I made LICHT, I only used the models that come with Unity, so the world was made from cubes and spheres.

Now I have a diversified collection of models from great artists all over the world that fit a fantasy setting similar to the games I played as a kid/teenager. By combining these assets, I can create my own unique version – for example, the elevators in PBTP use elements from five different artists.

press-bird-to-play-shadows

Much like your earlier work with LICHT, PBTP uses light and shadow in striking ways. How does this influence how people experience games?

I always felt that light and shadow are great tools to add a level of realism to games. They give a virtual world spatiality and objects a simple way of interacting with their surroundings. That and it’s just so much fun to play with in development.

Tell me about your vision and ambition for VR Adventure.

I’m not a trained game developer, but someone that just decided “now is the time” after the Oculus Rift Kickstarter was a success. So my vision of what I want to make and my ability to make it still have to close in on each other.

If I really get it done the way I would like it, then it will end up being a first-person Zelda: Ocarina of Time-like game with influences from Chrono Trigger, Final Fantasy VII, and Monkey Island. But it’s still too early to say for sure – right now, VR is in the early stages and we don’t even know what options will be available.

For now, I’m still creating smaller experiences to figure out the best ways to implement motion controls in VR while working on the world of VR Adventure on the side. Playing PBTP in first person, for example, is an interesting experience – but I have not yet solved the problem of navigating the character to my satisfaction without using a controller in addition to Leap Motion.

Want to watch VR Adventure unfold? Check out Gerald’s progress at vradventure.wordpress.com.

press-bird-to-play-intro

The post Press Bird to Play: Pirate Mini-Games and Thieving Seagulls appeared first on Leap Motion Blog.

Reach into the Uncanny Valley with the Augmented Hand Series


augmented-hand-series

“It’s a box. You put your hand in it. You see your hand with an extra finger.”—Visitor, 7


The hand is a critical interface to the world, tied into mind and identity.


The Augmented Hand Series is an interactive installation by Golan Levin, Chris Sugrue, and Kyle McDonald. An earlier version of this post appeared on flong.com.

The Augmented Hand Series is a real-time interactive software system that presents playful, dreamlike, and uncanny transformations of its visitors’ hands. Originally conceived in 2004, the project was developed at the Frank-Ratchye STUDIO for Creative Inquiry in 2013-2014 through a commission from the Cinekid Festival of children’s media.

The installation consists of a box into which the visitor inserts their hand, and a touchscreen interface which displays their ‘reimagined’ hand, altered by various dynamic and structural transformations. In the version shown here, in its premiere at the 2014 Cinekid Festival in Amsterdam, the kiosk is accompanied by a large rear-projection. The touchscreen allows participants to select among the different transformations.

Critically, the project’s morphological transformations operate within the logical space of the hand itself. That is to say: the artwork performs “hand-aware” visualizations that alter the deep structure of how the hand appears—unlike, say, a funhouse mirror, which simply distorts the entire field of view.
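
The installation itself was built in openFrameworks (see the credits below), but the basic idea of a “hand-aware” transformation can be illustrated with a toy Python sketch: rather than warping pixels, you operate on a structural description of the hand. The sample angles and gain below are made up; the sketch simply exaggerates how far each finger splays from the hand’s average direction, which is the spirit of the Angular Exaggeration scene listed later:

```python
# Toy illustration (not the installation's openFrameworks code): a "hand-aware"
# transformation edits the hand's structure, not the raw camera image. Here a
# hand is reduced to per-finger spread angles in the palm plane, and each
# finger's deviation from the mean direction is exaggerated.

def exaggerate_spread(finger_angles_deg, gain=2.0):
    """Amplify finger abduction/adduction relative to the average direction."""
    mean = sum(finger_angles_deg) / len(finger_angles_deg)
    return [mean + gain * (a - mean) for a in finger_angles_deg]

if __name__ == "__main__":
    # thumb..pinky angles in the palm plane (degrees); made-up sample pose
    pose = [35.0, 10.0, 0.0, -8.0, -18.0]
    print([round(a, 1) for a in exaggerate_spread(pose)])
    # A funhouse mirror would distort everything in view; this only changes how
    # far the fingers splay, leaving the rest of the hand untouched.
```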


Visitor interactions

The system uses the real-time posture of the participant’s real hand as the moment-to-moment baseline for its transformations. Participants are free to use either of their hands, and the system works properly even with visitors who wiggle their fingers, or who move and turn their hand—within certain limits. The software may produce unpredictable glitches if the visitor’s hand differs significantly from a flat palm-down or palm-up pose. Currently, the system’s behavior for postures like fists (in which many of the fingers are occluded) is undefined, as is multi-hand interaction.

Developing an interaction that can work with a very diverse public is always a challenge. The Augmented Hand Series accommodates a wide range of hand sizes, from children (of about 5 years old) through adults, as well as a very broad range of skin colors. The system also performs robustly with hands that have jewelry, nail polish, tattoos, birthmarks, wrinkles, arthritic swelling, and/or unusually long, short, wide or slender digits. Nevertheless, the system’s behavior for individuals with more or fewer than five fingers is presently undefined. There are many more kinds of hands to support, and doing so remains an area of active research for the project.


Visitor interactions at the Cinekid Festival (continued)

About twenty different transformations or scenes have been developed. Some of these perform structural edits to the hand’s archetypal form; others endow the hand with new dimensions of plasticity; and others imbue the hand with a kind of autonomy, whose resulting behavior is a dynamic negotiation between visitor and algorithm. The videos here present live demonstrations of ten of these scenes:

  • Plus One: The hand obtains an additional finger.
  • Minus One: The hand has one finger omitted.
  • Variable Finger Length: The fingers’ length changes over time.
  • Meandering Fingers: The fingers take on a life of their own.
  • Procrustes: All fingers are made the same length.
  • Lissajous: The palm is warped in a periodic way.
  • Breathing Palm: The palm inflates and deflates.
  • Vulcan Salute: The third and fourth fingers are cleaved.
  • Angular Exaggeration: Finger adduction and abduction angles are amplified.
  • Springers: Finger movements are exaggerated by bouncy simulated physics.
Minus One: The hand has one finger omitted.   Plus One: The hand obtains an additional finger.
Angular Exaggeration: Finger angles are amplified.   Vulcan Salute: The third and fourth fingers are cleaved.

Many more transformations are planned. The sketches below depict some of the current scenes, as well as some as-yet-unfinished scenes conceived during the project’s proposal phase.

sketches

sketches

About

The hand is a critical interface to the world, allowing the use of tools, the intimate sense of touch, and a vast range of communicative gestures. Yet we frequently take our hands for granted, thinking with them, or through them, but hardly ever about them. Our investigation takes a position of exploration and wonder. Can real-time alterations of the hand’s appearance bring about a new perception of the body as a plastic, variable, unstable medium? Can such an interaction instill feelings of defamiliarization, prompt a heightened awareness of our own bodies, or incite a reexamination of our physical identities? Can we provoke simple wonder about the fact that we have any control at all over such a complex structure as the hand?

Neo (Matrix), Legolas (LOTR), and Captain Willard (Apocalypse Now), staring at their hands.
Neo, Legolas, and Willard, in dissociative states, look to their hands for cues to ground their perception of reality.

We know that the interrelations of hand, mind and identity are far from simple. Persons with alien hand syndrome, for example, have hands which move independently of their conscious will, as if they belonged to another person. By contrast, amputees suffering from phantom limb syndrome continue to feel their missing hand as if it were still there; their discomfort is sometimes relieved with a mirror box, which uses the virtual image of their intact hand to trick the mind and retrain the brain. Within this framework, the Augmented Hand Series can be understood as an instrument for probing or muddling embodied cognition, using a ‘direct manipulation’ interface and the suspension of disbelief to further problematize the mind-body problem. We see evidence of our instrument’s powers in the actions of young visitors who, uncertain whether to believe their eyes, peek into the box to double-check what is (not) really happening to their hand.

hands_seuss_disney_gould_665x230

The Augmented Hand Series is positioned to prompt, like Dr. Seuss (I wish I had eleven, too!), an empathetic acceptance of difference, and a recognition that there are many ways to be. Or could have been. In his essay “Eight Little Piggies”, the eminent biologist Stephen Jay Gould considers why we are shaped the way we are, and concludes—based on fascinating fossil evidence from some of the first land animals, some 400 million years ago—that there is nothing special, inevitable, or even necessarily evolutionarily optimal about having five fingers on each hand. As Walt Disney discovered nearly a century ago, five fingers may not even be ideal for expressive gestural communication. From this perspective, the Augmented Hand Series is a work of participatory, transhuman biology-fiction that allows for the first-person exploration of these concepts.

Enraptured visitors

A wish to elicit and address wonder—our own, and others’—has been the deep common bond between Kyle, Chris and myself as we developed this project. Ultimately, as collaborators, our underlying personal motivations for developing the project are each quite different, but Kyle’s, which arise from his experiences as a lucid dreamer, are perhaps the most poetic:


Kyle discusses some of his motivations as a project collaborator.

A longer discussion of some of the ideas motivating the Augmented Hands Project can be seen in Golan’s presentation at the 2013 Eyeo Festival, beginning at 22’18”.

Recent Progress

Since this post was originally published last year, the team has created several new scenes, including fractal hands (by student Zach Rispoli), swapped thumbs, double thumbs, fewer knuckles, and extra knuckles. Here are some of these more recent experiments, straight from the depths of the uncanny valley. –Ed.

swapped-thumb


extra-knuckles


Artist Biographies

Golan Levin explores the intersection of abstract communication and interactivity. Blending equal measures of the whimsical, the provocative, and the sublime in a wide variety of media, Levin applies creative twists to digital technologies that highlight our relationship with machines, expand the vocabulary of human action, and awaken participants to their own potential as creative actors. At Carnegie Mellon University, he is Associate Professor of Electronic Art and serves as Director of the Frank-Ratchye STUDIO for Creative Inquiry, a laboratory dedicated to supporting atypical, anti-disciplinary and inter-institutional research projects across the arts, science, technology and culture.

Chris Sugrue is an artist and engineer who develops interactive installations, audio-visual performances and experimental interfaces. Her works experiment with technology in playful and curious ways and investigate topics such as artificial life, gestural performance and optical illusions. She has exhibited internationally in such festivals and galleries as Ars Electronica, Sónar Festival, Pixel Gallery, Medialab-Prado, Matadero Madrid, and La Noche En Blanco Madrid. She teaches new media arts at The Parsons School of Design in Paris.

Kyle McDonald works with sounds and codes, exploring translation, contextualization, and similarity. With a background in philosophy and computer science, he strives to integrate intricate processes and structures with accessible, playful realizations that often have a do-it-yourself, open-source aesthetic. He enjoys creatively subverting networked communication and computation, exploring glitch and embedded biases, and extending these concepts to reversal of everything from personal identity to work habits. Kyle is a member of F.A.T. Lab, community manager for openFrameworks, and an adjunct professor at the NYU ITP.

Credits & Acknowledgments

cc_by_nc_sa_88x31

logos of sponsoring & supporting organizations

The Augmented Hand Series was commissioned by the Cinekid Festival, Amsterdam, October 2014, with support from the Mondriaan Fund for visual art. It was developed at the Frank-Ratchye STUDIO for Creative Inquiry at Carnegie Mellon University with additional support from the Pennsylvania Council on the Arts and the Frank-Ratchye Fund for Art @ the Frontier.

The Augmented Hand Series would not have been possible without several open-source C++ addons generously contributed by others in the openFrameworks community: ofxPuppet by Zach Lieberman, based on Ryan Schmidt’s implementation of As-Rigid-As-Possible Shape Manipulation by Takeo Igarashi et al.; ofxLeapMotion by Theo Watson, with updates by Dan Wilcox; ofxCv, ofxLibdc, and ofxTiming by Kyle McDonald; ofxCvMin and ofxRay by Elliot Woods; and the ofxButterfly code for mesh subdivision, by Bryce Summers. Adam Carlucci’s helpful tutorial on using the Accelerate Framework in openFrameworks was also essential to achieving satisfactory frame rates.

The artists wish to specially thank Paulien Dresscher, Theo Watson, and the Eyeo Festival for their encouragement; and Dan Wilcox, Bryce Summers, Erica Lazrus, Zachary Rispoli, Elliot Woods, Simon Sarginson, and Caitlin R. Boyle for their assistance in realizing the project. We extend additional thanks to the entire openFrameworks community; the staff of the Frank-Ratchye STUDIO for Creative Inquiry; Golan Levin’s Electronic Media Studio students, who served as beta-testers; Tim Hoogesteger of Cinekid; Rick Barraza and Ben Lower of Microsoft; Christian Schaller and Hannes Hofmann of Metrilus GmbH; Dr. Roland Goecke of University of Canberra; and Doug Carmean and Chris Rojas of Intel.

The post Reach into the Uncanny Valley with the Augmented Hand Series appeared first on Leap Motion Blog.

CAD Experiment: Disassemble a Spherical Robot in VR


What if you could disassemble a robot at a touch? Motion control opens up exciting possibilities for manipulating 3D designs, with VR adding a whole new dimension to the mix. Recently, Battleship VR and Robot Chess developer Nathan Beattie showcased a small CAD experiment at the Avalon Airshow. Supported by the School of Engineering, Deakin University, the demo lets users take apart a small spherical robot created by engineering student Daniel Howard.

Nathan has since open sourced the project, although the laboratory environment is only available in the executable demo for licensing reasons. Check out the source code at github.com/Zaeran/CAD-Demo.

AirshowDemo1

What’s the story behind your 3D CAD demo?

I got my Leap Motion Controller at the start of 2014, and quickly went about making demos. My first major project was Battleship VR, which was intended to investigate UI interaction with finger tracking. It can still be found on Oculus Share, although it’s quite old now. Aside from this, my major work with Leap Motion is my game Robot Chess, a relatively simple game which allows you to use your hands to pick up and move around chess pieces as you play against a robotic AI opponent. I’ve also built a couple of smaller proof-of-concept pieces that people may have seen on the leapmotion or oculus subreddits.

AirshowDemo2

Last year, I finished my IT undergrad at Deakin University, majoring in software development. I’m currently doing my honours, researching the feasibility of VR and haptics for midwifery training, using Deakin’s upcoming Virtual Reality Lab. I’ll also be building fun demos for their CAVE (cave automatic virtual environment) throughout the year. My current goal is to create an immersive fishing simulator, for when uni gets a bit too stressful!

Do you have any UX or design tips for developers looking to create similar experiences?

That’s a pretty broad question. Overall, I’d say to keep it simple. You don’t need fancy gestures or over-the-top animations, as it just complicates things for the average user. When it comes to grabbing things, the biggest issue I found was having objects that were large enough to be grabbed accurately, while at the same time allowing the entire model to fit in the workspace. I opted not to use magnetic pinch though, so grabbing accuracy was a much bigger deal as the pinch location had to be within an object’s bounding box.
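
As a rough illustration of that bounding-box check (this is not the demo’s Unity code; the names, thresholds, and axis-aligned boxes below are hypothetical), a pinch-to-grab test might look like this:

```python
# Hypothetical sketch: grab a part only while pinching firmly, and only if the
# pinch point falls inside that part's axis-aligned bounding box.

from dataclasses import dataclass

@dataclass
class AABB:
    min_corner: tuple   # (x, y, z)
    max_corner: tuple   # (x, y, z)

    def contains(self, point) -> bool:
        return all(lo <= c <= hi
                   for c, lo, hi in zip(point, self.min_corner, self.max_corner))

def find_grabbed_part(pinch_point, pinch_strength, parts, threshold=0.8):
    """Return the name of the first part whose bounds contain the pinch point."""
    if pinch_strength < threshold:          # fingers not pinched firmly enough
        return None
    for name, bounds in parts:
        if bounds.contains(pinch_point):
            return name
    return None

if __name__ == "__main__":
    parts = [("shell_top",  AABB((-0.10, 0.20, -0.10), (0.10, 0.40, 0.10))),
             ("left_wheel", AABB((-0.30, 0.00, -0.10), (-0.20, 0.10, 0.10)))]
    print(find_grabbed_part((0.0, 0.3, 0.0), 0.9, parts))  # -> shell_top
    print(find_grabbed_part((0.0, 0.3, 0.0), 0.2, parts))  # -> None (no pinch)
```

The tension Nathan describes is visible here: larger boxes make grabbing forgiving, but the whole model still has to fit inside the tracked workspace.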

When it comes to menus, I’m a pretty big fan of making buttons that feel responsive when you touch them, so personally I’d suggest a simple fade-in/fade-out with a bit of movement, as you don’t want your menus to take focus away from the main piece.

What do VR and hand tracking each add to the experience of seeing and disassembling a thing in mid-air?

The VR side of things puts the model directly in front of the user in 3D space, so they can easily see its true scale compared to viewing diagrams, or a rendering on a 2D screen. We found that by being able to view and manipulate the model in VR, there is a better understanding of how an object is put together, and potentially how it functions, depending on the detail of the model.

AirshowDemo3

It also allows users to safely disassemble and observe objects which may be hazardous in the real world, or at high risk of breaking when being handled by inexperienced users. Furthermore, by using technologies such as VR and hand tracking, inexperienced users are able to easily pick up how to manipulate objects in the virtual space, and can learn how components fit together far more easily than by viewing a regular diagram.

What are the creative and design possibilities that a CAVE represents?

The most obvious benefit of a CAVE from a design perspective is the ability to walk around a 1:1 scale model of the object you are designing, potentially saving a lot of time and money by finding errors in the design before making physical prototypes. For example, you can walk through a building before it’s built, or test out the design of a new car without requiring clay modelling or scale models to be built. While these things can be done on consumer VR headsets, the CAVE provides a much higher graphical fidelity, and also benefits from being a multi-user experience.

Also, as CAVEs usually come with a form of tracking system (similar to motion capture), one concept I’m incredibly interested in is the ability to build and sculpt objects with your own hands at a high level of precision, instead of having to try and build something using a regular 2D interface and try to accurately model fine detail. Combined with a large-scale haptic device, you can actually reach out and touch a completely virtual object.

What will prototyping and design look like in 10 years?

The best example I can give is the short film World Builder by Bruce Branit.

From the moment I saw it, it’s been my go-to example of how CAD software will look in the future. It’s actually interesting to see that a few of the UI concepts in this film have inspired Leap Motion assets. All we’re really waiting for at this stage is a way to accurately track hand and finger movement in a space of a few cubic meters. Once that tech comes around, the floodgates will open, and fields such as modelling, design, and prototyping will become a lot simpler and more fun.

However, if I had to give specific points:

  • High-end 3D design studios will make use of fully virtual environments, even potentially dedicating entire rooms to VR and body tracking.
  • The ability to create and sculpt objects with your hands.
  • Collaboration – multiple people working in the same environment simultaneously.

What do you think of Nathan’s demo, and where will 3D design go next? Let us know in the comments, tweet Nathan @Zaeran, or check out his new website at nbvr.com.au.

The post CAD Experiment: Disassemble a Spherical Robot in VR appeared first on Leap Motion Blog.

Express Yourself! Augmenting Reality with Graffiti 3D


What if you could create art outside the boundaries of physics, but still within the real world? For artists like Sergio Odeith, this means playing tricks with perspective. Sergio makes stunning anamorphic (3D-perspective-based) art using spray paint, a surface with a right angle, and his imagination.

Creative 3D thinkers like Odeith should have the ability to use their freehand art skills to craft beautiful volumetric pieces. Not just illusions on the corners of walls, but three-dimensional works that people can share the same space with. This was what inspired me to create Graffiti 3D – a VR demo that I entered into the Leap Motion 3D Jam. It’s available free for Windows, Mac, and Linux on my itch.io site.

sergio_odeith2 sergio_odeith1 sergio_odeith3

Why virtual reality?

While anamorphic graffiti is designed to feel three-dimensional, it’s created without the use of 3D modeling tools – just a set of spray cans. I imagine this has something to do with the fact that common 3D mesh creation tools require abstract mouse and keyboard interactions to manipulate anything, and involve heavy interfaces like this:

graffiti3d_12

To me, the magic of VR technologies stems from how they enable us to naturally interact with 3D content.  Applications that capitalize on the strengths of seeing and moving in 3D to unlock useful and otherwise impossible functions (such as inspecting models you wouldn’t otherwise be able to see, manipulating weightless objects, interacting with 3D puzzles that couldn’t exist in real life, etc.) are more interesting to me than applications that aim to induce the most “presence” possible.

In short, Graffiti 3D was meant to give people a completely new ability rather than make the user feel as if they’re in someone else’s shoes. Using the Leap Motion Controller’s image passthrough, you can create something from nothing, right there in your living room:


An early creation from Patrik Jensen, before I started using the Hovercast menu system.

The VR art community

The more I look around, the more tinkerers and artists I find with the same (or similar) vision. Here’s a small list of recent projects that are aimed at making creation in 3D easier and more accessible using VR technologies:


Tiltbrush by Skillman and Hackett takes an interesting approach to input by utilizing the mouse to paint on 2D planes arranged in 3D space. Some recent rumors imply that they’ve added support for the 6DOF controllers included with the HTC Vive headset.


World of Comenius by Tomáš “Frooxius” Mariančík includes a 3D painting component as well as a sandbox with discrete building blocks for creating spaces as you inhabit them.

MakeVR by Sixense uses STEM controllers to enable a range of mesh manipulation. Similarly, VRClay and Virtual Clay use Razer Hydras to sculpt meshes in 3D. Gravity Sketch uses a custom hardware “AR tablet” as the input device – essentially a flat plane and stylus are combined with control sticks for rotation and translation.

A bunch of other indie projects are also exploring the space, including Paint 42, VRtist, Magic VR, Tagged in Motion, and Graffiti Markup Language. And of course, though it’s not a VR app, Leap Motion’s Sculpting deserves a mention. All of these experiments bode well for the future of 3D design as an intuitive and deeply creative space. Our tools are finally starting to catch up with our imaginations.

That’s all for today – I hope you’ll check out Graffiti 3D and share what you’ve created. In the next post, we take a closer look at the development behind the demo, and what’s in store for the future.

Next up: Building Graffiti 3D: A Journey through Space and Design

Epilogue: Graffiti 3D on Twitch!

Recently, Scott joined us on our Twitch channel to talk about the development and vision behind Graffiti 3D, and where the VR art community is going in the years ahead:

For cutting-edge projects and demos, tune in every Tuesday at 5pm PT to twitch.tv/leapmotiondeveloper.



The post Express Yourself! Augmenting Reality with Graffiti 3D appeared first on Leap Motion Blog.

Building Graffiti 3D: A Journey through Space and Design


In yesterday’s post, I talked about the need for 3D design tools for VR that can match the power of our imaginations. After being inspired by street artists like Sergio Odeith, I made sketches and notes outlining the functionality I wanted. From there I researched the space, hoping that someone had created and released exactly what I was looking for. Unfortunately I didn’t find it; either the output was not compatible with DK2, the system was extremely limited, the input relied on a device I didn’t own, or it was extremely expensive.

graffiti3d_10
Above: one of my early sketches.

When I started designing this project, I chose the PlayStation Move controller as my input device. It looked good on paper, but once I started working with it, I quickly realized that most of my time would be spent just trying to get the device to properly communicate with Unity and pair with the OS rather than having fun making stuff.

So I switched to the Leap Motion Controller and quickly got my hands in my application. Things went fast from here. All of the basic functions (drawing, changing color, changing size, and changing shape) were assigned to physical buttons in my original design, so I simply mapped each of those functions to a curl-and-release of different fingers on the user’s hands. I did this because detecting whether or not a finger is extended is simple and robust, and it worked fine in limited user tests (three people).
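
A minimal sketch of that curl-and-release trigger (in Python, not the project’s Unity code; the per-finger action table is only an example) could watch each finger’s extended flag from frame to frame and fire when a curled finger straightens again:

```python
# Illustrative sketch: fire an action when a finger goes from curled back to
# extended between frames. Thumb pinches (e.g. drawing) would be handled
# separately via pinch strength rather than finger extension.

def detect_releases(prev_extended, curr_extended):
    """Indices of fingers that just transitioned from curled to extended."""
    return [i for i, (was, now) in enumerate(zip(prev_extended, curr_extended))
            if not was and now]

# 0=thumb, 1=index, 2=middle, 3=ring, 4=pinky; example action table only
ACTIONS = {1: "cycle brush size", 2: "cycle brush shape",
           3: "cycle brush shape", 4: "cycle brush shape"}

if __name__ == "__main__":
    prev = [True, False, True, True, True]   # index finger was curled
    curr = [True, True, True, True, True]    # now everything is extended again
    for finger in detect_releases(prev, curr):
        print(ACTIONS.get(finger, "no action"))   # -> cycle brush size
```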

Pinching the thumb of the right hand triggered drawing, pinching the thumb of the left hand changed the color, curling the index finger changed brush size, and curling the middle/ring/pinky fingers changed the brush shape. Here’s what it looked like in regular usage:

After putting it out in the wild, I found out some people were running into issues like the triggers accidentally going off as they were moving their hands out of view. Users were also telling me that they needed access to the “secondary” functions like undoing and clearing the canvas while away from the keyboard, which posed a much bigger problem because my system couldn’t accommodate an arbitrary number of settings.

I knew I needed to adopt a menu system of some sort. The new Arm HUD Widget by Leap Motion looked good, but I knew it wouldn’t be released for some time. Then I discovered Hovercast.

Hovercast

Hovercast opened up a world of possibilities within my application. Before integrating it, I didn’t quite appreciate how much the settings interactions impacted the utility of the application. Under the old paradigm, users were expected to specify drawing colors in a text file before opening the application (so they could cycle through them by curling their left thumb). As a result, the colors rarely got modified by anyone; users just seemed to default to whatever presets came with the app. Now people can control the precise output color while in the game using sliders in Hovercast (that I mapped to hue, saturation, and brightness).
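
The slider-to-color mapping itself is just an HSV-to-RGB conversion. Here is a minimal sketch using Python’s standard colorsys module (the demo itself is built in Unity; this is only to show the mapping):

```python
# Minimal sketch: map three 0..1 slider values (hue, saturation, brightness)
# to an 8-bit RGB drawing color using the standard-library colorsys module.

import colorsys

def sliders_to_rgb(hue: float, saturation: float, brightness: float):
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, brightness)
    return tuple(round(c * 255) for c in (r, g, b))

if __name__ == "__main__":
    print(sliders_to_rgb(0.0, 1.0, 1.0))    # saturation/brightness maxed on red
    print(sliders_to_rgb(0.33, 0.5, 0.8))   # a softer green
```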

graffiti3d_11

The novelty of seeing the menu at your fingertips seemed to be enough to entertain some people. They would sit and happily poke buttons for minutes before trying to draw anything.

Lessons

I’ve learned so much since I jumped into this project that it’s hard to think of the most important lessons along the way. A lot of them are personal realizations surrounding marketing, versioning, QA testing, and generally being a “one man shop” for this application. For example, I released an “update” early on that horribly crippled the framerate of the application. After that, I started using the Unity performance analyzer to look at the number of draw calls made each frame during high load before releasing builds.

Some of the most fun lessons I’ve learned so far have related to the basics of 3D drawing in general. I’ve learned that my usual approach to drawing stuff in 2D (using outlines of cross sections of the object I want to depict) is inadequate for 3D. I have realized that I have to think in volumes more than outlines. For example, here’s how I would normally draw a hand in 2D:

graffiti3d_8

Looks great! But if I drew this in Graffiti 3D, I’d end up with something like this:

graffiti3d_4 graffiti3d_2

My initial expectation was that the best way to draw precisely in 3D would be to start with a cross-section like usual, and then draw perpendicular cross sections to “fill out” the volume. That isn’t too bad, but it’s extremely labor intensive and ends up looking a bit shaky, sloppy, and inconsistent.

graffiti3d_3 graffiti3d_1 graffiti3d_9 graffiti3d_5

The above hand didn’t turn out too bad (although my initial outline could have been much better). However, it took me a few minutes of careful drawing to create it and the result was very scribbly and noisy. If, instead of focusing on cross sections, I focused on filling the space that would be taken up by the fingers and thickness of the hand, I could produce something much better looking much faster. This is what I did in about 30 seconds with a brush the width of the fingers (so each finger is one brush stroke):

graffiti3d_7 graffiti3d_6

A bit better, and much faster than the first one. I’m not used to thinking like this for 2D art because I don’t have dynamic control over the width of the pencil tip when I’m doodling on paper. Paintbrushes of varying thicknesses are probably the best 2D analog to this.

I’m still not an expert at drawing in 3D, but I’m getting a lot better. There’s a lot of new muscle memory that is required for making the most clean, expressive strokes and “gorilla arm” is a challenge early on. My arm is getting less tired the more I use it though.

Some of the issues I have with doing detailed work are due to limitations in the current generation of Leap Motion hardware – the tracking volume is small, and hand pose estimation breaks up near 3D objects in space – but it is surprisingly precise, low latency, and solid. Other issues are due to limitations in my human hardware; sometimes I just can’t hold my arm still enough to get super fine details correct. I think over time with practice that will get easier.

Designing with Leap Motion

I’ve learned some good lessons about working with the Leap Motion Controller while developing this project. Overall, the process of getting up and running with the device was very straightforward, because the Unity assets can be dropped into a project and immediately start working, and the documentation is great. Where I did run into issues was in utilizing more experimental functions, such as the passthrough quads.

When I began working with passthrough mode in my application, I ran into some issues properly sizing and placing the passthrough quads (the surfaces that display the sensor image to each of the user’s eyes) so that the image aligned with the user’s in game hands. I eventually solved the problem after some trial and error, and I hope my forum post about it helps other people avoid wasting time to figure it out.

Passthrough mode also occasionally caused the user’s virtual hands to misalign with the sensor image. It was one of those bugs that would rear its head at the worst times (such as while demoing to friends/family) and then disappear later when I was trying to reproduce it. Ultimately, someone pointed me to a thread on the Leap Motion forums illuminating that this problem is caused by “robust mode” kicking in sometimes and the resolution of the sensor image changing, so the solution was just to disable robust mode in the Leap Motion Control Panel. (Also, don’t forget to enable “allow images” before experimenting with AR passthrough, or you’ll end up just seeing a solid gray color when you enable the passthrough quads!)

What’s next?

This is a huge design space, and the tricky part is choosing what to work on next. Users have suggested things ranging from refining the tool to be more like “the Photoshop of 3D” to incorporating the drawing function into some sort of game mechanic like this.

At this stage, I’m most interested in keeping the application simple and focused on freehand 3D art while making it more useful and breathtaking. The dynamic mesh generation algorithm isn’t as smooth as it could be, but I’ve avoided a lot of post-processing for now in the interest of performance. While users can currently only paint with different colors of a “toon shader” material (meant to mimic cel shading), you might imagine painting with custom materials including wood, porcelain, glass, metal, drippy goo, rock, smoke, skin, grass, etc.
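
As a rough, engine-agnostic sketch of what dynamic stroke meshing involves (this is not the demo’s actual algorithm), here is one way a list of sampled stroke points could become a flat ribbon of triangles; a real brush would sweep a full 3D cross-section oriented along the stroke and the camera:

```python
# Rough sketch (not the demo's Unity code): turn sampled stroke points into a
# flat ribbon mesh by offsetting each point sideways by half the brush width.

def ribbon_from_stroke(points, width=0.05):
    """points: list of (x, y, z) samples along the stroke.
    Returns (vertices, triangles), where triangles index into vertices."""
    half = width / 2.0
    vertices, triangles = [], []
    for x, y, z in points:
        # Offset left/right along a fixed axis; a real brush would derive the
        # offset from the stroke direction and the viewer's orientation.
        vertices.append((x - half, y, z))
        vertices.append((x + half, y, z))
    for i in range(len(points) - 1):
        a, b, c, d = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
        triangles += [(a, b, c), (b, d, c)]   # two triangles per segment
    return vertices, triangles

if __name__ == "__main__":
    stroke = [(0.00, 1.00, 0.5), (0.02, 1.01, 0.5), (0.05, 1.03, 0.5)]
    verts, tris = ribbon_from_stroke(stroke)
    print(len(verts), "vertices,", len(tris), "triangles")  # 6 vertices, 4 triangles
```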

The possibilities really get crazy when you consider all of the effects and structures that can be generated procedurally in a game engine like Unity. Users could release glowing worms that oscillate in the direction of the instantaneous velocity of their movement and pulse colors to the music they’re listening to. Or fill a volume with vegetation that grows over time. Or use a “city” brush to draw roads and watch buildings grow around them.

Following  this train of thought, this tool could be useful for large scale map/space creation in addition to small-scale sculpting. Projects like Paint 42 that include an option to enlarge/resize meshes have me extremely excited because they demonstrate the architectural possibilities of this idea. Users can draw small columns and turn them into skyscrapers with the push of a button. I don’t think that’s ever been possible for anyone but the lucky few designers who had access to good VR tech before this recent boom occurred.

Vitally, Graffiti 3D needs to be freed from the cables and positional head tracking camera FOV. Walking freely around a piece, in a fully interactive AR room, unobstructed by tracking range or cables, is going to feel like something else. From there, people’s creations need to be tied to the real world. We should be able to freely modify our spaces by leaving virtual 3D material anywhere. If I’m walking through a park and I think a gazebo or sculpture would be beautiful somewhere, I should have the ability to draw it there for anyone to see.

These are just a few of the thoughts swirling in my head about what the future could hold for this application (and similar ones). It’s an exciting time to be alive.

Epilogue: Graffiti 3D on Twitch!

Recently, Scott joined us on our Twitch channel to talk about the development and vision behind Graffiti 3D, and where the VR art community is going in the years ahead:

For cutting-edge projects and demos, tune in every Tuesday at 5pm PT to twitch.tv/leapmotiondeveloper.



The post Building Graffiti 3D: A Journey through Space and Design appeared first on Leap Motion Blog.

Fingerpainting Soundscapes: Muse for Leap Motion and the Berklee Symphony Orchestra


Leap Motion soloist? It’s not as strange as it might sound at first. At a recent performance of the Berklee Symphony Orchestra, Muse co-creator Dr. Richard Boulanger played alongside classical horns and strings – in a composition specially written for his virtual musical instrument.

Available for Mac and Windows on the Leap Motion App Store, Muse is the brainchild of Boulanger’s friend and long-time collaborator BT, a Grammy-nominated composer who wanted to build tools that could match his imagination. We asked Dr. Boulanger about the Muse project and what it was like to bridge the digital and analog worlds of music with Symphonic Muse. We’ve also included some really cool videos from Dr. Boulanger’s students, who often develop with the Leap Motion Controller for their thesis projects.

When did you start experimenting with alternate musical interfaces?

We often think of the computer as an “appliance” or a “virtual recording studio” or a “versatile and mutable production tool,” but to me that sells quite short the potential of the most beautiful and soulful “instrument” of our age. I’ve been working with synthesizers and computers since the early seventies, when ARP Instruments founder Alan R. Pearlman commissioned my first symphony. Given that I am a classical and folk guitar player, he also gave me one of their Avatar Guitar Synthesizers that I used in many performances and concerts.

All of these early electronic instruments allowed for some sort of “alternate” form of control – with microphone inputs, and envelope followers, and pitch-to-voltage and pitch-to-MIDI converters. But it wasn’t until my PhD research work on SoundFile-Convolution at the UCSD Computer Audio Research Lab that I truly began working with 3D gestural controllers.

At UCSD, I became good friends with Dr. Max Mathews from Bell Labs – considered “the father of computer music.” My thesis advisor and boss, F. Richard Moore, was Mathews’ assistant at Bell Labs when working on the GROOVE system. I spent a lot of time with him, and composed an international-award-winning composition for the electronic violin:

After that, Max and I became great friends and lifelong collaborators. He built me many custom versions of his wonderful Radiodrum and Radio Batons. These are very much like working with the Leap Motion Controller. Alternative controllers allow us new and intuitive ways to “play” the computer; to play sounds themselves and reveal, control, and communicate their inner life and beauty.

With intuitive and expressive controllers like the Radio Baton or Leap Motion, we can do so much more than “press play” on our computers; we can actually learn to play them and use them as an extension of our inner selves.


Leap Motion + Processing by student Chatchon Srisomburanonant

How did you get involved with the A3E conference and the Berklee Symphony Orchestra?

One of my former Berklee Music Synthesis students was helping to organize a new conference that would reflect a new, emerging paradigm – one in which artists, performers, and composers are collaborating closely with developers to innovate and invent the apps, technology, and audio-art of the future. They knew of my work on Muse with BT and felt that this truly represented the new innovation model that they were focusing upon. So they asked me to do one of the keynotes with BT and to focus on Muse. They were also planning a big concert in the Berklee Performance Center that would feature innovative composers/developers/performers.

Berklee asked if I would like to compose something using Muse for The Berklee Symphony Orchestra under the direction of Francisco Noya (and featuring the principal French Horn player from the Boston Symphony, Gus Sebring) to open the concert. I had been writing short chamber pieces for Muse and cello as well as Muse and voice, and had performed them in Boston, NYC, and Spain. But this was my chance to really push the envelope. Both BT and I envisioned that Muse would be a great tool for film and TV composers and so I wanted to write something that would hopefully inspire that creative community.

What is it like to integrate Muse into a live symphony?

It’s a unique challenge to integrate any electronic instrument, track, or sonic element into a live performance. Getting the levels right between the live and the electronic, keeping the live instruments and the electronics in time with each other, and keeping the live instruments and the electronic instruments in tune with each other – these are just a few of the challenges that come to mind.

Muse and Leap Motion address these potential issues because the controller actually lets me “play” Muse in time. We have controls in the system that allow me to change the key and volume on the fly. Also, we have several “presets” that I could call up in the performance that bring in new samples, new soundfiles, new synthetic sounds, and new sets of arpeggiated chords. Finally, if I need to, I can totally replace any of the built-in sounds, notes, and chords. Everything is in place in the program so that I can follow, lead, or blend in. There’s enough harmonic and sonic variety in each screen and in the three preset soundsets to allow me to tell a pretty nice story musically.

This was an underlying goal in the design of Muse. I wanted a program that would be easy to play so that my granddaughters and my mom and dad could have fun with it, no matter what they did. But it also needed to present the musical intelligence that would appeal to my Berklee students and my colleagues around the world.

How did Muse inspire the composition?

Symphonic Muse started with Muse – it’s not a composition that I wrote in advance and then added some electronic sound effects to afterwards. Muse has the built-in ability to “record” the user’s improvisations and performances. I would practice with the program. I would record all my improvisations and practice sessions. Finally, I created an underlying framework for the piece in Muse alone, and dropped that audio file into a track in my digital audio workstation (DAW), Logic Pro. Around this framework, I began to write the orchestral parts and develop some of the themes. One could almost think of the orchestral parts of Symphonic Muse as the “accompaniment” for a piece that I composed in Muse itself.

What role has Leap Motion played in your curriculum with your students at Berklee?

I have two classes that focus on the Leap Motion – Circuit Bending and Physical Computing, and Composing and Performing on Wireless Mobile Networks and Devices. It’s often the case that when students are doing their final thesis projects with me, they continue to use and develop for Leap Motion. (Their project videos are sprinkled liberally throughout this post! —Ed.)

Another member of the Muse development team, Tom Zicarelli, uses the Leap Motion Controller in his DSP, Max/MSP, and Jitter classes – encouraging the students to build interactive processing and audio-reactive visual systems. (I should also mention Paul Batchelor and Christopher Konopka, Berklee Electronic Production and Design graduates, for their essential contributions.)


Leap Motion, iPad, and Csound-Based Audio FX Processor by student Nicholas Martins

How do you envision this integration evolving over time as the technology expands?

I sponsor concerts each semester that feature the students performing with gestural devices – commercially available ones like the Leap Motion, Nintendo WiiMote, or HotHands, or even built-in video cameras. I also collaborate with faculty in the Berklee Music Therapy department. My students and I have developed many “hands-free” and “smart-systems” for them to use in clinical settings.

That work has been incredibly inspiring to all of us. It’s life-changing to use the Leap Motion Controller to release the inner music of a severely handicapped child and let them play together with each other and other musicians. There is a huge future in this area as well. I think that our understanding of the healing role of music is changing and that these technologies will be the key that will unlock many new breakthroughs.

What drives you to believe in gesture control as a compelling vehicle for musical composition and performance?

Gesture controllers let me literally bring the computer into the chamber ensemble, choir, and orchestra as the solo or ensemble instrument of the 21st century. I am not using the keyboard to replace the live trumpet section or string section from the recording session on stage. Rather, I am trying to add new colors, new roles, and ultimately take music into new areas.

From the first moment that I had my hands on a modular synthesizer back in 1969, I have always dreamed of being able to “finger-paint” soundscapes – to sculpt and shape sounds in a very intuitive and fluid way. The Leap Motion Controller, especially combined with the underlying power of Csound, makes this possible today. It will take me the rest of my life to fully develop the repertoire that shows how powerful, beautiful, tender, passionate, and dramatic SoundArt or AudioArt can be. I continue to work toward that end – on the app level, on the design level, and on the musical level.


Leap Motion + Csound by student Mark Jordan-Kamholz

Your work pays reverence to the past, the present, and the future in terms of the disciplines and technologies that weave into it. In what ways do you emphasize the importance of straddling temporality to your students?

I was educated and trained as a classical composer and performer, and did a lot of performing in the NEC chamber chorale with the Boston Symphony. This culminated, in some way, with my singing Beethoven’s 9th at Carnegie Hall under the great Seiji Ozawa. But when growing up, I also played in a lot of bands, in a lot of clubs and coffeehouses, and at a lot of weddings. And they were great too!

I’ve always loved all sorts of music and all styles of music. That’s why I think I have been a pretty good fit at Berklee, where I’ve been on the faculty for more than 28 years now. It always made me happy to play, sing and share, through music, my joy for life and my songs. Now, as controller, video, audio, and sensor technology advances, I am able to share some of my other visions about sound and performance – through apps like Muse.

Instead of listening to a song or track of mine on iTunes, with the Leap Motion Controller and the Muse app, anyone is able to compose and perform and capture their own songs that are, in some way, influenced by my aesthetic. In a way, through Muse and this expressive interface, users are collaborating with BT, Dr. B., TomZ, TomS, ChrisK, and PaulB on “our” compositions – and that is quite exciting and quite new.

Are there any new Leap Motions apps on the horizon from Boulanger Labs?

We are developing a Leap Motion app called Catch Your Breath at Boulanger Labs that will allow the user to record (or import) audio and transform it in dramatic ways by opening and closing their fingers and moving their hands left, right, up and down. It’s going to be very fun – like an audio version of the Fun House Hall of Mirrors. Stay tuned.

Dr. Richard Boulanger (a.k.a. Dr. B.) has conducted research in computer music at Bell Labs, CCRMA, The MIT Media Lab, Interval Research, Analog Devices, and IBM. He is now a Professor of Electronic Production and Design at Berklee. Learn (and listen to!) more at boulangerlabs.com or his Vimeo channel.

The post Fingerpainting Soundscapes: Muse for Leap Motion and the Berklee Symphony Orchestra appeared first on Leap Motion Blog.

Planetarium’s Source Code is Now Public on GitHub


This week, we’re happy to announce that the source code for Planetarium is now available on GitHub. It’s been an incredible project so far, and our team is excited to continue developing our core Widgets for VR experiences.

In the Twitch episode at the top of this post, Daniel and Barrett talk about the development process behind Planetarium – including the challenges of VR UX and UI development, how we built the planetarium and foundational Widgets, designing Arm HUD and Widget scaffolding, our roadmap for the future, and more.

Want to dig even deeper? Be sure to check out the team’s recent Developer Diaries series, starting with Introducing Planetarium: The Design and Science Behind Our VR Widgets Showcase.

The post Planetarium’s Source Code is Now Public on GitHub appeared first on Leap Motion Blog.


VR Trauma and 3D Design @ LA Hacks 2015


This weekend, Team Leap Motion made the trip from San Francisco to join over 1,500 students at Pauley Pavilion for LA Hacks. Amidst the sleeping bags, Red Bulls, and bleary-eyed jam sessions, we watched as hundreds of hacks came to life. Here are just a few of the highlights from the weekend:

Patient[n]: A Case Study in Autonomy

patientn1 patientn13

Play as Patient[n] undergoing post-trauma therapy in a medical rehabilitation center. The demo lasts roughly six minutes and features some dark twists.

LeapCAD

mayaleap

3D design always gets us excited at Leap Motion, and this experimental Autodesk Maya 2015 integration made a big impression.

Map Motion

mapmotion

By integrating the LeapJS and Google Maps APIs, Team Map Motion was able to bring a variety of hand gestures and features to exploring the world.
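
As a purely speculative sketch (this is not Team Map Motion’s code), the heart of such an integration is turning frame-to-frame palm movement into pan and zoom commands, which a LeapJS frame callback would then hand off to the Maps API:

```python
# Speculative sketch (not Team Map Motion's hack): convert palm movement
# between frames into map pan/zoom commands. In a real build, the pan offsets
# and zoom delta would be applied through the Maps API on each tracking frame.

def hand_to_map_commands(prev_palm, curr_palm, pan_gain=4.0, zoom_gain=0.02):
    """Palm positions are (x, y, z) in millimeters, Leap-style coordinates.
    Returns (pan_px, zoom_delta): pixel pan offsets and a zoom-level change."""
    dx = curr_palm[0] - prev_palm[0]     # left/right  -> pan east/west
    dz = curr_palm[2] - prev_palm[2]     # toward/away -> pan north/south
    dy = curr_palm[1] - prev_palm[1]     # raise/lower -> zoom in/out
    pan_px = (round(dx * pan_gain), round(dz * pan_gain))
    zoom_delta = dy * zoom_gain
    return pan_px, zoom_delta

if __name__ == "__main__":
    prev, curr = (0.0, 200.0, 0.0), (10.0, 205.0, -5.0)
    print(hand_to_map_commands(prev, curr))   # -> ((40, -20), 0.1)
```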

Want to see more hacks from this weekend’s competition? Check out the full suite of hacks and demos on ChallengePost, or see some of our highlights on Twitter.

The post VR Trauma and 3D Design @ LA Hacks 2015 appeared first on Leap Motion Blog.

Technicolor Absurdity and Bullet-Riddled Mayhem: Blue Estate is Back!

Earlier today, indie studio HE SAW launched the full version of Blue Estate, the darkly funny rail shooter based on the critically acclaimed comics series. Featuring hours of new gameplay, new enemies, and the most ridiculous mob bosses you’ve ever seen, the game is now available on PC for the Leap Motion Controller on our App Store.

DOWNLOAD NOW

With its wicked wordplay and gunpowder atmosphere, Blue Estate Prologue made a big first impression when it launched for the Leap Motion platform in 2013. Since then, HE SAW has continued to build on their adaptation of the Eisner Award-nominated comics series. With releases for Xbox One and PlayStation 4, Blue Estate is making its big return to the PC with a fully realized game.

BE-Banner-1200x628-5

Across seven levels of mayhem, players fight through a variety of rival gangs and outlandish locations. The story is about Tony Luciano, the son of a powerful Mafia don who starts a war with a rival gang after his girlfriend is kidnapped. The situation quickly gets out of control when Eastern European mobsters steal a million-dollar ransom for Tony’s father’s favorite racing horse. Along the way, a dishonorably discharged ex-navy SEAL is brought in to resolve the situation – no matter who has to die.

Motion controls are a fundamental part of the Blue Estate experience, as you shoot your enemies by pointing at them, take cover by spreading out your fingers, and reload by swiping your finger down.
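
That control scheme boils down to a small per-frame gesture classifier. Here is a toy sketch with made-up features and thresholds (it is not the game’s actual code):

```python
# Toy sketch (made-up thresholds, not Blue Estate's code): classify the three
# gestures described above from simple per-frame hand features.

def classify_gesture(extended_fingers, finger_spread_deg, vertical_velocity_mm_s):
    """extended_fingers: count of extended fingers on the aiming hand.
    finger_spread_deg: angle between the outermost extended fingers.
    vertical_velocity_mm_s: palm velocity, negative when moving downward."""
    if vertical_velocity_mm_s < -600:                  # fast downward swipe
        return "reload"
    if extended_fingers >= 4 and finger_spread_deg > 40:
        return "take cover"                            # open, spread-out hand
    if extended_fingers == 1:
        return "aim/shoot"                             # pointing with one finger
    return "idle"

if __name__ == "__main__":
    print(classify_gesture(1, 0, 0))        # -> aim/shoot
    print(classify_gesture(5, 55, -100))    # -> take cover
    print(classify_gesture(2, 10, -900))    # -> reload
```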

Next Tuesday at 10am PT, the creators of Blue Estate will be featured on our Twitch channel for a live interview and play session. (We’ll also be rebroadcasting the episode at our regular 5pm time.) This is a unique opportunity to dive into the minds of game developers who are exploring the limits of motion controls for the first-person shooter genre. Tune in and sign into Twitch to ask your questions in the live chat!

BE-Banner-1200x628-2

Epilogue: Twitch TV

For your viewing pleasure, here’s our feature interview with Viktor Kalvachev:

For cutting-edge projects and demos, tune in every Tuesday at 5pm PT to twitch.tv/leapmotiondeveloper.



The post Technicolor Absurdity and Bullet-Riddled Mayhem: Blue Estate is Back! appeared first on Leap Motion Blog.

Leap Motion vs. Gloves: New Medical Study


From drinking your morning coffee to turning off the lamp, you use your hands thousands of times a day. It’s easy to take for granted – until your hands don’t cooperate. To help people rehabilitate from strokes and hand tremors, doctors and researchers are doing some really amazing things with off-the-shelf hardware.

In a recent presentation for the Society for Neuroscience Conference, three researchers from UCSF stacked the Leap Motion Controller up against two different data gloves as tools for assessing stroke patients. They believe that the Leap Motion Controller could play a key role in how doctors diagnose and treat a variety of brain disorders – even during live surgery.

Leap Motion vs. Gloves

For this study, the researchers wanted to see which technology would be best at measuring joint angles at the knuckle between the hand and the finger – what’s known as the metacarpophalangeal (MCP) joint. The study compared three different devices that provide fingerbone-level tracking: the Leap Motion Controller, 5DT Data Glove, and DG5 VHand 3.0.

data-gloves

At the beginning of the study, users were videotaped wearing each device, moving their hands to a variety of angles. At the same time, a behavioral program sampled the entire finger movement. Later, the recordings were used to calculate the actual angle of the MCP joint, and the researchers designed a Flappy Bird-style game that users were able to try with each device.
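To give a sense of the math involved: the MCP angle can be estimated as the angle between the direction of the metacarpal and the direction of the proximal phalanx. Here's a minimal sketch of that calculation, assuming you already have those two bone direction vectors from whichever device is doing the tracking – the function and example values are ours, not the study's.

```ts
// Minimal sketch: estimate the MCP joint angle as the angle between two bone
// direction vectors (metacarpal and proximal phalanx). Purely illustrative;
// the study's own analysis code is not shown here.
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const norm = (a: Vec3) => Math.sqrt(dot(a, a));

/** Angle in degrees between the metacarpal and proximal phalanx directions. */
function mcpAngleDegrees(metacarpalDir: Vec3, proximalDir: Vec3): number {
  const cos = dot(metacarpalDir, proximalDir) / (norm(metacarpalDir) * norm(proximalDir));
  // Clamp to guard against floating-point drift outside [-1, 1].
  return (Math.acos(Math.min(1, Math.max(-1, cos))) * 180) / Math.PI;
}

// Example: a finger bent 90 degrees at the knuckle.
console.log(mcpAngleDegrees([0, 0, -1], [0, -1, 0])); // ~90
```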

While you can see their full findings in this poster that they developed for the conference, here’s a summary of their experience from the study:

performance-contrasts

According to Dr. Jason Godlove, a postdoctoral scholar in the UCSF School of Medicine’s Neurology department:

“While ultimately we were unable to use the device with our stroke patients because their hands were too clenched, Leap Motion was instrumental in quickly and easily designing the software. If you want to look at subjects with abnormally closed hands or working with grasping large objects, go with the 5DT data glove (the more expensive option). If you are looking at hand tremor, rehabilitation software design, or just about everything else, Leap Motion is the way to go.”

What’s Next: Live Diagnosis During Surgery

While Leap Motion technology is already being used in operating rooms for navigating medical imagery, the team at UCSF wants to explore its diagnostic potential during live surgery. Specifically, to measure the hand tremors of Parkinson’s patients and compare them to brain activity:

“The goal is to record from the brain while the subject moves their fingers and makes specific hand gestures, to learn about how the brain encodes hand kinematic information. Leap Motion is key to the study because during human surgeries, researchers only have about 10 minutes to set up and perform the research task. We need a device that is able to record finger joint angles and hand kinematics accurately.

Since Leap Motion doesn’t require any calibration or wearable apparatus, it’s the ideal device to use in the surgery room, because it doesn’t create much disruption during the actual surgery.”

Where do you think hand tracking could make a real difference? Let us know in the comments.

Image credit: Unknown

The post Leap Motion vs. Gloves: New Medical Study appeared first on Leap Motion Blog.

What Would a Truly 3D Operating System Look Like?


Hand tracking and virtual reality are both emerging technologies, and combining the two into a fluid and seamless experience can be a real challenge. This month, we’re exploring the bleeding edge of VR design with a closer look at our VR Best Practices Guidelines.

Jody Medich is a UX designer and researcher who believes that the next giant leap in technology involves devices and interfaces that can “speak human.” In this essay, she asks how a 3D user interface could let us unlock our brainpower in new ways.

As three-dimensional creatures, humans need space to think. Many parts of our brains contribute spatial information to a constantly evolving mental map of our surroundings. This spatial memory enables us to understand where one object is in relation to another and how to navigate through the world, and it provides shortcuts through spatial cognition. In turn, this frees up more working memory or short-term memory – the faculty that provides temporary storage and processing power for the task at hand.

Why Space?

Spatial Semantics. Physical space allows users to spatially arrange objects in order to make sense of data and its meaning, thereby revealing relationships and making connections. Imagine a furious ideation sticky-note session. As participants add data to the wall, sticky notes appear in thematic groupings spatially across the board. Up close, we can see the individual interrelated data points. From a step back, we gain perspective on the overall structure of information. The way the space is organized provides the semantic structure we need to make sense of the information. This is true for sticky notes as well as for our rooms, our homes, our cities, and the world at large.

ux-design-864

External Memory. Using space as external memory compensates for humans’ limited working memory, letting people see more detail and keep information accessible and visually available. The note to buy milk on the fridge, the family photos stuck in the mirror, and the “must remember” items placed near the car keys are all examples of spatial external memory.

Dimension. Without thinking, we can immediately tell the difference between two objects based on dimension and other cues. Through their dimensionality, we can innately understand information about either object without having to use much working memory in the process.

Problem: 2D Computing is Flat

With modern operating systems, interaction designers create shells based on a “magic piece of paper” metaphor. Essentially, this means that the OS works like a series of 2D planes that switch, slide, or blend into each other.

Because there is no spatial memory – no spatial cognition of the digital space – the user must expend their precious working memory.

Unfortunately, this creates a very limited sense of space and effectively prevents the development of spatial cognition. While smartphones and tablets have made attempts at spatial organization systems with “carousels,” the map space is limited and does not allow for productivity scenarios. For instance, I cannot work on large presentations or content creation on a tablet, as the OS is not extensible to those types of tasks.

Contemporary desktop shells are even more nebulous and do not provide opportunities for spatial cognition – forcing users to spend working memory on menial tasks. Organization is chiefly based on filenames rather than spatial semantics, while properties are mapped only in one dimension at a time. This makes it impossible to tell the difference between items based on multiple dimensions, and severely limits opportunities to visually sort, remember, and access data.

In practice, this complete lack of spatial mapping demands cognitive-heavy task switching from users. Because there is no spatial memory – no spatial cognition of the digital space – the user must expend their precious working memory. It is up to the user to understand how the data has been structured and how to retrieve it. Each user must develop workarounds to quickly access files and move between seemingly related tasks (e.g. alt-tab, naming conventions, etc.).

As a result, every interaction with the OS is an interruption, often requiring many traversals to achieve a goal. These include:

  • Launching a new app
  • Closing an app to move to another activity
  • Finding an item
  • Accessing the file browser
  • Changing windows across apps
  • Actions that cause a new window/screen in an app
  • Notifications/conversations

These interruptions are extremely costly to productivity and flow. Throughout the workday, the average user switches tasks three times per minute, and once distracted, it may take anywhere from 30 seconds to half an hour to resume the original task. If every OS interaction represents an interruption, whether great or small, imagine how much collective time is lost to overcoming technological interfaces.

Opportunity: Bringing Spatial Cognition into VR

Based on Hick’s Law (1952), any interface is vastly improved by reducing the number of choices, thereby improving the signal-to-noise ratio [expressed as T = b · log2(n + 1)]. Likewise, reducing traversal time between objects will naturally improve efficiency (Fitts’s Law). With the rise of augmented and virtual reality (AR/VR), this can finally be achieved by providing opportunities for spatial cognition.
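For reference, here are the two laws in their usual textbook forms (the constants a and b are empirically fitted, and the Fitts’s Law version shown is the common Shannon formulation):

```latex
% Hick's Law: decision time grows with the log of the number of choices n.
T_{\text{Hick}} = b \, \log_2(n + 1)

% Fitts's Law (Shannon formulation): movement time grows with distance D
% to the target and shrinks with target width W.
T_{\text{Fitts}} = a + b \, \log_2\!\left(\frac{D}{W} + 1\right)
```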

AR/VR is inherently spatial, offering a much larger and richer surface for the spatial arrangement of tasks. And spatial memory is free – even in virtual worlds.

Even now, we are seeing markedly increased productivity on larger screens, which allow users to spatially arrange tasks. Czerwinski et al. demonstrated that spatial tasks were significantly improved for women on displays with large fields of view, with AR/VR providing the ultimate open space.

In general, the more space users have available, the more windows and tabs they can open; multi-tasking with a turn of the head rather than a cognitively heavy interaction with the OS. As Andrews et al. point out, “comparisons can be done visually, rather than relying on memory and imperfect internal models.” Meanwhile, Ball et al. proved that physical interaction further improved the way users understand and use virtual space, just as it does in real environments.

So how do we accomplish this? Let us start by building on a digital property that already has an underlying spatial system: the browser.

Desktop Space and the Browser

The modern Internet browser is a digital task-switching haven, designed to allow users to access and explore vast amounts of content. For that reason, it already has a baseline spatial structure, built on the tab (resembling a card) and window (deck of cards).

Spatial Semantics

Users combine tabs spatially, grouping like tabs into windows. Within groupings, certain tabs, such as search engines, social networks, and content providers, act as launchers for new tabs. These tabs load to the right of the launcher tab, but before the next launcher tab – creating a spatial structure from left to right, with the tab generators as landmarks. The resulting spatial map provides a sort of timeline, and a method for keeping track of content, as tabs allow users to:

  • spatially arrange their content
  • put aside a piece of content to revisit later
  • set reminders for necessary tasks/activities
  • keep their place in a document, even when branching from the initial window
  • engage in parallel browsing across multiple tabs while maintaining multiple back stacks (each tab has its own history)
  • group similar tasks and tabs for sub-tasks (e.g. one window with multiple social networks or emails open)
  • leave pages open for a long time over multiple sessions with the intention of returning to them
  • use greater screen space to open more tabs

The tab was a major step forward in the evolution of the browser, largely replacing the Back button and opening up new possibilities for content exploration. This is because, unlike abstract pages lurking in the browser history, tabs have spatial presence:

  • The back button can require too many (or an unknown number of) clicks to return to a desired page.
  • While an open tab maintains state, the back button requires the page to reload.
  • The browser history (from right-clicking on the Back button) requires users to navigate via link name, while tabs allow users to navigate via spatial relationship or visual browsing.

As mentioned previously, the restricted space of mobile device screens often inhibits our ability to access spatial cognition. This issue is just another example – on mobile devices, where tabs are not available, users rely heavily on the back button and new windows. This slows down their ability to navigate between various pages.

vr-browsing-3d-2

External Memory

Like many people, I leave tabs open like “don’t forget” notes on a mirror. These tabs are important – reminding me of tasks I need to do, articles I want to read, and funny videos that I will never get around to watching. Browsers often serve as a user’s active memory, and so it is very important that users be able to easily and accurately jump to any given element quickly and reliably.

Studies show that the more windows and tabs a user has open, the more important the spatial relationships become. Temporal-based interactions (e.g. alt-tab) are far less helpful than spatial consistency even in today’s limited digital space, and spatial consistency in the configuration of tabs encourages re-visitation – even three months after use.

vr-browsing-3d-3

My browser history for today in Firefox.

The browser has an excellent spatial system, and yet when I look at my browsing history, I see a mess of links that are all given the same illegible data. As noted earlier, thanks to the emergence of tabs, many users engage in parallel browsing – following multiple strains of thought in different windows or tabs.

This generates a hodgepodge history of activity, which is a nightmare to view in a single dimension like the one above. All the spatial data is lost along with the visual representation of the page, and all that is left is a short description and URL.

VR Space and the Browser

With AR/VR, we have the opportunity to increase and improve spatial cognition in the browser by developing a stronger spatial system and allowing for dynamic data dimensionality. With a strong sense of space, the user can quickly set up spatially optimized task flows. AR in particular creates opportunities for users to map their virtual spatial systems to their real ones – opening up rapid development of spatial cognition. In both cases, however, we have a theoretically infinite canvas to spread out.

vr-browsing-3d-4

Spatial Semantics

The key to a successful spatial browser is a strong baseline grid. To lean on users’ existing expectations, built on over a decade of tab browsing, we can maintain the existing “launch tab to right” pattern. At the same time, we can give users the full reach of their space to organize data into spatially relevant areas using simple drag-and-drop interactions over that baseline grid. Regardless of dynamic reshuffling of space, it is essential that this canvas retain the spatial location of each item until specifically altered by the user.

External Memory

With this spatial consistency, the user can maintain “memory tabs” and return to them through spatial memory. This also helps the user create muscle memory for frequent tasks and activities.

Print

Dynamically re-sort objects spatially to reveal meaning. Sort methods clockwise from top left: timeline, alphabetical, type, user specified.

Dynamic Spatial Semantics

Now that the user can always return to their baseline spatial system, we can capitalize on the digital power of data by providing dynamic spatial semantics. Two projects from Microsoft, Pivot and SandDance, demonstrate the power of dynamic movement between data visualizations to reveal patterns within data. The animated transitions between the views help users understand the context.

vr-browsing-3d-6

Dynamic Dimension: ABC x Time

vr-browsing-3d-7

Dynamic Dimension: Recency x Frequency

Dynamic Dimensionality

However, both Pivot and SandDance were developed for screens – a 2D environment. While this reaches the limit of today’s shells, AR/VR offers us the opportunity to create 3D intersections of data visualizations. In other words, the intersection of two 2D data visualizations provides a 3D sense of data dimensionality. Data is given a dynamic volume as defined by the values of the intersecting graphs.

In practice, one application of this approach would be that items most related to the two visualizations become large and nearer to the user, while items that are not likely to be relevant fall away. In this way, by quickly looking at the dimensions involved, the user can instantly understand the difference between various items – just like in the real world.
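As a toy illustration of that idea, the sketch below maps two normalized relevance scores per item onto depth and scale, so items that score high on both visualizations end up large and near the viewer. The scoring axes and mapping constants are arbitrary assumptions, not part of any existing shell.

```ts
// Toy sketch of "dynamic dimensionality": combine two 2D relevance scores
// (say, recency and frequency, each normalized to 0..1) into a 3D placement.
// The constants are arbitrary; this is not a real OS shell.
interface Item { id: string; recency: number; frequency: number; }
interface Placement { id: string; distance: number; scale: number; }

const NEAR = 0.75;  // meters: closest comfortable rendering distance (assumed)
const FAR = 4.0;    // meters: where irrelevant items fall away (assumed)

function place(items: Item[]): Placement[] {
  return items.map((item) => {
    // Items relevant on both axes get a combined score near 1.
    const relevance = item.recency * item.frequency;
    return {
      id: item.id,
      distance: FAR - relevance * (FAR - NEAR),  // more relevant = nearer
      scale: 0.5 + relevance,                    // more relevant = larger
    };
  });
}

console.log(place([
  { id: 'tax-return.pdf', recency: 0.9, frequency: 0.8 },
  { id: 'old-meme.gif', recency: 0.1, frequency: 0.2 },
]));
```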

Conclusion

The ability to see and understand the world in three dimensions is an extraordinarily powerful part of our everyday experience. In many ways, this has been lost with traditional digital interfaces, as UIs are inevitably shaped and limited by hardware capabilities. By unlocking the third dimension, VR/AR opens up the opportunity to combine spatial-cognitive tools and experiences with the raw power and infinite malleability of the digital medium.

The post What Would a Truly 3D Operating System Look Like? appeared first on Leap Motion Blog.

Can VR Change How Your Brain Works?


Vivid Vision thinks so, and they want it to help millions of people. Formerly known as Diplopia, they believe that VR can help treat common vision problems like lazy eye and cross-eye, which happen when the brain ignores input from the weaker eye. Their solution – a VR experience that combines medical research with gameplay mechanics – is now rolling out to eye clinics around the USA.

Recently, we caught up with Vivid Vision co-founder James Blaha to ask him how he’s retraining people’s brains using VR and hand tracking technology. You can also see James later today at 5pm PT on our Twitch channel, where he’ll be demoing Vivid Vision live and taking your questions.

Every tech startup wants to change the world, but it’s not often that they want to change people’s brains. What does that mean for you?

How you perceive the world is a very personal thing. I was never sure what 3D vision was supposed to look like, which left me wondering what I might be missing out on. We’re trying to improve how people perceive the world around them so they can do the things they love.

What are the challenges in bringing “flow” and fun gameplay to a medical application?

It’s very challenging to balance the requirements of the training with fun game mechanics. We think that making the game fun, and getting people into “flow” where they are just reacting and enjoying themselves is critical to the success of the training. We design every game by starting with the relevant vision science first, and try to incorporate game mechanics that fit well with the visual tasks that need to be completed.

A screenshot from Vivid Vision.

What’s the role of hand controls in the game? What does it offer players?

Hand-eye coordination tasks are very important when it comes to usefully applying depth perception. There is some evidence that having people do motor tasks during perceptual learning increases the rate of learning. Disparity cues are largest and most useful at closer distances.

We don’t want to just increase a person’s ability to score well on an eye test, we would like their improved vision to translate into their daily life. Using hand tracking, we can force people to judge depth with both eyes and reach out to exactly the distance they need to. We can have people learning to catch and throw naturally.

The second part of this is that most of the people using our software don’t play games at all. The hand tracking is easier for non-gamers to pick up than controllers or mouse and keyboard.

Normally, audio is an essential part of making compelling gameplay. What’s it like to design a game where you can’t provide auditory cues to help people achieve tasks?

We do have to be careful with how we design the audio. We have to put a lot of work into hiding depth cues and on delivering certain parts of the game to only one eye or the other. The brain is very good at working around any deficits in sensory input. Blind people can learn to use echolocation to navigate. This means that we have to be careful on how we use sound so that we aren’t giving people enough information to continue to ignore their weaker eye.

What’s your plan for bringing Vivid Vision to as many people as possible?

Vivid Vision is now available in select eye clinics nationwide. You can visit our website to find a clinic in your area. We are also still planning on releasing a game for home use when the Oculus Rift CV1 comes out.

What’s involved with your current medical study, and what do you hope to discover?

We’re well underway on the study in collaboration with UCSF. Right now, we’re looking for patients in the San Francisco Bay Area to participate. (If you’re interested, you can contact us through our website or email contact@seevividly.com). We want to know exactly how effective our software is for different age groups with different kinds of lazy eye, what the optimal training regime is, and which techniques are the most effective for the different types of lazy eye.

What will be the most unexpected way that VR transforms the way we live, think, and experience the world?

I see VR as a way to finely and accurately control sensory input to the brain. Right now, VR is just visual and auditory input, but I think it will expand to the other senses as well in the coming years. Combined with good sensors, VR is a platform to provide stimulus to the brain and measure how it responds. By studying this feedback loop, we can start to design stimulus that changes how the brain works, to improve function. The more powerful the technology gets, the more powerful a platform it is for rewiring ourselves.

The post Can VR Change How Your Brain Works? appeared first on Leap Motion Blog.

The Goggles… They Do Something!


distance-in-vr

Distances in VR

From building 3D scenes to designing object interactions, depth and distance are an essential part of VR design. It’s also a delicate balancing act – between our natural instincts about the physical world, and the unique capabilities of the hardware.

Safety Goggles: 10 centimeters (4 inches). As human beings, we’ve evolved very strong fear responses to protect ourselves from objects flying at our eyes. Objects should never get too close to the viewer’s eyes, so you’ll want to create a shield that pushes all moveable objects away from the user. In Unity, for instance, one approach is to set the camera’s near clip plane to be roughly 10 cm out.

Optimal Tracking: 30 centimeters (12 inches). While the Leap Motion Controller can track more than 2 feet away, the “sweet spot” for tracking is roughly 1 foot from the device. This is the best range for reliable user interactions.

Object Rendering: 75 centimeters (30 inches). In the real world, your eyes are able to dynamically adjust depending on how near or far objects are in space. However, with headsets like the Oculus Rift, the user’s eye lenses remain focused at infinity. This means that objects can’t be comfortably rendered at close distances. The most comfortable rendering distance will depend on the optics of the VR headset being used. For example, the Oculus Rift DK2 recommends a minimum range of 75 cm. Since this is beyond the optimal Leap Motion tracking range, you’ll want to make interactive objects appear within reach, or respond to reach within that range.
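Taken together, those three distances make for a simple rule of thumb when placing objects in a scene. Here's a hedged sketch – not from any official SDK – that encodes them as constants and pushes movable objects back out of the 10 cm safety bubble:

```ts
// Hedged sketch (not from any SDK): the three distances above as constants,
// plus a helper that keeps movable objects outside the 10 cm "safety goggles"
// bubble. Positions are in meters, relative to the viewer's head at the origin.
type Vec3 = [number, number, number];

const SAFETY_RADIUS = 0.10;        // never let movable objects inside this
const TRACKING_SWEET_SPOT = 0.30;  // most reliable hand-tracking distance
const MIN_RENDER_DISTANCE = 0.75;  // e.g. the DK2's recommended minimum

const length = (v: Vec3) => Math.hypot(v[0], v[1], v[2]);

/** Push a movable object back out to the edge of the safety bubble if needed. */
function enforceSafetyBubble(position: Vec3): Vec3 {
  const d = length(position);
  if (d === 0 || d >= SAFETY_RADIUS) return position;
  const scale = SAFETY_RADIUS / d;
  return [position[0] * scale, position[1] * scale, position[2] * scale];
}

console.log(enforceSafetyBubble([0.02, 0, 0.03])); // pushed out to ~10 cm away
```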

Further Reading

 

The post The Goggles… They Do Something! appeared first on Leap Motion Blog.

Featured Platform: Get Creative with Vuo


One of the most powerful things about the Leap Motion platform is its ability to tie into just about any creative platform. That’s why we’ve just launched a Platform Integrations & Libraries showcase where you can discover the latest wrappers, plugins, and integrations.

Our first featured integration is Vuo, an extraordinarily flexible visual programming language for developers and designers. There are already 6 Vuo examples on our Developer Gallery, which include Mac executables and project files – so you can download, import, and see how they all fit together. Recently, we caught up with Jaymie Strecker, one of the key developers on Team Vuo.

1. What’s your “elevator pitch” for Vuo as a creative platform?

vuo

Vuo is here to help people make apps, videos, exhibits, and live performances. Vuo makes it possible for people who have a Leap Motion Controller, but don’t necessarily have programming experience, to build their own software for the device. I see huge potential for people using Leap Motion to make their own public art installations and educational exhibits, plus games and stuff that they share with their friends.

As a developer myself, I should also point out that Vuo can help developers prototype experiences for the Leap Motion Developer Gallery. It’s designed to be easy enough to pick up for artists, musicians, and other folks who may not have programming experience. But it’s there for developers too, to quickly throw together a working prototype or even a full app.

My friends/coworkers and I use Vuo for VJing at live music shows, so we’ve really come to appreciate how Vuo lets you put together some pretty amazing interactive graphics on the fly. And how it’s easy enough to do so even if you’re distracted by loud music (or not quite awake)!

2. For developers and designers familiar with Quartz Composer, what are the key differences with Vuo?

Vuo was definitely influenced by Quartz Composer. In fact, one of Vuo’s developers had been working with QC as far back as 2005. We like a lot of the same things about QC as the people who are just recently discovering it through Origami, Avocado, and Form – the visual interface with patches and cables. But after adding onto QC for years, we’d accumulated so many ideas of how we could make things better that we decided to make a whole new product.

Vuo is really coming from a different mindset than QC. QC was originally designed for making graphics like screensavers. People have stretched it to the breaking point to do some incredible stuff. One of the challenges we constantly bump up against when using QC is that it wasn’t really made for things like audio processing and advanced 3D graphics, because everything centers around a layer-based graphics display. Vuo is a lot more flexible. You can make the kind of graphics you would with QC if you want, but just as cleanly, you can target other media, like audio or real-world interfaces.

Vuo is also coming from a different era. QC predated modern shader-based graphics and the popularity of multi-threading, and it hasn’t really kept up with the times. Vuo gets past some of the limitations on QC’s graphics abilities and performance. Plus, Vuo is being actively developed by a team that’s in it for the long haul, so, as technologies get better, so will Vuo.

As a company, we’re pretty different from Apple (who owns QC). We’re a small indie company that’s constantly listening to and engaging with our user community. We love showcasing the cool stuff that Vuo users make.

I know if you’re used to QC or any other software, it’s always a challenge to get the hang of something new. But I gotta say, now that I use Vuo all the time, it’s hard to go back to QC. I miss all the stuff we added in Vuo, so QC ends up feeling kinda clunky.

3. What are the creative possibilities with Vuo’s various input and output interface nodes?

The idea is to be able to mix anything with anything. You can take input from any device that Vuo supports, including a Leap Motion Controller, and easily wire that up to control any output. In addition to Leap Motion, Vuo can do input/output with mouse, keyboard, Kinect, RSS feeds, video, audio, MIDI, OSC, Art-Net, and serial devices. We’re adding more inputs/outputs with each release.

This leads to some interesting combinations. Like, you could use a Leap Motion Controller to control stage lighting or an Arduino servo via your computer.

Vuo makes it easy to experiment or prototype with different devices. You can add Leap Motion Controller interaction to a composition without having to download the SDK or study the API. It’s the same idea for other devices. Very simple to get started. In fact, I’ll walk through an example right now (click to embiggen).

TholianWebLeap

This is the composition for Tholian Web, one of the new examples on the Leap Motion Developer Gallery. Vuo lets you color-code your nodes, so I did that to identify each part of the composition.

  • Blue (lower left): This brings in the stream of information from the Leap Motion Controller. The two pieces of information we’re interested in for this composition are whether a hand is within the device’s view and, if so, what is its palm velocity.
  • Cyan (across the top and right): This makes the graphics. First it makes a single grid, then it warps the grid according to the user’s hand movements, and finally it copies the single grid to create a stack of them.
  • Orange, yellow, and magenta: All these parts in between take the information about hand movements and translate that into movement and warping of the grid. The one magenta node, “Ripple Image,” is where the movement and warping really happens – its phase controls how fast the grid moves, and its amplitude controls the curviness of the grid. The yellow part changes the phase based on the palm velocity. The orange part transitions to a different curviness, based on the average palm velocity, when the user moves their hand out of the device’s view.

It’s all there, so you can take it in at a glance. This is the result:

tholian

4. What’s next for Vuo? Where do you see the Leap Motion integration evolving?

Vuo is very community-oriented, with users able to vote on the features they’d like to see in Vuo, so the community will have a lot of say in that.

We’ll definitely be adding more input and output interface nodes. For the Leap Motion Controller, we want to add support for gestures. Besides that, we’ll be improving Vuo’s ability to help people make UI prototypes, video effects, and captivating graphics.

We’re really excited to see and share the cool stuff that people make with Vuo and Leap Motion. We’re encouraging the Vuo community to post their creations to the Leap Motion Developer Gallery. I’m imagining all sorts of ways that Vuo and motion controls could be used for art, education, and entertainment, so it’ll be awesome to see what people create.

vuo-fluid vuo-gravity
vuo-poke vuo-julia

Vuo is available now with over 150 built-in example compositions. Learn more at vuo.org.

Fluid, Gravity Warp, and Julia Fractal Filter examples by George Toledo.

The post Featured Platform: Get Creative with Vuo appeared first on Leap Motion Blog.


Reach Into Jupiter’s Biggest Moon: A Massive Leap Motion Art Experience


Tomorrow in Montreal, audience members at the IX Symposium will see one of Jupiter’s moons appear inside a 60-foot dome. But this isn’t something you can find in a telescope – it’s a trippy virtual environment with stark geometric shapes and classical forms.

Ganymede is an immersive installation premiering tomorrow at IX 2015, an international conference based in Montreal, where it will be available for the next three weeks. Created by the artists and programmers at the Nature Graphique collective, it’s a major update to a formerly untitled Leap Motion art experiment that first appeared in 2013:

Ganymede is a really inspiring theme to work on, in which science, mythology and art converge,” says Mourad Bennacer, the sound designer on the project.

“It’s one of the moons of Jupiter, the biggest in the solar system, and has been named after a Trojan prince in Greek mythology. The experience we created allows you to explore an abstract and surrealist vision of the Galilean moon – a non-Euclidean environment populated by Platonic solids and classical architectural elements.”

ganymede

Ganymede combines the Leap Motion Controller and a turntable as control devices to let people reshape objects, space, and sounds. Using simple hand movements, audience members are able to navigate, move objects, and trigger visual and audio effects.

“In immersive installations, the experience relies heavily on the fact that you become part of the scenography,” says Mourad. “In our installation, you can use the Leap Motion Controller to navigate, animate apparently static volumes, and then use them to reshape the space around you. To a giant scale.

“The way Leap Motion handles gesture sharpens the point of view, and opens up more possibilities for exploration and interaction.”

ganymede4

The team already has a lot more planned for the project, especially with the sheer scale of the SATosphere – the immersive dome where the installation lives. The room features a 360-degree spherical projection screen that’s 18 meters (60 feet) in diameter, with 8 video projectors and 157 speakers.

“Several upgrades to Ganymede are on their way, including more development on the Leap Motion platform to improve the controls, feeling of gravity, and object transformations. We’ve already added a turntable as a controller and we’re now thinking about implementing a third controller to make it a more collaborative experience.”

Credits for Ganymede: Aurélien Lafargue (creative director), Mourad Bennacer (sound designer), Pierre Gufflet (creative coding), and Julien Brisson (3D graphics)

Ganymede will be on display for the next three weeks at the Society for Arts and Technology in Montreal.

ganymede7

The post Reach Into Jupiter’s Biggest Moon: A Massive Leap Motion Art Experience appeared first on Leap Motion Blog.

An Alien Spaceship into the Twittersphere


The frustrating thing about raw Twitter data is that it tends to remove the very element that makes the platform so interesting in the first place: the nuance of human sentiment. But what if you could take that data back into your own two hands, set to music?

What began as a project exploring the correlation between architecture and sound at the University of Architecture in Venice, Italy, morphed into something interactive when Electronic Music major Amerigo Piana took the reins. To finish out his thesis at the Music Conservatory of Vicenza, he decided to bring Leap Motion technology, sonic spatialization, and social data together under one dome.

“The structure is not a mere sound system, but a specific instrument with its own sounding board. Like the violin that has its own particular sound, Dodekaedros has a characteristic range of frequencies and timbre.”

“I love playing with audio, often in the digital domain, from sound design to experimental electronic productions,” Amerigo told us. “I’m co-founder of a video-mapping firm called ZebraMapping in which we provide stage design, structures, and real-time video. I enjoy mixing different media together, controlled and driven by humans.”

Amerigo had followed Leap Motion since before we launched, intrigued by the possibility of integrating organic 3D motion control into a sound spatialization project. As he began to build, he discovered the power and creativity involved in interaction design, ranging from small, natural hand movements to complex gestures newly imbued with meaning.

One of the five Platonic solids, the dodecahedron is formed from 12 pentagons.

“I wanted to create an immersive situation to intrigue and charm people, letting the public play with sound spatialization using the Leap Motion Controller. I chose a dodecahedron because it gives the maximum analog output of an audio card: 10 (excluding the floor and the entrance).

“The 10 surfaces are useful for the exciters. The exciters, pushed against the wood surfaces, work like speakers, making the wood vibrate. The wood acts like a bandpass filter, which gives the sound a particular timbre: no high frequencies, no low frequencies. The structure is not a mere sound system, but a specific instrument with its own sounding board. Like the violin that has its own particular sound, Dodekaedros has a characteristic range of frequencies and timbre.”

From afar, the user sees a geometric structure with red light spilling out. The spinning soundscapes are also audible from the outside. Once inside the dome, the user is prompted to wave their hand over the Leap Motion Controller. While the user’s right hand controls the sound position and LEDs, their left hand modifies the sound synthesis parameters.

The environment is controlled by a MaxMSP program that Amerigo developed. It creates sound, manages spatialization, reads Leap Motion data, then remaps it into sound synthesis and LED lights. The Twitter queries are written in Ruby, and the answers are then interpreted by MaxMSP which handles speech synthesis. When a specific set of hashtags are posted on Twitter, the structure gives the user speech feedback using the Ruby query.

Ultimately, Amerigo hopes that people walk away from Dodekaedros with a fresh perspective of our rapidly converging digital and physical universes. “The field of interaction and implementation of the Internet of Things creates a personal communication channel between machines and humans.”

The post An Alien Spaceship into the Twittersphere appeared first on Leap Motion Blog.

Kolor’s VR Movies Let You Paraglide with Batman (Seriously)


Hot-air balloons floating hundreds of feet above the ground. A fighter jet soaring through the sky. A rock concert in Rio. And, of course, a paraglider dressed up as Batman, complete with Batmobile. These are just some of the videos that you can dive into with Kolor’s 360° video player Kolor Eyes, now featured on the Leap Motion Developer Gallery.

Kolor’s platform works by stitching together panoramic videos into massive virtual experiences that let you look anywhere within the scene. Kolor Eyes, their video player, is available free from the Kolor website with VR and desktop modes. The company was acquired earlier this year by GoPro, accelerating their mission to bring truly panoramic video to the world.

In March, Kolor and Intel launched a music video with Belgian singer Noa Neal, which quickly went viral with over 400,000 views on YouTube. To explore the video in 360 degrees, you’ll need to watch the video in Google Chrome. Better yet, download the HD videos for Kolor Eyes, which includes VR support for the Oculus Rift.

Kolor Eyes also features Leap Motion support, with several universal gesture interactions, and some special ones for desktop and VR modes:

Universal Gestures

  • Play & Pause. Clap your hands once.
  • Fast Forward & Rewind. Twirl your index finger in a circle – clockwise to jump forward, counter-clockwise to jump back. The wider the circle, the further you’ll jump.

Desktop Gestures

  • Camera Control. Point at any border of the screen to turn the camera.
  • Zoom. Point both of your index fingers at the screen. Moving your fingers away from each other will zoom in on the scene, while bringing them closer together will zoom out.
  • Little Planet Projection (see below). With both hands out and palms down, curve your hands so that the palms face each other. This will warp the world and curve the horizon. You can also turn the little planet, or change your pitch.

planet1_large
An example of the “little planet” projection effect. Image courtesy of subblue.

VR Gestures

  • Next video: Swipe with your hand to move to the next video, if you’ve loaded multiple videos.

Kolor’s documentation wiki has more details about these interactions, including some helpful GIFs. Check out their guide to setting up the Oculus Rift, download HD videos from their video gallery, and grab some popcorn!

The post Kolor’s VR Movies Let You Paraglide with Batman (Seriously) appeared first on Leap Motion Blog.

Twist the Gears of a Massive VR Music Engine with Carillon


When virtual reality and musical interface design collide, entire universes can be musical instruments. Created by artists Rob Hamilton and Chris Platz for the Stanford Laptop Orchestra, Carillon is a networked VR instrument that brings you inside a massive virtual bell tower. By reaching into the inner workings and playing with the gears, you can create hauntingly beautiful, complex music.

The Orchestra recently performed Carillon live onstage at Stanford’s Bing Concert Hall with a multiplayer build of the experience. Now the demo is available on Windows for everyone to try (after a brief setup process).

TRY THE DEMO

This week, we caught up with Rob and Chris to talk about the inspiration behind Carillon and the bleeding edge of digital music.

slork-bing-800

What’s the idea behind the Stanford Laptop Orchestra?

Rob: The Stanford Laptop Orchestra (SLOrk) is really two things in one. On the one hand, it’s a live performance ensemble that uses technology to explore new ways to create and perform music. On the other, it’s a course taught every Spring quarter at Stanford’s Center for Computer Research in Music and Acoustics (CCRMA, pronounced “karma”) by Dr. Ge Wang.

In order for students to think about how to create pieces of music and pieces of software, we enable groups to get together and make music as an ensemble. There’s a lot to learn – how to compose, how to orchestrate music for an ensemble, interaction design, and more. Students in the class make up the ensemble and in the span of 9-10 weeks learn how to program, compose music, and build innovative interactive performance works.

Sonically, one of the chief ideas behind SLOrk is to move away from a “stereo” mindset for electronic and electroacoustic music performance – where sound is fed to a set of speakers flanking the stage – and instead to embrace a distributed sound experience, where each performer’s sound comes from their own physical location. Just like in a traditional orchestra, where sound emanates from each performer’s instrument, SLOrk performers play sound from custom-built 6-channel hemispherical speakers that sit at their feet.

What does VR add to the musician’s repertoire?

Rob: Computer interfaces for creating and performing music have come a long way from early technologies like Max Mathews’ GROOVE system. Touchscreens, accelerometers, and a bevy of easy-to-use external controllers have given performers many options in how to control and shape sound and music. The addition of VR to that arsenal has been extremely eye-opening, as the ability to build virtual instruments and experiences that leverage depth and space – literally allowing musicians to move inside a virtual instrument and interact with it while retaining a sense of place – plays off of musicians’ learned abilities to coax sound out of objects.

Using technology like the Leap Motion Controller allows us to use our hands to directly engage virtual instruments, from the plucking of a virtual string to the casting of a sonic fireball. The impossibilities of VR are extremely exciting as anything in virtual space can be controlled by the performer and used to create sound and music.

In some ways, though, this is an ongoing trend. The people who created instruments were always trying to create something new. Instruments like the saxophone didn’t exist throughout all time – someone had to create them. They came up with these new ways to control sound. The whole idea behind our department is that we want to put technology in the hands of musicians and show composers how they can use technology to further the science and artistry of music.

Where will the next “saxophone” come from?

Rob: I think the next saxophone could exist within VR. One thing that’s really exciting about VR is that it takes us from engaging technology on a flat plane and brings it into three dimensions in a way that we as humans are very used to. We’re used to interacting with objects, we’re used to using our hands, we’re used to looking at objects from different angles.

As soon as you strap on a pair of VR goggles and put your hands into the scene, all of a sudden things change. You really feel a strong connection with these virtual objects, which our brains know don’t really exist, but they definitely exist in this shared mental space – in the matrix, or Metaverse from Snow Crash. The more we bring those closer to the paradigms that we humans are used to in the real world, the more real these objects feel to us.

carillon-anatomy

How do the different parts of the Carillon fit together?

Rob: Carillon was built within the Unreal Engine and uses the Open Sound Control protocol to connect motion and actions in the game environment with a music engine built within the Pure Data computer music system. Carillon was really built for VR: the Oculus Rift and the Leap Motion Controller are key components in the work. The Leap Motion integration to Unreal uses getnamo’s most excellent event-driven plugin to expose the Leap Motion API to Unreal Blueprint and slave each performer’s avatar arms and hands to the controller. All the programming for Carillon was done using Unreal’s Blueprint visual scripting language. The environment is built to be performed across a network, with multiple performers all controlling aspects of the same Carillon.

The experience of reaching out into our interface while immersed in the Rift and tracking our avatar arms is really magical and compelling.

The core interaction in Carillon is the control of a set of spinning gears at the center of the Carillon itself. By interacting with a set of gears floating in their rendered HUD – grabbing, swiping, etc. – performers speed up, slow down, and rotate each set of rings in three dimensions. The speed and motion of the gears is used to drive musical sounds in Pure Data, turning the virtual physical interactions made by the performers into musical gestures. Each performer generates sound from their own machine, and in concert with the SLOrk, that sound is sent to a six-channel hemispherical speaker sitting at their feet. Additional sounds made by the Carillon including bells and machine-like sounds are also sent to SLOrk speakers.

For this performance, we ran three instances of the Carillon across the network using fairly beefy gaming laptops with 2 or 3GB video cards. However, to get optimal framerates for the Oculus Rift, this work really should be performed using faster video cards. Nonetheless, the tracking of the Leap Motion Controller has been really nice for us, as the gestures remain intact even at these lowered framerates.
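To make the OSC plumbing Rob describes a little more concrete, here's a minimal, hand-rolled example of the kind of Open Sound Control message that could carry a gear's spin speed to a Pure Data patch over UDP. The address pattern, port, and value are illustrative assumptions; Carillon's actual Blueprint and Pure Data wiring isn't shown here.

```ts
// Illustrative only: send a single OSC float (e.g. a gear's angular speed) to a
// Pure Data patch listening on UDP. Address, port, and value are assumptions.
import * as dgram from 'dgram';

/** Pad a string with NULs to the next 4-byte boundary, as OSC requires. */
function oscString(s: string): Buffer {
  const padded = s + '\0'.repeat(4 - (s.length % 4));
  return Buffer.from(padded, 'ascii');
}

/** Encode an OSC message with a single float argument. */
function oscFloatMessage(address: string, value: number): Buffer {
  const arg = Buffer.alloc(4);
  arg.writeFloatBE(value, 0);
  return Buffer.concat([oscString(address), oscString(',f'), arg]);
}

const socket = dgram.createSocket('udp4');
const msg = oscFloatMessage('/carillon/gear/1/speed', 0.42);
socket.send(msg, 9001, '127.0.0.1', () => socket.close());
```

On the Pure Data side, a UDP receiver plus an OSC-unpacking object (the mrpeach externals are one common choice) would turn that message back into a number the patch can map to pitch, tempo, or timbre.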

What was the design and testing process like?

Rob: Chris and I have been designing and building iterations of the Carillon for a while now as the work is actually one piece in a large work that is also made up of earlier works of ours: ECHO::Canyon (2014) and Tele-harmonium (2010). The idea of a mechanical instrument that responds to performers’ motions has been something we really enjoy and have been trying to explore different ways to make human gestures and machine-driven virtual interactions “make sense” for both performers/players and audiences.

Testing with the Oculus Rift and the Leap Motion Controller within Unreal has been pretty fun. Where the DK1 was pretty painful to develop for (physically painful), the DK2 is much more forgiving of the lower framerates we’re using on laptops. The Leap Motion integration has been extremely fun, especially in HMD mode where the fun really starts. The experience of reaching out into our interface while immersed in the Rift and tracking our avatar arms is really magical and compelling.

Chris: On the graphics side, most game designers (especially world builders) get greedy in making this large-scale area. We always really like to push that, and I think it paid off because we know how large we can go. One of the most exciting things about stereoscopy is really feeling like you’re standing on the edge of a 200-foot object rather than this little TV screen on this little box. I think I’m going to repurpose some of these environments and have some smaller playable areas that will perform really well.

Carillon

What’s the experience like for the person inside the belltower?

Rob: Each performer’s avatar stands on a platform within the Carillon, looking up at the giant spinning rings in the center. Directly in front of them are a smaller set of rings with which they can interact – holding a hand over a ring or set of rings selects them, and swiping left/right, up/down, or away/toward starts the rings spinning. That spinning on each axis controls sound from the Pure Data patch. Across the network, each performer is working with the same set of rings, so collaboration is needed to create something truly musical, just like in traditional performance. Under the performer’s feet are a set of bells being struck by the Carillon itself, playing a melody that shapes the piece.

The Leap Motion Controller was a key component to this piece. Being able to reach out and select a ring, then to “physically” start it spinning with your hand is a visceral experience with which simply tapping a key on the keyboard or pressing a button on a gamepad can’t compare. The controller really creates a sense of embodiment with the environment, as your own physical motion really drives the machine and the music it creates.

How did the audience react to the performance?

Rob: People are really just starting to understand what kinds of things these great pieces of technology like the Leap can allow artists and creatives to build. Most people in the audience are really curious to know what’s going on beneath the Rifts; what the performers are really doing and how the whole system works. They hear what we’re doing and see how the Carillon itself is moving but they want to know how it works. The image of us onstage swiping our hands back and forth isn’t what most concert-goers are used to or were really expecting when they came to the show.

How will people be creating and experiencing music in 2020?

Rob: We’re at such a great time to be building creative musical technology-based experiences. We can move sound across networks bringing together performers from other sides of the world in real time, interacting with one another in virtual space. In five years I think the steady march of technology will continue. As we all become more comfortable with technology in our daily lives, it will seem less and less strange to see it blending with traditional musical performance practice, instrument design and concert performance. The idea of mobile musical instruments is really powerful, and you can flip that into more traditional video gaming terms, where the ability for these networked spaces to connect people is phenomenal.

carillon2

About the Creators

Rob Hamilton is a composer and researcher who spends his time obsessing about the intersections between interactive media, virtual reality, music and games. As a creative systems designer and software developer, he’s explored massively-multiplayer networked mobile music systems at Smule, received his PhD in Computer-based Music Theory and Acoustics from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and this fall will join the faculty at Rensselaer Polytechnic Institute (RPI) in New York as an Assistant Professor of Music and Media.

Chris Platz is a virtual world builder, game designer, entrepreneur, and artist who creates interactive multimedia experiences with both traditional tabletop and computer-based game systems. He has worked in the industry with innovators Smule and Zynga, and created his own games for the iOS, Facebook, and Origins Game Fair. His real claim to fame is making interactive stories and worlds for Dungeons and Dragons for over 30 years. He holds a BA in Business & Biotechnology Management from Menlo College, and an MFA in Computer Animation from Art Institute of CA San Francisco. From 2007-2010 Chris served as an Artist in Residence at Stanford University in Computer Graphics and he is currently an Assistant Professor of Animation at California College of the Arts.

Together Rob and Chris teach the CCRMA “Designing Musical Games::Gaming Musical Design” Summer workshop at Stanford, where students learn how to explore cutting edge techniques for building interactive sound and music systems for games and 2D/3D rendered environments. To better understand the link between virtual space and sound, students learn the basics of 3D art and modelling, game programming, interactive sound synthesis and computer networking using Open Sound Control. Learn more about the Designing Musical Games Summer Workshop at https://ccrma.stanford.edu/workshops/designingmusicalgames2015.

The post Twist the Gears of a Massive VR Music Engine with Carillon appeared first on Leap Motion Blog.

Featured Platform: Building the Metaverse from Today’s Web with JanusVR


janusvr-logo

The prospect of existing within the Internet is a concept straight from science fiction, but one we’ve been helping to build for some time. Simply gazing upon the Internet is starting to look a bit ‘90s – but how do you go about constructing a digital universe where 2D and 3D content can coexist in a way that is both seamless and satisfying?

Meet JanusVR, a team of two VR veterans looking to reshape the way you connect to the world and its infinite digital content. Inspired in part by Snow Crash’s Metaverse, James McCrae and Karan Singh sought to build a network of VR portals within which users can collaborate, communicate, explore, and even create new 3D content.

janusvr-features

“I got into VR back in 1995,” Karan told us. “My PhD was on humans and virtual environments. But that was a different and earlier wave, and at that time people were using things like cyber-gloves, magnetic motion trackers, giant helmets with stereoscopic displays… just kind of attached to them. It was a very different genre of equipment. This current wave is a whole lot more promising. Things like resolution, latency, accuracy – all of these are important for being able to give people a seamless suspension of disbelief.”

After a brief hiatus from VR, spent working as a principal architect on the character animation tools that now populate Maya, Karan returned to the field to research interactive computer graphics at the University of Toronto. It was there that he met James McCrae, and the pair began making JanusVR a reality.

About a year ago, when the project was just starting to take off, Karan and James decided that JanusVR needed a multi-device input solution. It was then they began prototyping navigation and locomotion UI paradigms with Leap Motion.

janusvr-office

“Our current Leap Motion interaction design works mostly with the spatial tracking, mostly with the hand remaining open or closed, limiting the variations in dexterity that we’re using at this point.” The team plans to continue building out Leap Motion gestures and interaction schemes for the project. While the platform as a whole is still a work in progress, you can already explore the entire web, or create and play with 3D content built from (incredibly) simple HTML.

“What we are currently is a collision course of browsing, social, collaboration, and multidimensionality,” Karan said. “If you go into Janus today, it’s not about 2D or 3D or any ‘D.’ All of that co-exists. At the moment we are very focused on an immersive experience. In a reimagined world, you expect that [web applications] could sort of homogeneously co-exist, and leverage off each other to give you a richer, more seamless experience. That’s where we’re headed.”

To keep up with the latest iterations and voice your suggestions, follow JanusVR on Twitter or join the subreddit. Be sure to check out the project site for the latest updates.

janusvr-teleportation

janusvr-menu

The post Featured Platform: Building the Metaverse from Today’s Web with JanusVR appeared first on Leap Motion Blog.
