On Saturday, March 8th, I attended the 3DUI Conference, which was really interesting.
It seems the trend is to simplify 3D interactions by adding some real-life constraints.
A lot of the talks were about using 2D or constrained interactions inside a 3D world: real-life interactions are mainly 2D, so our brains are mainly wired for 2D interaction. Only aircraft and helicopter pilots really perform 3D interactions.
Mr. Stuerzlinger talked about his idea of Smart 3D: there should be no floating objects (there are none in the real world) and no object interpenetration, and you should only be able to select and interact with objects that are visible or in front; in real life, people move to interact with hidden objects. He also argues that perspective and occlusion are the strongest depth cues, so if you assume non-floating objects you don't need stereo. He notes too that interactions are mainly 2D. Moreover, 2D input devices (the mouse) have much higher precision than any 3/6 DoF input device (10 to 100 times!), which seems to be a major factor in achieving precise interaction. An interesting result was that having a supporting surface (like a table under a mouse) didn't improve interaction precision as much as device resolution does.
Doug Bowman also noted that we don't see many complex 3D interactions in VR applications. Apparently stereo rendering doesn't improve efficiency, but the choice of interaction technique does.
So the question raised was, do we have to mimic the real world? Can’t we just break the rules?
It depends on the task, but if you want your knowledge to be transferable to the real world, following real-world rules helps.
Following that trend, Withindows "is a theoretical framework for developing user interfaces that can operate in both desktop and full 3D immersion without redevelopment." It constrains your interaction to a 2D plane but should improve interaction efficiency.
I also liked the 3D drawing technique presented by Mr. Keefe, called "Dynamic Dragging for Input of 3D Trajectories".
There were a lot of very interesting posters, like a technique to enlarge the viewport inside an HMD simply by sliding the front plane of the camera frustum to follow the dominant hand, by Andrei Sherstyuk and Dale Vincent from the University of Hawaii and Caroline Jay from the University of Manchester.
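One way to picture that hand-following front plane is as an asymmetric view frustum whose near-plane window is stretched toward the tracked hand, enlarging the viewport on that side. This is only a sketch of the idea; the function name, the gain parameter, and the exact stretching rule are my assumptions, not the poster's actual method.

```python
def hand_extended_frustum(half_w, half_h, near, far, hand_x, hand_y, gain=0.5):
    """Stretch the near-plane window toward the dominant hand.

    half_w, half_h: half-extents of the symmetric near-plane window.
    hand_x, hand_y: hand position in the near plane, relative to its center.
    gain: how strongly the window follows the hand (hypothetical parameter).
    Returns glFrustum-style (left, right, bottom, top, near, far): only the
    edge on the hand's side moves outward, so the viewport grows there.
    """
    left = -half_w + min(hand_x, 0.0) * gain
    right = half_w + max(hand_x, 0.0) * gain
    bottom = -half_h + min(hand_y, 0.0) * gain
    top = half_h + max(hand_y, 0.0) * gain
    return (left, right, bottom, top, near, far)
```

With the hand to the right and slightly below center, only the right and bottom edges move out, widening the visible field on the side where the hand is.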
My favorite poster was a technique by Frank Steinicke from the University of Münster where the user's real walking is not mapped exactly to the virtual world. For example, you would have to walk in a circle in real life to walk in a straight line in the virtual world, or the user could be made to think he walked 20 m when he only walked 10 m. Why would you want such a strange mapping? This translation and rotation compression is a really good way to use a small physical space and make it feel like a huge virtual space.
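The compression idea can be sketched as a pair of gains applied each frame to the tracked motion. The function and parameter names below are illustrative, not from the poster: a translation gain above 1 makes 10 m of real walking feel like 20 m, and a curvature gain injects a small virtual turn per metre walked, so keeping a straight virtual heading steers the user along a real circle.

```python
def apply_gains(real_step, real_turn, trans_gain=2.0, curv_gain=0.1):
    """Map one frame of real motion to virtual motion (hypothetical sketch).

    real_step: distance walked this frame (metres).
    real_turn: rotation this frame (radians).
    trans_gain: virtual metres per real metre (2.0 -> 10 m feels like 20 m).
    curv_gain: injected virtual radians per real metre; to cancel it the user
    physically curves, following a real circle of radius 1 / curv_gain.
    """
    virtual_step = real_step * trans_gain
    virtual_turn = real_turn + curv_gain * real_step
    return virtual_step, virtual_turn
```

With `curv_gain=0.1` the real path bends onto a circle of roughly 10 m radius while the virtual path stays straight, which is exactly the trick for fitting a huge virtual space into a small tracked room.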
EDIT: Tabitha Peck, along with Mary Whitton and Henry Fuchs, presented in the full VR session a very interesting paper ("Evaluation of Reorientation Techniques for Walking in Large Environments") that uses reorientation techniques (ROTs) based on distractors. From the abstract: "[we use] distractors – objects in the VR for the user to focus on [like a butterfly] while the VE rotates." The reorientation technique is also "used to lift the size constraint, (…) when a user is close to walking out of the tracked space."
There was also an impressive freeform projection display from Daisuke Kondo and Ryugo Kijima from the Virtual System Laboratory at Gifu University. This technique lets you project a correct image onto any surface (whose 3D mesh you have to know in advance).
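A common way to get a correct image onto a known mesh is two-pass projective texturing (I'm assuming this approach here; the paper's actual pipeline may differ): render the desired image from the viewer's position, then texture the mesh with it using each vertex's viewer-space projection, and finally render the textured mesh from the projector's position. The core of it is mapping a vertex through the viewer's matrix into texture coordinates:

```python
def viewer_uv(vertex, viewer_vp):
    """Texture coordinate for one mesh vertex, for two-pass projective texturing.

    vertex: (x, y, z) position on the known surface mesh.
    viewer_vp: the viewer's 4x4 row-major view-projection matrix.
    Projects the vertex to clip space, does the perspective divide, and maps
    normalized device coordinates [-1, 1] to texture coordinates [0, 1].
    Rendering the mesh with these UVs from the projector's viewpoint makes
    the viewer see an undistorted image on the surface.
    """
    x, y, z = vertex
    clip = [r[0] * x + r[1] * y + r[2] * z + r[3] for r in viewer_vp]
    ndc = [c / clip[3] for c in clip[:3]]  # perspective divide
    return (ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5)
```

Since the mapping depends on the viewer's matrix, the image stays correct as long as the viewer's position is known (or tracked) along with the surface mesh.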
I’d love to provide links and more information for the papers/devices I mentioned here, so if you’re one of the authors and want to share, feel free to tell me!