  • Mon 31 Mar 2008

    Intuition – VR State of the Art

    Published at 17:52   Category VR Devices, VR Displays, Virtual Reality  

Intuition, the European VR Network of Excellence and soon-to-be European VR Association, produced a nice VR state-of-the-art report back in 2004. It’s pretty complete, at over 100 pages! Happy reading!

    Thu 27 Mar 2008

    Searis 2008

    Published at 12:08   Category Augmented Reality, VR Applications, Virtual Reality  

    On March 9th I attended the Searis 2008 Workshop: Software Engineering and Architectures for Realtime Interactive Systems, which was the perfect workshop for me ;)


The workshop was about presenting existing architectures, bringing together people working on that particular topic, learning from existing projects so that people don’t reinvent the wheel every time, learning from others, and trying to find some formalization.

Anthony Steed pointed out that keeping a platform running for many years is very hard and that it has to be rewritten once in a while: the architecture gets cluttered and needs a fresh restart. Nonetheless, the DIVE platform still allows him to run demos that are several years old, which is probably not the case for many systems. Many platforms break compatibility when upgrading (or licenses expire for commercial products).

We had presentations of many middleware architectures (InTml, Lightning, FlowVR, OpenMask, ViSTA VR, Morgan) with lots of common ground, such as data flow, abstraction of devices/interaction techniques/renderers, easy cluster distribution, and portability.

And there is a lot of wheel reinvention on all this common ground, but at least people are not reinventing scenegraphs (well, not everyone, but for good reasons, see below). Scenegraph libraries are widely used, mostly OpenSG, OpenSceneGraph and Ogre3D. It seems it might be a good idea to create a meta scenegraph library (if that were at all possible ;) ), as every engine switches from one scenegraph lib to another at some point.

    There is a new trend towards interaction techniques abstraction, following the devices abstraction.

There is also a trend of using handheld devices with limited rendering capabilities, like smartphones and PDAs, mainly for Augmented Reality. Not many platforms/toolkits support those devices (which is why Morgan created its own scenegraph that supports this feature).

An interesting approach for manipulating heavy graphics data is multi-frame-rate rendering. The problem is that if your data can only be displayed at a very low frame rate, your interaction will also be that slow. Instead, you can have one fast rendering loop for user interaction and one slow rendering loop to display the heavy graphics data. The two images are created on two different graphics cards or PCs, then digitally composited to create the final image.
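To make the idea concrete, here is a minimal sketch (my own illustration in Python, not the system presented at the workshop) of the two-loop structure: a slow loop renders the heavy scene at a few frames per second, while a fast loop handles interaction at roughly 60 fps and reuses the most recent slow frame for compositing.

```python
# Minimal multi-frame-rate rendering sketch (illustrative only).
# A slow loop "renders" the heavy scene; a fast loop renders the interaction
# layer and composites it with the latest available slow frame.
import threading, time

latest_scene_frame = {"id": 0}        # last frame produced by the slow loop
lock = threading.Lock()

def slow_scene_loop():
    """Pretend to render the heavy graphics data at ~2 fps."""
    frame_id = 0
    while True:
        time.sleep(0.5)               # heavy model takes ~500 ms per frame
        frame_id += 1
        with lock:
            latest_scene_frame["id"] = frame_id

def fast_interaction_loop(duration_s=3.0):
    """Render the interaction layer at ~60 fps and composite both images."""
    start = time.time()
    while time.time() - start < duration_s:
        with lock:
            scene_id = latest_scene_frame["id"]
        # In a real system the two images would come from different GPUs/PCs
        # and be digitally composited; here we just report which slow frame
        # this fast interaction frame reuses.
        print(f"interaction frame composited with scene frame {scene_id}")
        time.sleep(1 / 60)

threading.Thread(target=slow_scene_loop, daemon=True).start()
fast_interaction_loop()
```

The interaction stays responsive even though the underlying scene only refreshes a couple of times per second.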

Research is being conducted in academia on higher-level frameworks, for example engines that use functional reactive programming, or the actor model at an abstract semantic level. I’m sorry but I didn’t quite understand what it was all about, a bit too academic for me.

Some other interesting high-level additions are the semantic description of a world/application/interaction and the use of a dual state machine/data flow approach. Semantic descriptions using semantic web tools were discussed several times.

    If you’re interested in joining the group, Raimund Dachselt will give you the details of the next steps.

    Wed 26 Mar 2008

    3DUI 2008

    Published at 18:25   Category VR Applications, VR Devices, VR Displays, Virtual Reality  

On Saturday March 8th I attended the 3DUI conference, which was really interesting.


    It seems the trend is to simplify 3D interactions by adding some real life constraints.

A lot of talk was about using 2D or constrained interactions inside a 3D world, because real-life interactions are mainly 2D and so our brain is mainly wired for 2D interactions. Only aircraft and helicopter pilots really perform 3D interactions.

Mr. Stuerzlinger talked about his idea of Smart 3D: there should be no floating objects (there aren’t any in the real world) and no object interpenetration. You should only be able to select and interact with objects that are visible or in front; people in real life would move to interact with hidden objects. He also argues that perspective and occlusion are the strongest depth cues, so you wouldn’t need stereo if you assume non-floating objects. He also notes that interactions are mainly 2D. Moreover, 2D input devices (the mouse) have much higher precision than any 3/6 DoF input device (10 to 100 times!), and this seems to be a major point in achieving precise interactions. An interesting result was also that having a surface support (like a table when using a mouse) didn’t improve interaction precision as much as device resolution does.
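As a rough illustration of the “no floating objects” constraint (my own sketch, not Stuerzlinger’s actual implementation), dropping an object could simply snap it onto the highest supporting surface beneath it, so it can neither float nor sink into geometry:

```python
# Hedged sketch: snap a dropped object onto the highest support below it.
def snap_to_support(obj_bottom_y, obj_xz, surfaces):
    """surfaces: list of (top_y, (x_min, x_max, z_min, z_max)) horizontal tops."""
    x, z = obj_xz
    supports = [top_y for top_y, (x0, x1, z0, z1) in surfaces
                if x0 <= x <= x1 and z0 <= z <= z1 and top_y <= obj_bottom_y]
    if not supports:
        return obj_bottom_y          # nothing below: leave the object where it is
    return max(supports)             # rest the object on the highest support

# Example: a table top at y=0.8 under the object, the floor at y=0.0.
surfaces = [(0.8, (-1, 1, -1, 1)), (0.0, (-5, 5, -5, 5))]
print(snap_to_support(1.5, (0.2, 0.3), surfaces))   # -> 0.8 (lands on the table)
```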

Doug Bowman also stated that we don’t see many complex 3D interactions in VR applications. Apparently stereo rendering doesn’t improve efficiency, but the choice of interaction technique does.

    So the question raised was, do we have to mimic the real world? Can’t we just break the rules?

    It depends on the task, but if you want your knowledge to be transferable to the real world, having real world rules helps.

Following that trend, Withindows “is a theoretical framework for developing user interfaces that can operate in both desktop and full 3D immersion without redevelopment.” It constrains your interaction to a 2D plane but should improve interaction efficiency.
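Here is a tiny sketch of what constraining a 3D pointer to a 2D plane can look like (my own illustration, not the Withindows framework itself): the hand’s ray is intersected with a fixed window plane, and only the resulting 2D point is used for interaction.

```python
# Hedged sketch: reduce a 6-DoF pointer to 2D by intersecting its ray
# with a window plane and keeping only the in-plane (u, v) coordinates.
def ray_plane_2d(ray_origin, ray_dir, plane_point, plane_normal, plane_u, plane_v):
    """Return (u, v) of the ray/plane intersection, or None if the ray is parallel."""
    dot = sum(d * n for d, n in zip(ray_dir, plane_normal))
    if abs(dot) < 1e-9:
        return None                                   # ray parallel to the plane
    t = sum((p - o) * n for p, o, n in zip(plane_point, ray_origin, plane_normal)) / dot
    hit = [o + t * d for o, d in zip(ray_origin, ray_dir)]
    rel = [h - p for h, p in zip(hit, plane_point)]
    u = sum(r * a for r, a in zip(rel, plane_u))      # 2D coordinates in the plane
    v = sum(r * a for r, a in zip(rel, plane_v))
    return (u, v)

# Pointer at the user's hand, window plane facing the user at z = 2.
print(ray_plane_2d((0, 1.5, 0), (0.1, -0.1, 1), (0, 1, 2), (0, 0, -1), (1, 0, 0), (0, 1, 0)))
```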

     

I also liked the 3D drawing technique presented by Mr. Keefe, called “Dynamic Dragging for Input of 3D Trajectories”.

There were a lot of very interesting posters, like a technique to increase the size of a viewport inside an HMD simply by sliding the front plane of the camera frustum to follow the dominant hand, by Andrei Sherstyuk and Dale Vincent from the University of Hawaii and Caroline Jay from the University of Manchester.

My favorite poster was a technique by Frank Steinicke from the University of Muenster where the user’s real walking is not mapped exactly to the virtual world. For example, you would have to walk in a circle in real life to walk in a straight line in the virtual world. You could also make the user think he walked 20m when he actually walked only 10m. Why would you want such a strange mapping? This translation and rotation compression is a really good way to use a small physical space and make it seem like a huge virtual space.
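Here is a small, purely illustrative sketch of such gains (my own example, not Steinicke’s actual algorithm): real walking is scaled by a translation gain, and a curvature term bends the real path into a circle while the virtual path stays straight.

```python
# Hedged redirected-walking sketch: scale real motion before applying it
# in the virtual world (translation gain + curvature gain).
import math

def apply_redirection(real_step, real_turn, translation_gain=2.0, curvature_rad_per_m=0.15):
    """Map one step of real motion to virtual motion.

    real_step: distance walked this frame (metres)
    real_turn: physical heading change this frame (radians)
    Returns (virtual_step, virtual_turn).
    """
    virtual_step = real_step * translation_gain
    # Subtract the rotation the system itself injected to curve the real path,
    # so walking along the injected circle reads as a straight virtual line.
    injected_turn = real_step * curvature_rad_per_m
    virtual_turn = real_turn - injected_turn
    return virtual_step, virtual_turn

# The user physically walks 10 m along the curved path the system induces:
# with a translation gain of 2, the virtual distance covered is 20 m.
step, turn = apply_redirection(real_step=10.0, real_turn=10.0 * 0.15)
print(step, math.degrees(turn))   # 20.0 m virtually, ~0 degrees of virtual turning
```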

EDIT: Tabitha Peck, along with Mary Whitton and Henry Fuchs, presented in the full VR session a very interesting paper (“Evaluation of Reorientation Techniques for Walking in Large Virtual Environments”) that uses a reorientation technique (ROT) based on distractors. From the abstract: “[we use] distractors – objects in the VR for the user to focus on [like a butterfly] while the VE rotates.” The reorientation technique is also “used to lift the size constraint, (…) when a user is close to walking out of the tracked space.”

There was also a very interesting new display, a 3D box display by Roberto Lopez-Gulliver and Shunsuke Yoshida from NICT:

[Embedded YouTube video]

And an impressive freeform projection display from Daisuke Kondo and Ryugo Kijima from the Virtual System Laboratory of Gifu University. This technique allows you to project a correct image onto any surface (whose 3D mesh you have to know in advance).

[Embedded YouTube video]


     

I’d love to provide links and more information for the papers/devices I mentioned here, so if you’re one of the authors and want to share, feel free to tell me!

    Thu 6 Mar 2008

    IEEE VR 2008

    Published at 11:24   Category Virtual Reality  


    I’m leaving tomorrow to attend IEEE VR 2008 and 3DUI 2008.

    If you’re there too, let me know so that we can meet there!

    Virtools will have a booth, or I’ll be asking questions at the conferences =)

    Thanks to David for enabling the trip ;)

    Thu 6 Mar 2008

    Maglev Haptics

    Published at 11:08   Category VR Devices  

This article at New Scientist talks about a new haptic device “levitated by magnets”.

[Embedded YouTube video]

    Ralph Hollis and colleagues at Carnegie Mellon University, Pittsburgh, US, developed a haptic device with just one moving part. (…)

    A bowl with electromagnets concealed below its base contains a levitating bar that is grasped by a user and can be moved in any direction. The magnets exert forces on the bar to simulate the resistance of a weight, or a surface’s resistance or friction. LEDs on the bar’s underside feed back its position to light sensors in the bowl.

    This approach has “huge potential”, says Anthony Steed, a haptics researcher at University College London, UK. “This system gets rid of the mechanical linkages that are a major constraint on most haptic devices.”

    The maglev interface can exert enough force to make objects feel reassuringly solid, says Hollis, resisting as much as 40 newtons of force before it shifts even a millimetre.

    That’s enough to feel the same as a hard surface and better than most existing interfaces, he says. “Current devices feel very mushy, so it’s hard to simulate a hard surface.”

    The device can track movements of the bar as small as two microns, a fiftieth the width of a human hair. “That’s important for feeling very subtle effects of friction and texture,” says Hollis.

And it can exert and respond to all six degrees of freedom of movement – moving along or rotating about each of the three dimensions of space. “It offers things that other devices just can’t do – the high forces, low friction, low inertia, and six degrees of freedom.”

    After working on a series of prototypes since 1997, Hollis has started a company called Butterfly Haptics to market the technology.