
    Sat 11 Sep 2010

    Sony PSMove – First impressions

    Published at 12:03   Category Game, Game development, Product Review, VR Devices, Virtual Reality DIY  

    I was interviewed by the French tech magazine Amusement (October edition) about Sony’s new 3D controller, the PSMove.

    Sony liked the interview and decided to include it in the (French) press kit:

    [Scans: press-kit cover, interview part 1, interview part 2]

    The interview was done after testing the basic sports games, and my initial reaction was very good. I had to unlearn the bad habits of Wii Sports, where you just had to shake the Wiimote at the right time to hit the tennis ball. Now you have to really move! Come closer, move your hand to realistic positions and orient it in a useful way. The controller behaves like a real 6-DOF tracker as long as the camera sees the glowing ball.

    They really improved perceptive immersion by making you perform basic moves, like actually reaching for an arrow before being able to shoot it. As in Heavy Rain, where you have to make all sorts of seemingly useless movements, those movements in fact increase the Place Illusion. The more ‘realistic’ movements you make that have a believable impact on the world, the more presence!

    Heavy Rain will get a new edition based on the PSMove and I’m eager to test that further. My first impression was mixed, since I didn’t have the chance to run the tutorial and didn’t understand all the moves.

    I also made the point that future hardcore gamers will have to improve their physical abilities to be better at games.

    Then I just said that I’d love to play Splinter Cell and Call of Duty in full VR, and that while interesting, the PSMove is only one step towards real immersive VR (IVR) at home.

    [Photos: PSMove launch event, Heavy Rain, the PSMove controller]

    Then this week I attended Sony’s big launch party for the PSMove, in a three-story loft with a view of the Eiffel Tower!

    There I quickly tested Heavy Rain and Time Crisis, and watched a gameplay video of Killzone 3 using the PSMove.

    I fear the aiming will not be as good as I hoped. What I’ve seen is that when the controller is used for aiming, there is (1) some small but annoying lag, and (2) some annoying jitter. The lag is probably due to heavy filtering, and the jitter would probably be worse without filtering. There was less jitter in the archery game, but the small lag was still there.

    When you think about it, it seems obvious: the optical tracking of the glowing ball alone cannot give you orientation information. The optical tracking of the Wiimote, when used for aiming, feels faster and more precise, because its camera can see the slightest movement of the targeted LEDs. The orientation information of the PSMove comes only from the inertial sensors, which as you know are fast but not very precise. So I fear the aiming is mostly based on filtered inertial data.
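
    To make the lag/jitter tradeoff concrete, here is a minimal sketch of a complementary filter. This is my own illustration, certainly not Sony’s actual algorithm: the gyro integrates fast but drifts, the optical reference is stable but noisy and delayed, and a single blending constant directly trades jitter against lag.

    #include <cstdio>

    // Minimal one-axis complementary filter (illustration only).
    // 'alpha' close to 1.0 trusts the fast-but-drifting gyro (less lag);
    // lowering alpha filters harder (less jitter, more lag).
    struct ComplementaryFilter {
        double angle = 0.0;   // fused orientation estimate (degrees)
        double alpha = 0.98;  // blending constant

        double update(double gyroRate, double referenceAngle, double dt) {
            double gyroEstimate = angle + gyroRate * dt;  // fast, drifts
            angle = alpha * gyroEstimate + (1.0 - alpha) * referenceAngle;
            return angle;
        }
    };

    int main() {
        ComplementaryFilter f;
        // Fake data: gyro reads a steady 10 deg/s turn, sampled at 100 Hz.
        for (int i = 0; i < 5; ++i)
            printf("angle = %.3f deg\n", f.update(10.0, i * 0.1, 0.01));
    }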

    It should be noted that the lighting environment at the gig was probably not ideal, with flashing lights (some matching the color of the glowing ball…), which could also explain why the tracking sometimes got lost.

    So of course all this requires more testing, and I’ll be in the starting blocks next week to buy one and play with it. I mean, work with it!

    Fri 21 Nov 2008

    Chicken stabilization

    Published at 0:13   Category Tech  

    Saw that on Johnny Lee’s blog. Sorry but just… LOOL

    [YouTube video]

    Thu 22 May 2008

    Virtusphere review

    Published at 13:40   Category Product Review, VR Devices  

    A couple of months ago I had the opportunity to test the Virtusphere for two days, and I have since watched several beginners try this device.

    The Virtusphere is a 2.6 m, 120 kg sphere made of ABS plastic, resting on wheels, with an incredibly sophisticated movement-detection device underneath (a mouse!). It is used as a virtual reality locomotion device.
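
    As an aside, turning that mouse’s readings into user displacement is simple geometry. Here is a minimal sketch, my guess at the principle rather than the actual Virtusphere code: convert mouse counts to meters and negate, since the surface rolls backwards under a forward-walking user.

    #include <cstdio>

    // Sketch of the principle (not the actual Virtusphere code): the
    // mouse reads the sphere's surface motion in counts; convert to
    // meters and negate, since the surface rolls opposite to the walker.
    struct SphereTracker {
        double countsPerMeter;  // mouse resolution converted to counts/m

        void surfaceToUser(int dx, int dy, double& ux, double& uy) const {
            ux = -dx / countsPerMeter;
            uy = -dy / countsPerMeter;
        }
    };

    int main() {
        SphereTracker t{ 400.0 / 0.0254 };  // assuming a 400 CPI mouse
        double ux, uy;
        t.surfaceToUser(-150, 0, ux, uy);   // surface rolled backwards
        printf("user moved %.4f m forward\n", ux);
    }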

    You enter the sphere through a small hatch, and are instructed to take small steps first. So you take a small step, and the sphere starts to roll, and you take another step to keep your balance, and… you’re walking! During the first session you might even be able to run, and a lot of people did! Especially girls, who generally perform better than guys.

    Ray Latypov, who invented the Virtusphere along with his brother, started programming games 15 years ago in Russia. He made enough money from his games to finance the first prototypes of the Virtusphere.

    Ray Latypov and the Virtusphere

    But why build a sphere?

    “Once you know the task, walking, the device is easy to create!”, Ray says.

    The Virtusphere doesn’t have any active part; there is no motor. The sphere’s movement comes only from the user’s steps. The sphere rests on wheels:

    Ray says this is an advantage over treadmills, whose motors induce some latency. This is a point I’d like to verify myself.

    But this advantage has a drawback: as the sphere has some inertia, you have to learn to start the movement correctly and, more importantly, learn how to stop it. This is not completely natural, and induces an instability that the user has to manage. This is particularly a problem for tall people (1.80 m and above), who are much less at ease.

    Moreover, due to the size of the sphere, the walking surface is not really planar; this forces you to slightly modify the way you walk. A bigger version of the sphere, 3 m in diameter, exists and would reduce this issue.

    This inertia and the not-so-flat ground make walking in the sphere unnatural, which is why you may not want to use it for studies of real-life walking. The CyberWalk project built an omni-directional treadmill to study natural walking.

    You would think you’d get claustrophobic, but thanks to the design you can see outside quite well, provided the sphere keeps a minimum speed.


    This is also an advantage when you wear an HMD. As lateral vision plays an important role in balance, seeing the real-world “horizon” helps a lot in staying on your feet. When you wear an HMD in the sphere your balance is affected, meaning you are more likely to fall.


    For the moment only the head orientation is tracked. We believe that accurately tracking the head position (not just its orientation) would greatly improve balance.

    By the end of the two days, we had a nice virtual environment running, and a stereo wall next to the sphere displaying the user’s view.

    Conclusion

    It’s a fun locomotion device that really catches the eye as a futuristic VR device. It may not offer natural walking, but the military are using it as a training device where the user can jump and roll, so choosing between this and an omni-directional treadmill (ODT) mainly depends on your application… and your budget: the Virtusphere is around $50,000, whereas an ODT currently costs about 20 times more.


    [YouTube video]

    Fri 25 Jan 2008

    Visual Studio 6 vs 2005 allocators

    Published at 17:43   Category C++  

    (If you didn’t sleep enough, don’t try to understand the following)

    If one day you’re in Visual Studio 2005, creating an object that comes from a DLL compiled with Visual Studio 6, you might run into allocator issues. Normally things go smoothly, because your object is allocated and deallocated with the VC2005 allocator if you create it yourself, or with the VC6 allocator if you use the specific creation functions that come with your DLL.

    What can become very tricky is when this object has a virtual destructor: your object ends up allocated with the VC2005 allocator, but deallocated, through the DLL’s destructor code, with the VC6 allocator. This is not good, and will probably result in a crash that is nearly impossible to solve unless the above-mentioned DLL is rebuilt for VC2005, which may not always be possible if the owner of the DLL doesn’t want to do it himself or hand over his source code.

    Here’s a solution (I’m really happy we have code gurus here…): allocate the memory for this object using the VC6 allocator. How do you do that? Easy (aha)! Simply load the VC6 runtime DLL (msvcrt), find the pointer to its allocator function, and call it!

    #include <windows.h>  // LoadLibrary, GetProcAddress, FreeLibrary
    #include <new>        // placement new

    // Load the VC6 runtime and grab its malloc.
    HMODULE vc6handle = LoadLibrary(TEXT("msvcrt"));

    typedef void* (*vc6_malloc)(size_t _Size);
    vc6_malloc vc6funcptr = (vc6_malloc)GetProcAddress(vc6handle, "malloc");

    // Allocate with VC6's malloc, then construct the object in place.
    TheClass* toto = (TheClass*)vc6funcptr(sizeof(TheClass));
    toto = new (toto) TheClass;

    FreeLibrary(vc6handle);

    You’ll notice that this new takes the previously allocated buffer as a parameter. This is not the regular allocating new, but placement new, which constructs the object in a buffer you provide.
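
    For completeness, here is a sketch of the matching teardown. This is my own addition rather than part of the original trick, for the case where you manage the object’s lifetime yourself instead of going through the DLL’s delete: destroy in place, then free with the same VC6 allocator (grab the free pointer before the FreeLibrary call above).

    // Fetch msvcrt's free the same way (before FreeLibrary(vc6handle)).
    typedef void (*vc6_free)(void* _Ptr);
    vc6_free vc6freeptr = (vc6_free)GetProcAddress(vc6handle, "free");

    toto->~TheClass();  // run the destructor without deallocating
    vc6freeptr(toto);   // release the buffer with msvcrt's free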

    I won’t even try to explain how they figured out the problem…

    Any questions? =)

    Fri 11 Jan 2008

    Stereoscopic phone

    Published at 17:16   Category Tech, VR Displays  

    NEC is introducing the N704iµ, which reportedly has a stereoscopic screen. I can’t find any more information than this picture, so if you come across something interesting, please leave a comment =)

    Thu 10 Jan 2008

    Intel may put a GPU in the CPU and develop realtime raytracing

    Published at 13:45   Category 3d, Tech  

    I found on Dom’s blog this Ars Technica article about Intel’s research into a “many-core” architecture that could include a GPU:

    From what has been revealed so far, “Larrabee” appears to be the codename for a so-called “many-core” [>8 cores] architecture that will include a variety of actual products and implementations. (…)

    Larrabee parts will have ten or more in-order cores per die. Each of these cores (…) will support up to four simultaneous threads of execution.

    (…)

    So while Intel won’t publicly talk about any actual products that will arise from the project, it’s clear that a GPU aimed at real-time 3D rendering for games will be among the first public fruits of Larrabee, with non-graphics products following later.

    (…)

    Daniel Pohl © The Inquirer

    Intel employee Daniel Pohl was on hand at last week’s IDF to give demonstrations of a version of Quake 4 that uses real-time ray tracing running on Intel hardware. Charlie at the Inquirer managed to catch the demo, and he published an account of it this morning that attempts to get at where the GPU is eventually headed as a product.

    (…) there’s no doubt that Intel is serious about bringing this technology to real-time 3D games.

    [But] because of the demands of the ray-tracing engine, Pohl had to swap out many of the game’s detailed textures in favor of reflective surfaces that exploit ray tracing, because there was no horsepower left over for texturing. It’s also the case that the demo required four quad-core machines ganged together, and even then it didn’t run at a playable framerate. So those reflective and refractive effects are nice, but they’re not worth that kind of horsepower requirement, especially when you compare the overall look of the resulting game to what a single G80 can do with the right coding.

    (…) ditching rasterization for some kind of pure ray-tracing approach is such a giant step backwards in terms of performance that when you factor in the fact that you’re forced to trade programmable shaders for a limited palette of flashy reflective/refractive effects, then sticking with rasterization is a no-brainer.

    (…)

    [Larrabee would seem to be] an ideal architecture on which to run a much more sensibly designed (higher quality, and faster) *hybrid* engine, that traces rays on-demand for secondary effects, while using Z-buffer, REYES, or similar pipelines for primary visibility.

    So we won’t get realtime Renderman today :-/

    Fri 21 Dec 2007

    Future Crew – Second Reality

    Published at 17:40   Category 3d, Tech  

    Just a small post to show you a video of one of the most famous old-school demos: Second Reality, by Future Crew (1993).

    Back then you had to tweak your autoexec.bat and config.sys to free enough memory to run those beautiful demos. If you didn’t have a Gravis Ultrasound you didn’t always get sound. And there were no 3D cards, so know that this is all software rendering.

    [YouTube video]

    Merry Christmas ;)

    Fri 20 Jul 2007

    News on Brain Computer Interfaces

    Published at 9:11   Category Tech, VR Devices  

    I’m just back from IPT/EGVE 2007; more on that later.

    There were several interesting talks about BCIs (Brain-Computer Interfaces).

    >> Wheelchair control from thought

    The first one was from Prof. Dr. Gert Pfurtscheller of the Laboratory of Brain-Computer Interfaces, Graz University of Technology, Austria:
    Wheelchair control from thought: Simulation in an immersive virtual environment.

    Here are some notes I took during this session:

    The first thing to know is that these BCIs don’t read your thoughts. They won’t be able to tell when you think ‘I want to go left’. They ‘simply’ discriminate between certain thoughts, such as imagining moving your hands or feet, by monitoring your motor cortex.
    But you need different strategies to match patterns, because not everyone will be able to ‘create’ the right thought reliably. For example, for one subject, thinking about the left hand (to go left) vs the right hand (to go right) didn’t work well. They found that asking the subject to think about moving both feet vs moving the right hand could lead to 100% discrimination.
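
    To give a feel for what ‘discrimination’ means here, below is a toy sketch. It is my own illustration, nothing like the Graz group’s real pipeline: each trial is reduced to one feature (say, mu-band power over the foot motor area minus power over the hand area), and training just places a threshold between the two class means. Real systems use proper classifiers and per-subject calibration, which is exactly why the mental strategy must be adapted to each user.

    #include <cstdio>
    #include <vector>

    // Toy two-class motor-imagery discriminator (illustration only).
    // One feature per trial; the decision threshold is the midpoint
    // between the class means measured during calibration.
    double mean(const std::vector<double>& v) {
        double s = 0.0;
        for (double x : v) s += x;
        return s / v.size();
    }

    int main() {
        std::vector<double> feetTrials = {2.1, 1.8, 2.4};  // calibration
        std::vector<double> handTrials = {0.6, 0.9, 0.4};
        double threshold = 0.5 * (mean(feetTrials) + mean(handTrials));

        double newTrial = 1.9;  // unseen feature value
        printf("detected: %s\n",
               newTrial > threshold ? "both feet" : "right hand");
    }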

    By monitoring only the feet motor zone of the brain, the patient successfully moved a wheelchair in a VE. You can see a video here; search for “EEG-based walking of a tetraplegic in virtual reality”.

    Moreover, some completely paralyzed patients can only communicate through thoughts, so BCIs could improve their lives.

    Yann Renard, who works with Anatole Lecuyer, also explained to me another technique called ‘steady state’: you focus your attention on an oscillation, like a visual blink or a sound, and the activity of the visual or auditory cortex synchronises with the frequency of the oscillation.
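
    As a rough sketch of how such a steady-state response can be detected (my own illustration, not anything Yann described): measure the signal power at each candidate stimulus frequency, for which the Goertzel algorithm is a cheap tool, and pick the strongest.

    #include <cmath>
    #include <cstdio>

    const double kPi = 3.14159265358979323846;

    // Signal power at one frequency via the Goertzel algorithm.
    double goertzelPower(const double* x, int n, double freq, double fs) {
        double coeff = 2.0 * cos(2.0 * kPi * freq / fs);
        double s0 = 0.0, s1 = 0.0, s2 = 0.0;
        for (int i = 0; i < n; ++i) {
            s0 = x[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;
    }

    int main() {
        const double fs = 256.0;  // sampling rate (Hz)
        const int n = 512;        // two seconds of signal
        double eeg[n];
        for (int i = 0; i < n; ++i)  // fake EEG locked to a 15 Hz stimulus
            eeg[i] = sin(2.0 * kPi * 15.0 * i / fs);

        // Which of two flickering targets is the user attending?
        double p12 = goertzelPower(eeg, n, 12.0, fs);
        double p15 = goertzelPower(eeg, n, 15.0, fs);
        printf("attending the %s target\n", p15 > p12 ? "15 Hz" : "12 Hz");
    }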

    More info on BCI here: bci-info.org

    >> Intuition
    There was another talk by the Intuition network of excellence (focused on virtual reality) about neural interfaces, chaired by Roland Blach (Fraunhofer IAO, Stuttgart), with talks from Oliver Stefani (COAT Basel), Anatole Lecuyer (INRIA) and Marc Erich Latoschik (University of Bielefeld).
    Neural interfaces could be used in VEs, not necessarily to control them (movements, etc.), but rather as an input for adapting the VE to the user.

    A limiting factor is that these interfaces are not easy to set up, and the calibration procedure potentially has to be redone for each user.

    An interesting fact that was presented: it seems that sometimes, when you make a mistake, your unconscious mind notices it, but you still go through with the conscious decision to perform the action. You read that right: the conscious and unconscious mind fight over what’s right and wrong!

    So they think that maybe one day neural interfaces could be used to warn a user that he is about to perform an action his unconscious mind disagrees with, so maybe he should think it over.
    You could also use them to dynamically adapt the VE, in real time, to the user’s previous behaviours and desires, and to support the user’s cognitive and perceptual internal schemes. You could also create augmented-cognition interfaces that adapt to cognitive workload, stress, etc.

    >> OpenVibe

    Anatole Lecuyer presented the OpenVibe project.

    The goal of the OpenVibe project is to deliver a technological demonstrator and open-source software to help develop BCIs. As I don’t know the challenges of developing such applications, I can’t comment on the features of the software.

    Anatole said that BCI can be used to improve VR, but VR can also improve BCI.

    The result I found most interesting: with a BCI “helmet” you only get electrical information from the surface of the brain, but OpenVibe can reconstruct the 3D electrical activity of the brain in realtime. This gives more in-depth information about the activity of individual brain zones, and also makes it possible to display the brain’s activity in 3D, in realtime. Maybe this realtime visualisation will let us gain better control over our own brain activity and modify it on the fly.

    Fri 6 Jul 2007

    EA STL

    Published at 9:28   Category C++, Game development  

    Electronic Arts has written its own version of the C++ STL, because the standard STL doesn’t fit gamedev constraints:

    Gaming platforms and game designs place requirements on game software which differ from requirements of other platforms. Most significantly, game software requires large amounts of memory but has a limited amount to work with. Gaming software is also faced with other limitations such as weaker processor caches, weaker CPUs, and non-default memory alignment requirements. A result of this is that game software needs to be careful with its use of memory and the CPU. The C++ standard library’s containers, iterators, and algorithms are potentially useful for a variety of game programming needs. However, weaknesses and omissions of the standard library prevent it from being ideal for high performance game software. Foremost among these weaknesses is the allocator model. An extended and partially redesigned replacement (EASTL) for the C++ standard library was implemented at Electronic Arts in order to resolve these weaknesses in a portable and consistent way. This paper describes game software development issues, perceived weaknesses of the current C++ standard, and the design of EASTL as a partial solution for these weaknesses.
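
    To make the allocator complaint concrete, here is a sketch of the style of allocator the paper argues for. The names are mine for illustration, not the actual EASTL API: allocators are plain objects passed explicitly, can be named for per-subsystem memory tracking, and take alignment requests, all things std::allocator made painful at the time.

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>

    // Illustrative allocator in the spirit of the paper (not the real
    // EASTL API): explicit, nameable, alignment-aware.
    class GameAllocator {
    public:
        explicit GameAllocator(const char* name) : mName(name) {}

        void* allocate(size_t n, size_t alignment = 8) {
            printf("[%s] %zu bytes (align %zu)\n", mName, n, alignment);
            return malloc(n);  // a real version would honor 'alignment'
        }
        void deallocate(void* p, size_t /*n*/) { free(p); }

    private:
        const char* mName;  // lets you attribute memory per subsystem
    };

    int main() {
        GameAllocator ai("AI");  // one allocator per subsystem
        void* p = ai.allocate(256, 16);
        ai.deallocate(p, 256);
    }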

    Wed 4 Jul 2007

    Comparing HMDs

    Published at 15:39   Category Product Review, VR Displays  

    Marc Bernatchez, from VResources, has an interesting article about comparing HMDs. He compares angular resolution, field of view, stereo overlap, and how all these factors relate to human visual abilities.
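
    As a back-of-the-envelope sketch of one such metric (my own illustration, not from the article): angular resolution is just field of view divided by pixel count, compared against the eye’s roughly one-arcminute acuity.

    #include <cstdio>

    // Back-of-the-envelope angular resolution of an HMD: horizontal
    // FOV per horizontal pixel, in arcminutes. The human eye resolves
    // roughly 1 arcmin, which gives a reference point.
    int main() {
        double fovDegrees = 40.0;   // horizontal field of view (example)
        double pixels = 1280.0;     // horizontal resolution (example)
        double arcminPerPixel = fovDegrees * 60.0 / pixels;
        printf("%.2f arcmin per pixel (eye: ~1 arcmin)\n", arcminPerPixel);
    }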

    For example, here’s the summary of the angular resolution analysis:
