News on Brain Computer Interfaces

I’m just returning from IPT/EGVE 2007, more on that later.

There have been several interesting talks about BCI (Brain Computer Interfaces).

>> Wheelchair control from thought

The first one was by Prof. Dr. Gert Pfurtscheller from the Laboratory of Brain-Computer Interfaces at Graz University of Technology, Austria:
Wheelchair control from thought: Simulation in an immersive virtual environment.

Here are some notes I took during this session:

The first thing to know is that these BCIs don’t read your thoughts. They can’t tell when you think ‘I want to go left’. They ‘simply’ discriminate between a few mental tasks, such as imagining moving your hands or your feet, by monitoring your motor cortex.
But you need different strategies for different people, because not everyone can produce a mental pattern that is reliably detected. For example, for one subject, imagining left-hand movement (to go left) versus right-hand movement (to go right) didn’t work well. They found that asking the subject to imagine moving both feet versus moving the right hand led to 100% discrimination.
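
To make this more concrete, here is a minimal sketch of what such a two-class discrimination could look like, assuming hypothetical EEG trials recorded over the motor cortex, simple band-power features and a linear classifier. This is only an illustration with placeholder data, not the actual pipeline used by the Graz group.

```python
# Illustrative sketch only: two-class motor imagery discrimination from EEG band power.
# `trials` and `labels` are random placeholders standing in for real recordings.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250  # assumed sampling rate in Hz

def band_power(trial, fs, band=(8, 30)):
    """Mean power in the mu/beta band (8-30 Hz) for each channel of one trial."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

# Hypothetical data shapes: 40 trials, 3 channels (e.g. C3, Cz, C4), 2 s each.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 3, 2 * fs))
labels = rng.integers(0, 2, size=40)  # 0 = feet imagery, 1 = right-hand imagery

features = np.array([band_power(t, fs) for t in trials])
clf = LinearDiscriminantAnalysis().fit(features, labels)
print("Training accuracy:", clf.score(features, labels))  # chance level here, since the data is random
```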

By monitoring only the feet motor zone of the brain, the patient successfully moved a wheelchair in a VE. You can see a video of this: search for “EEG-based walking of a tetraplegic in virtual reality”.

Moreover, some completely paralyzed patients can only communicate through thoughts, so BCIs could improve their lives.

Yann Renard, who works with Anatole Lecuyer, also explained to me that you can use another technique called the ‘steady state’: you focus your attention on an oscillating stimulus, like a visual blink or a sound, and the activity of the visual or auditory cortex synchronises with the frequency of that oscillation.
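
As a rough illustration of the steady-state idea, the sketch below compares spectral power at two candidate flicker frequencies to decide which target the user is attending to. The signal is simulated; in a real system it would be an EEG channel over the visual cortex, and the frequencies and tolerances are assumptions.

```python
# Illustrative sketch only: steady-state detection by comparing power at stimulus frequencies.
import numpy as np

fs = 250                               # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)            # 4 seconds of signal
# Simulated "EEG": a weak 10 Hz component buried in noise stands in for a real recording.
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(1).standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

def power_at(f, tol=0.5):
    """Total spectral power within +/- tol Hz of frequency f."""
    return spectrum[(freqs > f - tol) & (freqs < f + tol)].sum()

target_freqs = [10.0, 15.0]            # two targets blinking at different rates
chosen = max(target_freqs, key=power_at)
print("Attended target estimated at", chosen, "Hz")
```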

More info on BCI here: bci-info.org

>> Intuition

There was another talk by the Intuition Network (a network of excellence focused on virtual reality) about Neural Interfaces, chaired by Roland Blach (Fraunhofer IAO, Stuttgart), with talks from Oliver Stefani (COAT Basel), Anatole Lecuyer (Inria) and Marc Erich Latoschik (University of Bielefeld).
Neural interfaces could be used in VEs, not necessarily to control them directly (movements and so on), but rather as an input for adapting the VE to the user.

A limiting factor is that these interfaces are not easy to set up, and the calibration procedure potentially has to be repeated for each user.

An interesting finding that was presented: it seems that sometimes, when you make a mistake, your unconscious mind notices it, but you still go ahead with the conscious decision to perform the action. You read that right, the conscious and unconscious mind can disagree over what’s right and wrong!

So they think that maybe one day neural interfaces could be used to warn a user that he is about to perform an action his unconscious mind disagrees with, suggesting he should think twice about it.
You could also use them to adapt the VE dynamically and in real time based on previous behaviours and desires, supporting the user’s cognitive and perceptual internal schemes. You could also create augmented cognition interfaces that adapt to cognitive workload, stress and so on.
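
As a toy sketch of what such an augmented cognition loop might look like, assume a hypothetical workload estimate in [0, 1] coming from the neural interface; both the estimator and the adaptation rule below are purely illustrative.

```python
# Illustrative sketch only: adapting a VE to an assumed cognitive workload estimate.
# `estimate_workload` is a hypothetical stand-in for a real EEG-derived measure.
import random

def estimate_workload() -> float:
    """Placeholder workload index in [0, 1]; a real system would compute this
    from EEG features, not random numbers."""
    return random.random()

def adapt_environment(workload: float) -> dict:
    """Simple rule: reduce the amount of simultaneous information when workload is high."""
    if workload > 0.7:
        return {"hud_detail": "minimal", "task_pacing": "slow"}
    if workload < 0.3:
        return {"hud_detail": "full", "task_pacing": "fast"}
    return {"hud_detail": "normal", "task_pacing": "normal"}

for _ in range(3):
    w = estimate_workload()
    print(round(w, 2), adapt_environment(w))
```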

>> OpenVibe

Anatole Lecuyer presented the OpenVibe project.

The goal of the OpenVibe project is to deliver a technological demonstrator and open-source software to help develop BCIs. As I don’t know the challenges of developing such applications, I can’t comment on the features of the software.

Anatole said that BCI can be used to improve VR, but VR can also improve BCI.

The result I found the most interesting is that when using a BCI “helmet”, you only get electrical information from the surface of the head. OpenVibe is able to reconstruct the 3D electrical activity of the brain in real time. This gives more in-depth information about the activity of individual brain zones, and it also allows the brain’s activity to be displayed in real time and in 3D. Maybe this real-time visualisation will let us gain better control over our brain activity and modify it in real time.
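
For intuition on how surface measurements can be turned into a 3D activity estimate, here is a minimal sketch of a standard minimum-norm linear inverse. This is not necessarily the algorithm OpenVibe uses, and the lead-field matrix here is random rather than derived from a real head model.

```python
# Illustrative sketch only: minimum-norm inverse mapping scalp potentials to source activity.
import numpy as np

rng = np.random.default_rng(2)
n_electrodes, n_sources = 32, 500

L = rng.standard_normal((n_electrodes, n_sources))  # hypothetical lead-field (forward) matrix
eeg = rng.standard_normal(n_electrodes)              # one time sample of scalp potentials

lam = 0.1  # regularisation: the problem is underdetermined (far more sources than sensors)
sources = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), eeg)
print(sources.shape)  # estimated activity at each source location, for one instant
```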


