BCI, biocybernetic control and gaming

Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed the trip and the workshop. Since I'd already prepared the presentation, I made a podcast to share with the workshop attendees. I dug it out of the vaults for this post because gaming and physiological computing is such an interesting topic.

The work is dated now, but basically I'm drawing a distinction between my understanding of BCI and biocybernetic adaptation: the former is an alternative means of input control within the HCI, while the latter can be used to adapt the nature of the HCI. I also argue that BCI is ideally suited to certain types of game mechanics precisely because it will not work 100% of the time. I used the TV series “Heroes” to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost all enthusiasm for that show after series 1.
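To make that distinction concrete, here is a rough sketch of the two loops. It is purely illustrative: the band-power dictionary, the function names and the thresholds are my own inventions rather than anything from the CHI paper, and the engagement ratio is just one commonly cited index.

```python
# Illustrative contrast between the two loops discussed above.
# Assumes samples arrive as dicts of EEG band power; all names and
# thresholds are hypothetical, not taken from the CHI paper.

def bci_input(sample):
    """BCI as input control: translate a brain signal into an explicit command."""
    # e.g. lateralised motor imagery mapped onto a discrete action
    return "move_left" if sample["mu_left"] > sample["mu_right"] else "move_right"

def biocybernetic_adaptation(sample, game):
    """Biocybernetic adaptation: the signal never issues a command;
    it quietly tunes the nature of the interaction instead."""
    engagement = sample["beta"] / (sample["alpha"] + sample["theta"])
    if engagement < 0.5:            # player disengaged -> raise the challenge
        game["enemy_spawn_rate"] *= 1.1
    elif engagement > 1.5:          # player overloaded -> ease off
        game["enemy_spawn_rate"] *= 0.9
    return game
```

The point of the sketch is the asymmetry: the first function has to be right every time it fires, whereas the second only has to nudge parameters in roughly the right direction.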

The original CHI paper for this presentation is available here.

Video: http://player.vimeo.com/video/32983880


The Ultimate Relax to Win Dynamic

I came across an article in a Sunday newspaper a couple of weeks ago about an artist called xxxy who has created an installation using a BCI of sorts.  I’m piecing this together from what I read in the paper and what I could see on his site, but the general idea is this: person wears a portable EEG rig (I don’t recognise the model) and is placed in a harness with wires reaching up and up and up into the ceiling.  The person closes their eyes and relaxes – presumably as they enter a state of alpha augmentation, they begin to levitate courtesy of the wires.  The more that they relax or the longer they sustain that state, the higher they go.  It’s hard to tell from the video, but the person seems to be suspended around 25-30 feet in the air.
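Out of curiosity, here is my guess at the mapping that could sit behind an installation like this: sustained alpha raises the harness, losing the relaxed state lowers it. This is not the artist's implementation, and every name and number below is hypothetical.

```python
# Hypothetical relax-to-win mapping: relative alpha power drives height.

ALPHA_BASELINE = 1.0   # resting alpha power taken during a calibration period
GAIN = 0.05            # metres gained per second of sustained relaxation
MAX_HEIGHT = 9.0       # roughly the 25-30 feet estimated from the video

def update_height(height, alpha_power, dt):
    """Integrate harness height from relative alpha power over one time step."""
    relaxation = (alpha_power - ALPHA_BASELINE) / ALPHA_BASELINE
    if relaxation > 0:                       # deeper relaxation -> ascend
        height += GAIN * relaxation * dt
    else:                                    # relaxation lost -> descend gently
        height -= GAIN * dt
    return max(0.0, min(MAX_HEIGHT, height))
```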



Valve experimenting with physiological input for games

This recent interview with Gabe Newell of Valve caught our interest because it's so rare that a game developer talks publicly about the potential of physiological computing to enhance the experience of gamers.  The idea of using live physiological data feeds to adapt computer games and enhance game play was first floated by Kiel in these papers way back in 2003 and 2005.  Like Kiel, in my writings on this topic (Fairclough, 2007; 2008 – see publications here), I focused exclusively on two problems: (1) how to represent the state of the player, and (2) what the software could do with this representation of the player state.  In other words, how can live physiological monitoring of the player state inform real-time software adaptation?  For example, making the game harder, adjusting the music or offering help (a set of strategies that Kiel summarised in three categories: challenge me, assist me and emote me), with these adjustments made in real time in order to enhance game play.
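For readers who like to see the loop spelled out, here is a minimal sketch of that kind of real-time adaptation, routing an inferred player state into one of the three strategy categories above. The sensor dictionary, the thresholds and the game object with its increase_difficulty/offer_help/adjust_music methods are all my own hypothetical placeholders, not Valve's or Kiel's code.

```python
# Sketch of a real-time adaptation loop: represent the player state,
# then pick a challenge me / assist me / emote me style response.
# Sensor keys, thresholds and game methods are hypothetical.

def classify_state(sensors):
    """Crude two-dimensional representation of the player: arousal and frustration."""
    arousal = sensors["heart_rate"] / sensors["resting_heart_rate"]
    frustration = sensors["skin_conductance"] - sensors["scl_baseline"]
    return arousal, frustration

def adapt(game, sensors):
    arousal, frustration = classify_state(sensors)
    if arousal < 1.05 and frustration <= 0:
        game.increase_difficulty()      # "challenge me": player is under-stimulated
    elif frustration > 0.5:
        game.offer_help()               # "assist me": player is struggling
    else:
        game.adjust_music(arousal)      # "emote me": shape mood rather than difficulty
    return game
```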

