Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed the trip and the workshop. As I’d prepared the presentation, I made a podcast for sharing with the workshop attendees. I dug it out of the vaults for this post because the combination of gaming and physiological computing is such an interesting topic.
The work is dated now, but basically I’m drawing a distinction between my understanding of BCI and biocybernetic adaptation: the former is an alternative means of input control within the HCI, whereas the latter can be used to adapt the nature of the HCI itself. I also argue that BCI is ideally suited to certain types of game mechanics precisely because it will not work 100% of the time. I used the TV series “Heroes” to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost all enthusiasm for that show after series 1.
The original CHI paper for this presentation is available here.
[iframe width="400" height="300" src="http://player.vimeo.com/video/32983880"]
Recent posts on the blog have concerned the topic of psychophysiology (or biometrics) and the evaluation of player experience. Based on those posts and the comments that followed, I decided to do a thought experiment.
Imagine that I work for a big software house that wants to sell as many games as possible and to ensure that its product (which costs on average $3-5 million to develop per platform) is as good as it possibly can be – and one of the suits from upstairs calls and asks me: “How should we be using biometrics as part of our user experience evaluation? The equipment is expensive, it’s labour-intensive to analyse and nobody seems to understand what the data means.” (This sentiment is not exaggerated; I once presented a set of fairly ambiguous psychophysiological data to a fellow researcher, who nodded purposefully and said, “So the physiology stuff is voodoo.”)
Here’s a list of 10 things I would push for by way of a response.
This recent interview with Gabe Newell of Valve caught our interest because it’s so rare that a game developer talks publicly about the potential of physiological computing to enhance the experience of gamers. The idea of using live physiological data feeds in order to adapt computer games and enhance game play was first floated by Kiel in these papers way back in 2003 and 2005. Like Kiel, in my writings on this topic (Fairclough, 2007; 2008 – see publications here), I focused exclusively on two problems: (1) how to represent the state of the player, and (2) what the software could do with this representation of the player state. In other words, how can live physiological monitoring of the player state inform real-time software adaptation? For example, to make the game harder, to adjust the music or to offer help (a set of strategies that Kiel summarised in three categories: challenge me/assist me/emote me) – but to make these adjustments in real time in order to enhance game play.
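To make the adaptation idea concrete, here is a minimal sketch of how a game loop might map an estimate of player state onto Kiel's three strategy categories. Everything below is illustrative: the function name, the normalised `arousal` and `frustration` inputs, and the threshold values are my assumptions, not part of any published or shipping system.

```python
from enum import Enum
from typing import Optional


class Strategy(Enum):
    """Kiel's three adaptation categories (2003; 2005)."""
    CHALLENGE_ME = "challenge me"   # raise game difficulty
    ASSIST_ME = "assist me"         # offer the player help
    EMOTE_ME = "emote me"           # adjust mood / atmosphere


def choose_strategy(arousal: float, frustration: float) -> Optional[Strategy]:
    """Map a toy player-state estimate onto an adaptation strategy.

    `arousal` and `frustration` are assumed to be 0-1 scores derived
    from physiological signals (e.g. heart rate, skin conductance);
    the thresholds are placeholders, not validated values.
    """
    if frustration > 0.7:
        return Strategy.ASSIST_ME        # player is struggling: help out
    if arousal < 0.3:
        return Strategy.CHALLENGE_ME     # under-stimulated: make it harder
    if arousal <= 0.7 and frustration < 0.3:
        return Strategy.EMOTE_ME         # comfortable: enrich the atmosphere
    return None                          # otherwise leave the game alone
```

In a real system the hard part is upstream of this function: producing trustworthy state estimates from noisy physiological data in the first place.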
FutureLab have published a discussion paper entitled “Neurofeedback: is there a potential for use in education?” It’s interesting to read a report devoted to the practical uses of neurofeedback for non-clinical populations. In short, the report covers definitions of neurofeedback & example systems (including EEG-based games like Mindball and MindFlex) as background. Then, three potential uses of neurofeedback are considered: training for sports performance, training for artistic performance and training to treat ADHD. The report doesn’t draw any firm conclusions, as might be expected given the absence of systematic research programmes (in education). Aside from flagging up a number of issues (intrusion, reliability, expense), it’s obvious that we don’t know how these techniques are best employed in an educational environment, i.e. how long do students need to use them? What kind of EEG changes are important? How might neurofeedback be combined with other training techniques?
As I see it, there are a number of distinct application domains to be considered: (1) using neurofeedback to shift into a desired psychological state prior to a learning experience or examination (drawn from sports neurofeedback), (2) adapting educational software in real time to keep the learner motivated (avoiding disengagement or boredom), and (3) teaching children about biological systems using biofeedback games (self-regulation exercises plus a human biology practical). I’m staying with non-clinical applications here but obviously the same approaches may be applied to ADHD.
(1) and (3) above both correspond to a traditional biofeedback paradigm where the user works with the processed biological signal to develop a degree of self-regulation that hopefully will transfer with practice. (2) is more interesting in my opinion; in this case, the software is being adapted in order to personalise and optimise the learning process for that particular individual. In other words, an efficient psychological state for learning is being created in situ by dynamic software adaptation. This approach isn’t so good for encouraging self-regulatory strategies compared to traditional biofeedback, but I believe it is more potent for optimising the learning process itself.
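As a sketch of approach (2), one widely cited candidate signal is the EEG engagement index beta / (alpha + theta) from Pope and colleagues' biocybernetic work at NASA (Pope et al., 1995). The adaptation rule below is my own toy illustration: the threshold values and the `adapt_difficulty` function are assumptions for the sake of the example, not validated pedagogy.

```python
def engagement_index(theta: float, alpha: float, beta: float) -> float:
    """EEG engagement index beta / (alpha + theta) (Pope et al., 1995).

    Inputs are band powers for the theta, alpha and beta bands;
    a higher index is taken to reflect greater task engagement.
    """
    return beta / (alpha + theta)


def adapt_difficulty(level: int, index: float,
                     low: float = 0.4, high: float = 0.7) -> int:
    """Toy adaptation rule for educational software.

    The `low`/`high` thresholds are illustrative placeholders: a low
    index is read as disengagement (raise the demand to re-engage),
    a high index as possible overload (ease off), and anything in
    between leaves the task unchanged.
    """
    if index < low:
        return level + 1          # learner drifting: make task more demanding
    if index > high:
        return max(1, level - 1)  # possibly overloaded: ease off
    return level
```

A real implementation would smooth the index over a sliding window and calibrate the thresholds per learner, since absolute band powers vary enormously between individuals.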
The Wired games blog has an article about the next Wii-enabled installment of survival-horror classic Silent Hill coming later in the year. Full article is here. A couple of paragraphs at the end about psych-profiling the players caught my attention, which I’ve pasted below. The basic idea is that software monitors behavioural responses to the environment and adapts the gaming software accordingly. My guess is that it’s not as subtle as the creators claim below. IMO, here is an application crying out for the physiological computing approach. Imagine if we could develop a player profile based on both overt behavioural responses and covert psychophysiological reactions to different events. The more complexity you can work into your player profile, the more subtlety and personalisation can be achieved by software adaptation. Of course, as usual, this kind of probing of player experience comes with a range of data protection issues. If current events surrounding online privacy (e.g. Facebook, Phorm) are anything to go by, this is likely to be even more of an issue for future systems.
“The way that (most) games deal with interactivity can be quite simple and dull,” says Barlow. “You’re the big barbarian hero, do you want to save the maiden or not? Do you want to be good or evil? It’s slightly childish. The idea behind the psych profile is that the game is constantly monitoring what the player is doing, and it creates a very deep set of data around that, and every element of the game is changed and varied.” Barlow and Hulett wouldn’t talk, at this early stage, about what sorts of things might change due to how you play the game, or what kind of data the game collects about you as you play. In the trailer that Konami showed, a character flashed between two very different physical appearances — that could be one of the things that changes. The psych profile also sounds slightly sneaky. You won’t necessarily know that things have changed based on your gameplay style, says Hulett: “When you go online and talk about it with your friends, they wouldn’t know what you were talking about.”
“We’re trying to play on subconscious things. Pick up on things that you don’t know you’re giving away,” says Barlow.
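To illustrate the kind of fused profile I have in mind (overt behaviour plus covert physiology), here is a minimal data-structure sketch. The class, event names and the toy `startle_prone` inference are all hypothetical, invented for this example; they are not based on anything in the Silent Hill psych profile or any shipping game.

```python
import time
from dataclasses import dataclass, field


@dataclass
class PlayerProfile:
    """Hypothetical profile fusing overt behaviour with covert physiology.

    Each entry is an (event_name, timestamp) pair; event names such as
    "hr_spike" or "fled_combat" are illustrative placeholders.
    """
    behavioural: list = field(default_factory=list)    # overt in-game actions
    physiological: list = field(default_factory=list)  # covert bodily reactions

    def log_behaviour(self, event: str) -> None:
        self.behavioural.append((event, time.time()))

    def log_physiology(self, event: str) -> None:
        self.physiological.append((event, time.time()))

    def startle_prone(self) -> bool:
        """Toy inference: the player reacts bodily more often than overtly."""
        spikes = sum(1 for name, _ in self.physiological if name == "hr_spike")
        return spikes > len(self.behavioural)
```

The point of the sketch is simply that physiological events give the profile a second, covert channel: a player who never flinches on screen may still be giving plenty away through their heart rate.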