Physiological Computing F.A.Q.

This post is out of date; please see the dedicated FAQ page for the latest revisions.

1.  What is physiological computing?

Physiological computing is a term used to describe any computing system that uses real-time physiological data as an input stream to control the user interface.  A physiological computing system takes psychophysiological information from the user, such as heart rate or brain activity, and uses these data to make the software respond in real time.  Physiological computing is a multidisciplinary field of research involving contributions from psychology, neuroscience, engineering and computer science.

2.  How does physiological computing work?

Physiological computing systems collect physiological signals, analyse them in real time and use this analysis as an input for computer control.  This cycle of data collection, analysis and interpretation is encapsulated within a biocybernetic control loop.

This loop describes how eye movements may be captured and translated into up/down and left/right commands for cursor control.  The same flow of information can be used to represent how changes in electrocortical activity (EEG) of the brain can be used to control the movement of an avatar in a virtual world or to activate/deactivate system automation.  With respect to an affective computing application, a change in physiological activity, such as increased blood pressure, may indicate higher levels of frustration and the system may respond with help information.  The same cycle of collection-analysis-translation-response is apparent.  Alternatively, physiological data may be logged and simply represented to the user or a medical professional; this kind of ambulatory monitoring doesn’t involve human-computer communication but is concerned with the enhancement of human-human interaction.
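The collection-analysis-translation-response cycle described above can be sketched in a few lines of code.  Everything here is illustrative: the sensor is simulated with random numbers, and the mapping from heart rate to "frustration" is a deliberately crude stand-in for real psychophysiological inference — the names and thresholds are mine, not part of any real system.

```python
import random  # stands in for a real sensor driver


def read_sensor():
    """Collect: sample a raw physiological signal (here, a simulated heart rate in BPM)."""
    return random.gauss(75, 10)


def analyse(samples):
    """Analyse: reduce a window of raw samples to a single feature (mean heart rate)."""
    return sum(samples) / len(samples)


def translate(mean_hr, baseline=75.0):
    """Translate: map the feature onto a psychological inference (a crude arousal index)."""
    return (mean_hr - baseline) / baseline


def respond(arousal):
    """Respond: adapt the interface, e.g. offer help when arousal suggests frustration."""
    return "offer_help" if arousal > 0.15 else "no_change"


# One pass around the biocybernetic loop: collect a window, analyse, translate, respond.
window = [read_sensor() for _ in range(10)]
action = respond(translate(analyse(window)))
```

In a real system the loop runs continuously, and the translate step is by far the hardest part — it is where the psychophysiological inference lives.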

3.  Give me some examples.
Researchers became interested in physiological computing in the 1990s.  A group based at NASA developed a system that measured user engagement (whether the person was paying attention or not) using the electrical activity of the brain.  This measure was used to control an autopilot facility during simulated flight deck operation.  If the person was paying attention, they were allowed to use the autopilot; if attention lapsed, the autopilot was switched off – therefore, prompting the pilot into manual control in order to re-engage with the task.

Physiological computing was also used by MIT Media Lab during their investigations into affective computing.  These researchers were interested in how psychophysiological data could represent the emotional status of the user – and enable the computer to respond to user emotion, for example by offering help if the user was irritated by the system.

Physiological computing has been applied to a range of software applications and technologies, such as: robotics (making robots aware of the psychological status of their human co-workers), telemedicine (using physiological data to diagnose both health and psychological state), computer-based learning (monitoring the attention and emotions of the student) and computer games.

4.  Is the Wii an example of physiological computing?
In a way.  The Wii monitors movement and translates that movement into a control input in the same way as a mouse.  Physiological computing, as defined here, is quite different.  First of all, these systems focus on hidden psychological states rather than obvious physical movements.  Secondly, the user doesn’t have to move or do anything to provide input to a physiological computing system.  What physiological computing does is monitor “hidden” aspects of behaviour.

5.  How is physiological computing different from Brain-Computer Interfaces?
Brain-Computer Interfaces (BCI) are a category of system where the user self-regulates their physiology in order to provide input control to a computer system.  For example, a user may self-regulate activity in the EEG (electroencephalogram – electrical activity of the brain) in order to move a cursor on the computer screen.  Effectively, BCIs offer an alternative to conventional input devices, such as the keyboard or mouse, which is particularly useful for people with disabilities.

There is some overlap between physiological computing and BCIs, but also some important differences.  The physiological computing approach has been compared to “wiretapping” in the sense that it monitors changes in user psychology without requiring the user to take explicit action.  Use of a BCI is associated with intentional control and requires a period of training prior to use.

6.  OK.  But the way you describe physiological computing sounds like a Biofeedback system….
There is some crossover between the approach used by physiological computing and biofeedback therapies.  But like BCI, biofeedback is designed to help people self-regulate their physiological activity, e.g. to reduce the rate of breathing for those who suffer from panic attacks.  There is some evidence that exposing a person to a physiological computing system may prompt improved self-regulation of physiology – simply because changes at the interface of a physiological computer may be meaningful to the user, e.g. if the computer does this, it means I’m stressed and need to relax.

The use of computer games to enhance biofeedback training represents the type of system that brings both physiological computing and biofeedback together.  For example, systems have been developed to treat Attention-Deficit Hyperactivity Disorder (ADHD) where children are trained to control brain activity by playing a computer game – see this link for more info.

7.  Can I buy a physiological computer?
You can buy systems that use psychophysiology for human-computer interaction.  For example, a number of headsets on the market, developed by Emotiv and Neurosky, can be used as an alternative to a keyboard or mouse.  At the moment, commercial systems fall mainly into the BCI application domain.  There are also a number of biofeedback games that fall into the category of physiological computing, such as The Wild Divine.

8.  What do you need in order to create a physiological computer?
In terms of hardware, you need psychophysiological sensors (such as a GSR sensor, heart rate monitoring apparatus or EEG electrodes) connected to an analogue-to-digital converter.  These digital signals can be streamed to a computer via ethernet.  On the software side, you need an API or equivalent to access the signals, and you’ll need to develop software that converts incoming physiological signals into a variable that can be used as a control input to an existing software package, such as a game.  Of course, none of this is straightforward, because you need to understand something about psychophysiological associations (i.e. how changes in physiology can be interpreted in psychological terms) in order to make your system work.
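The software step — turning a raw digitised signal into a usable control variable — can be sketched as below.  This is a minimal, hypothetical example: it assumes raw integer samples arriving from an A/D converter and min-max normalises each one against a rolling window, which is one simple scheme among many; the class name and scaling choices are mine, not any vendor's API.

```python
from collections import deque


class SignalToControl:
    """Convert raw digitised samples (e.g. GSR readings from an A/D converter)
    into a 0..1 control variable by normalising against a rolling window of
    recent samples.  Purely illustrative; real systems need artefact rejection,
    baselining and proper psychophysiological interpretation."""

    def __init__(self, window=50):
        # Keep only the most recent `window` samples as a moving reference range.
        self.history = deque(maxlen=window)

    def update(self, raw_sample):
        self.history.append(raw_sample)
        lo, hi = min(self.history), max(self.history)
        if hi == lo:
            return 0.5  # no variation observed yet; return a neutral value
        # Min-max normalise the latest sample against the recent window.
        return (raw_sample - lo) / (hi - lo)


# Feed in a short stream of raw samples; `level` is the control variable.
converter = SignalToControl(window=4)
for sample in (512, 520, 530, 518):
    level = converter.update(sample)
```

The resulting 0..1 value could then be exposed to a game or other application as, say, an "arousal" axis — though, as the paragraph above notes, deciding what such a value actually means psychologically is the hard part.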

9.  Is it like anything I have experienced?
That’s hard to say because there isn’t very much apparatus like this generally available.  If you’ve ever worn ECG sensors in either a clinical or sporting setting, you’ll know what it’s like to see your physiological activity “mirrored” in this way.  That’s one aspect.  The closest equivalent is biofeedback, where physiological data is represented as a visual display or a sound in real-time, but biofeedback is relatively specialised and used mainly to treat clinical problems.

10.  A lot of the technology involved sounds ‘medical’. Is this something hospitals would use?
The sensor technology is widely used by medical professionals to diagnose physiological problems and to monitor physiological activity.  Physiological computing represents an attempt to bring this technology to a more mainstream population by using the same monitoring technology to improve human-computer interaction.  In order to do this, it’s important to move the sensor technology from the static systems where the person is tethered by wires (as used by hospitals) to mobile, lightweight sensor apparatus that people can wear comfortably and unhindered as they work and play.

11.  Who is working on this stuff?
Physiological computing is inherently multidisciplinary.  The business of deciding which signals to use and how they represent the psychological state of the user is the domain of psychophysiology (i.e. inferring psychological significance from physiological signals).  Real-time data analysis falls into the area of signal processing that can involve professionals with backgrounds in computing, mathematics and engineering.  Designing wearable sensor apparatus capable of delivering good signals outside of the lab or clinical environment is of interest to people working in engineering and telemedicine.  Deciding how to use psychophysiological signals to drive real-time adaptation is the domain of computer scientists, particularly those interested in human-computer interaction and human factors.

12.  What can a physiological computer allow me to do that is new?
Physiological computing has the potential to offer a new scenario for how we communicate with computers.  At the moment, human-computer communication is asymmetrical with respect to information exchange.  Your computer can tell you lots of things about itself, such as memory usage and download speed, but it is essentially in the dark about the person on the other side of the interaction.  That’s why, when the computer tries to ‘second-guess’ the next thing you want to do, it normally gets it wrong, e.g. the Microsoft paperclip.  By allowing the computer to access a representation of the user state, we open up the possibility of symmetrical human-computer interaction – where ‘smart’ systems adapt themselves to user behaviour in a way that’s both intuitive and timely.  In theory at least, we get help from the computer when we really need it.  If the computer game is boring, the software knows to make the game more challenging.  More than this, by making the computer aware of our internal state, we allow software to personalise its performance to each individual user.
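As an illustration of this kind of symmetrical adaptation, a game could adjust its difficulty from a real-time engagement index.  The policy and thresholds below are purely hypothetical — a sketch of the idea, not a recipe from any actual system.

```python
def adjust_difficulty(engagement, difficulty, low=0.3, high=0.8, step=1):
    """Adapt game difficulty from a 0..1 engagement index.

    One possible (illustrative) policy: if engagement drops too low, the game
    may be boring, so make it more challenging; if engagement is very high,
    the player may be overloaded, so ease off; otherwise leave it alone.
    """
    if engagement < low:
        return difficulty + step          # boredom: raise the challenge
    if engagement > high:
        return max(1, difficulty - step)  # possible overload: ease off
    return difficulty                     # engaged: no change needed


# A disengaged player (index 0.2) at difficulty 5 gets bumped to 6.
new_level = adjust_difficulty(engagement=0.2, difficulty=5)
```

Of course, the real challenge is producing a trustworthy engagement index in the first place — the adaptation logic is trivial by comparison.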

13.  Will these systems be able to read my mind?
Psychophysiological measures can provide an indication of a person’s emotional status.  For instance, they can indicate whether you are alert or tired, or whether you are relaxed or tense.  There is some evidence that they can distinguish between positive and negative mood states.  The same measures can also capture whether a person is mentally engaged with a task or not.  Whether this counts as ‘reading your mind’ depends on your definition.  The system would not be able to diagnose whether you were thinking about making a grilled cheese sandwich or a salad for lunch.

14.  What about the privacy of my data?
Good question.  Physiological computing inevitably involves a sustained period of monitoring the user.  This information is, by definition, highly sensitive.  An intruder could monitor the ebb and flow of user mood over a period of time.  If the intruder could access software activity as well as physiology, he or she could determine whether this web site or document elicited a certain reaction from the user or not.  Most of us regard our unexpressed emotional responses as personal and private information.  In addition, data collected via physiological computing could potentially be used to indicate medical conditions such as high blood pressure or heart arrhythmia.  Privacy and data protection are huge issues for this kind of technology.  It is important that the user exercises ultimate control with respect to: (1) what is being measured, (2) where it is being stored, and (3) who has access to that information.

15.  Where can I find out more?
There are a number of written and online sources regarding physiological computing.  Almost all have been written for an academic audience.  Here are a number of review articles:

Allanson, J. (2002, March). Electrophysiologically interactive computer systems. IEEE Computer.
Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with Computers, 21, 133-145.
Gilleade, K. M., Dix, A., & Allanson, J. (2005). Affective videogames and modes of affective gaming: Assist me, challenge me, emote me. In Proceedings of DiGRA 2005.
Picard, R. W., & Klein, J. (2002). Computers that recognise and respond to user emotion: Theoretical and practical implications. Interacting with Computers, 14, 141-169.


Psych-Profiling in Games

The Wired games blog has an article about the next Wii-enabled instalment of survival-horror classic Silent Hill coming later in the year.  Full article is here.  A couple of paragraphs at the end about psych-profiling the players caught my attention, which I’ve pasted below.  The basic idea is that software monitors behavioural responses to the environment and adapts the gaming software accordingly.  My guess is that it’s not as subtle as the creators claim below.  IMO, here is an application crying out for the physiological computing approach.  Imagine if we could develop a player profile based on both overt behavioural responses and covert psychophysiological reactions to different events.  The more complexity you can work into your player profile, the more subtlety and personalisation can be achieved by software adaptation.  Of course, as usual, this kind of probing of player experience comes with a range of data protection issues.  If current events surrounding software privacy (e.g. Facebook, Phorm) are anything to go by, this is likely to be even more of an issue for future systems.

“The way that (most) games deal with interactivity can be quite simple and dull,” says Barlow. “You’re the big barbarian hero, do you want to save the maiden or not? Do you want to be good or evil? It’s slightly childish. The idea behind the psych profile is that the game is constantly monitoring what the player is doing, and it creates a very deep set of data around that, and every element of the game is changed and varied.”  Barlow and Hulett wouldn’t talk, at this early stage, about what sorts of things might change due to how you play the game, or what kind of data the game collects about you as you play. In the trailer that Konami showed, a character flashed between two very different physical appearances — that could be one of the things that changes.  The psych profile also sounds slightly sneaky. You won’t necessarily know that things have changed based on your gameplay style, says Hulett: “When you go online and talk about it with your friends, they wouldn’t know what you were talking about.”

“We’re trying to play on subconscious things. Pick up on things that you don’t know you’re giving away,” says Barlow.


The European Future Technologies Conference

The European Future Tech conference has the catchy title “Science Beyond Fiction” and is organised by the Future & Emerging Technologies (FET) division of the European Commission.  I’m involved in the REFLECT project and we’re doing a conference session about our work on 22nd April.


Manipulating vs. Mirroring

In preparing a “futuristic” talk about physiological computing, I’m pondering how a system might adapt itself to physiological data indicating that the user just got upset or bored or exasperated.  In the past, I’ve focused on the Gilleade et al. (2005) classification where the system may assist the user, challenge the user or emote the user.  In my view, whether these adaptations are overt or covert, what the system is attempting to do is manipulate the state of the user in a desired direction (generally to preserve task engagement and minimise those states that may disrupt engagement).  On the other hand, the system could simply mirror the psychological state of the user.  This mirroring approach comes in two categories.  First, to mimic the state of the user in order to convey empathy; for example, the RoCo project at MIT.  Alternatively, the system could mirror the state of the user using a biofeedback-type display in order to increase self-awareness and promote self-regulation.  The distinction between mirroring and manipulating is fairly subtle.  Adaptive responses designed to manipulate will also act as mirrors once the user cottons on to the mechanics of system design.
