Affective Comp/BCI workshop – deadline extension

The workshop on affective computing and BCI in Amsterdam this September has extended its deadline to 22nd June for all papers.  The workshop website is here

Share This:

Neurofeedback in Education

FutureLab have published a discussion paper entitled “Neurofeedback: is there a potential for use in education?”  It’s interesting to read a report devoted to the practical uses of neurofeedback for non-clinical populations.  In short, the report covers definitions of neurofeedback and example systems (including EEG-based games like Mindball and MindFlex) as background.  Three potential uses of neurofeedback are then considered: training for sports performance, training for artistic performance and training to treat ADHD.  The report doesn’t draw any firm conclusions, as might be expected given the absence of systematic research programmes in education.  Aside from flagging up a number of issues (intrusion, reliability, expense), it’s obvious that we don’t know how these techniques are best employed in an educational environment: how long do students need to use them?  What kind of EEG changes are important?  How might neurofeedback be combined with other training techniques?

As I see it, there are a number of distinct application domains to be considered: (1) neurofeedback to shift into the desired psychological state prior to a learning experience or examination (drawn from sports neurofeedback), (2) adapting educational software in real time to keep the learner motivated (avoiding disengagement or boredom), and (3) using biofeedback games to teach children about biological systems (self-regulation exercises plus a human biology practical).  I’m staying with non-clinical applications here, but obviously the same approaches may be applied to ADHD.

(1) and (3) above both correspond to a traditional biofeedback paradigm where the user works with the processed biological signal to develop a degree of self-regulation, which will hopefully transfer with practice.  (2) is more interesting in my opinion; in this case, the software is being adapted in order to personalise and optimise the learning process for that particular individual.  In other words, an efficient psychological state for learning is being created in situ by dynamic software adaptation.  This approach isn’t as good for encouraging self-regulatory strategies as traditional biofeedback, but I believe it is more potent for optimising the learning process itself.

Share This:

Formalising the unformalisable

Research into affective computing has prompted a question from some in the HCI community about formalising the unformalisable.  This is articulated in this 2005 paper by Kirsten Boehner and colleagues.  In essence, the argument goes like this: given that emotion and cognition are embodied biopsychological phenomena, can we ever really “transmit” the experience to a computer?  Secondly, if we try to convey emotions to a computer, don’t we just trivialise the experience by converting it into another type of cold, quantified information?  Finally, hasn’t the computing community already had its fingers burned by attempts to have machines replicate cognitive phenomena with very little to show for it (e.g. AI research in the 1980s)?

OK.  The first argument seems spurious to me.  Physiological computing or affective computing will never transmit an exact representation of private psychological events.  That’s just setting the bar too high.  What physiological computing can do is operationalise the psychological experience, i.e. to represent a psychological event or continuum in a quantified, objective fashion that should be meaningfully associated with the experience of that psychological event.  As you can see, we’re getting into deep waters already here.  The second argument is undeniable but I don’t understand why it is a criticism.  Of course we are taking an experience that is private, personal and subjective and converting it into numbers.  But that’s what the process of psychophysiological measurement is all about – moving from the realm of experience to the realm of quantified representation.  After all, if you studied an ECG trace of a person in the midst of a panic attack, you wouldn’t expect to experience a panic attack yourself, would you?  Besides, converting emotions into numbers is the only way a computer has to represent psychological status.

As for the last argument, I’m on unfamiliar ground here, but I hope the HCI community can learn from past mistakes; specifically, being too literal and unrealistically ambitious.  Unfortunately the affective computing debate sometimes seems to run down these well-trodden paths.  I’ve read papers where researchers ponder how computers will ‘feel’ emotions or whether the whole notion of emotional computing is an oxymoron.  Getting computers to represent the psychological status of users is a relative business that needs to take a couple of baby steps before we try to run.

Share This:

CHI workshop 2005

Just to show how out of touch I am with CHI stuff, I stumbled upon a workshop entitled “Evaluating Affective Interfaces – Innovative Approaches” this afternoon – only four years after the actual event.  Here’s a link to the web page with details of all papers.

Share This:

Designing for the gullible

There’s a nice article in today’s Guardian by Charles Arthur regarding user gullibility in the face of technological systems.  In this case, he’s talking about the voice risk analysis (VRA) software used by local councils and insurance companies to detect fraud (see related article by the same author), which performs fairly poorly when evaluated, but is reckoned by those bureaucrats who purchased the system to be a huge money-saver.  The way it works is this: the operator receives a probability that the claimant is lying (based on “brain traces in the voice” – in reality, probably changes in the fundamental frequency and pitch of the voice) and, on this basis, may elect to ask more detailed questions.

Charles Arthur makes the point that we’re naive and gullible when faced with a technological diagnosis.  And this is a fair point, whether it’s the voice analysis system or a physiological computing system providing feedback that you’re happy or tired or anxious.  Why do we tend to yield to computerised diagnosis?  In my view, you can blame science for that – in our positivist culture, cold objective numbers will always trump warm subjective introspection.  The first experimental psychologist, Wilhelm Wundt (1832-1920), pointed to this dichotomy when he distinguished between mediated and unmediated consciousness.  The latter is linked to introspection whereas the former demands the intervention of an instrument or technology.  If you go outside on an icy day and say to yourself “it’s cold today” – your consciousness is unmediated.  If you supplement this insight by reading a thermometer – “wow, two degrees below zero” – that’s mediated consciousness.  One is broadly true from that person’s perspective whereas the other is precise from the point of view of almost anyone.

The main point of today’s article is that we tend to trust technological diagnosis even when the scientific evidence supporting system performance is flawed (as is claimed in the case of the VRA system).  Again, true enough – but in fairness, most users of the VRA didn’t get the chance to review the system evaluation data.  The staff are trained to believe in the system by the company rep who sold it to them and trained them how to use it.  From the perspective of the customers, insurance staff may have suddenly started to ask them a lot of detailed questions, which indicated their stories were not believed, which probably made the customers agitated and anxious, thereby raising the pitch of their voices and turning them from possibles into definites.  The VRA system works very well in this context because nobody really knew how it worked, or even whether it worked.

What does all this mean for physiological computing?  First of all, system designers and users must accept that psychophysiological measurement will never give a perfect, isomorphic, one-to-one model of human experience.  The system builds a model of the user state, not a perfect representation.  Given this restriction, system designers must be clever in terms of providing feedback to the user.  Explicit and continuous feedback from the system is likely to undermine the credibility of the system in the eyes of the user.  Users of physiological computing systems must be sufficiently informed to understand that feedback from the system is an educated assessment.

The construction of physiological computing systems is a bridge-building exercise in some ways – a link between the nervous system and the computer chip.  Unlike similar constructions, this bridge is unlikely to ever meet in the middle.  For that to happen, the user must rely on his or her gullibility to make the necessary leap of faith to close the circuit.  Unrealistic expectations will lead to eventual disappointment and disillusionment; conservative cynicism and suspicion will leave the whole physiological computing concept stranded at the starting gate.  It’s up to designers to build interfaces that lead the user down the middle path.

Share This:

Physiological Computing F.A.Q.

This post is out of date, please see the dedicated FAQ page for the latest revisions.

1.  What is physiological computing?

Physiological Computing is a term used to describe any computing system that uses real-time physiological data as an input stream to control the user interface.  A physiological computing system takes psychophysiological information from the user, such as heart rate or brain activity, and uses these data to make the software respond in real-time.  The development of physiological computing is a multidisciplinary field of research involving contributions from psychology, neuroscience, engineering, & computer science.

2.  How does physiological computing work?

Physiological computing systems collect physiological signals, analyse them in real-time and use this analysis as an input for computer control.  This cycle of data collection, analysis, interpretation is encapsulated within a biocybernetic control loop.

This loop describes how eye movements may be captured and translated into up/down and left/right commands for cursor control.  The same flow of information can be used to represent how changes in electrocortical activity (EEG) of the brain can be used to control the movement of an avatar in a virtual world or to activate/deactivate system automation.  With respect to an affective computing application, a change in physiological activity, such as increased blood pressure, may indicate higher levels of frustration and the system may respond with help information.  The same cycle of collection-analysis-translation-response is apparent.  Alternatively, physiological data may be logged and simply represented to the user or a medical professional; this kind of ambulatory monitoring doesn’t involve human-computer communication but is concerned with the enhancement of human-human interaction.
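The collection-analysis-translation-response cycle can be sketched in a few lines of Python.  This is a minimal illustration, not a real implementation: the function names, the window size, the threshold and the “signal” values are all hypothetical, standing in for a genuine psychophysiological measure such as blood pressure normalised to a 0-1 range.

```python
import statistics

def classify_state(signal_window, threshold=0.7):
    """Translate a window of raw samples into a coarse user state."""
    level = statistics.mean(signal_window)          # analysis
    return "frustrated" if level > threshold else "calm"   # translation

def biocybernetic_loop(samples, window_size=5):
    """Collect -> analyse -> translate -> respond, one window at a time."""
    responses = []
    window = []
    for sample in samples:
        window.append(sample)                       # collection
        if len(window) == window_size:
            state = classify_state(window)
            # response: adapt the interface to the inferred state
            responses.append("offer_help" if state == "frustrated" else "no_change")
            window = []
    return responses

# A fabricated stream: a calm window followed by a 'frustrated' one.
stream = [0.2, 0.3, 0.2, 0.25, 0.3,
          0.8, 0.9, 0.85, 0.9, 0.95]
print(biocybernetic_loop(stream))  # -> ['no_change', 'offer_help']
```

The same skeleton covers the examples above: only the sensor feeding the loop and the response at the end change between a BCI cursor, an avatar controller and an affective help system.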

3.  Give me some examples.
Researchers became interested in physiological computing in the 1990s.  A group based at NASA developed a system that measured user engagement (whether the person was paying attention or not) using the electrical activity of the brain.  This measure was used to control an autopilot facility during simulated flight deck operation.  If the person was paying attention, they were allowed to use the autopilot; if attention lapsed, the autopilot was switched off – therefore, prompting the pilot into manual control in order to re-engage with the task.

Physiological computing was also used by MIT Media Lab during their investigations into affective computing.  These researchers were interested in how psychophysiological data could represent the emotional status of the user – and enable the computer to respond to user emotion, for example by offering help if the user was irritated by the system.

Physiological computing has been applied to a range of software applications and technologies, such as: robotics (making robots aware of the psychological status of their human co-workers), telemedicine (using physiological data to diagnose both health and psychological state), computer-based learning (monitoring the attention and emotions of the student) and computer games.

4.  Is the Wii an example of physiological computing?
In a way.  The Wii monitors movement and translates that movement into a control input in the same way as a mouse.  Physiological computing, as defined here, is quite different.  First of all, these systems focus on hidden psychological states rather than obvious physical movements.  Secondly, the user doesn’t have to move or do anything to provide input to a physiological computing system.  What physiological computing does is monitor “hidden” aspects of behaviour.

5.  How is physiological computing different from Brain-Computer Interfaces?
Brain-Computer Interfaces (BCI) are a category of system where the user self-regulates their physiology in order to provide input control to a computer system.  For example, a user may self-regulate activity in the EEG (electroencephalogram – the electrical activity of the brain) in order to move a cursor on the computer screen.  Effectively, BCIs offer an alternative to conventional input devices, such as the keyboard or mouse, which is particularly useful for people with disabilities.

There is some overlap between physiological computing and BCIs, but also some important differences.  The physiological computing approach has been compared to “wiretapping” in the sense that it monitors changes in user psychology without requiring the user to take explicit action.  Use of a BCI is associated with intentional control and requires a period of training prior to use.

6.  OK.  But the way you describe physiological computing sounds like a Biofeedback system….
There is some crossover between the approach used by physiological computing and biofeedback therapies.  But like BCI, biofeedback is designed to help people self-regulate their physiological activity, i.e. to reduce the rate of breathing for those who suffer from panic attacks.  There is some evidence that exposing a person to a physiological computing system may prompt improved self-regulation of physiology – simply because changes at the interface of a physiological computer may be meaningful to the user, i.e. if the computer does this, it means I’m stressed and need to relax.

The use of computer games to enhance biofeedback training represents the type of system that brings both physiological computing and biofeedback together.  For example, systems have been developed to treat Attention-Deficit Hyperactivity Disorder (ADHD) where children are trained to control brain activity by playing a computer game – see this link for more info.

7.  Can I buy a physiological computer?
You can buy systems that use psychophysiology for human-computer interaction.  For example, a number of headsets developed by Emotiv and Neurosky are on the market to be used as an alternative to a keyboard or mouse.  At the moment, commercial systems fall mainly into the BCI application domain.  There are also a number of biofeedback games that fall into the category of physiological computing, such as The Wild Divine.

8.  What do you need in order to create a physiological computer?
In terms of hardware, you need psychophysiological sensors (such as a GSR sensor, heart rate monitoring apparatus or EEG electrodes) connected to an analogue-to-digital converter.  These digital signals can be streamed to a computer via ethernet.  On the software side, you need an API or equivalent to access the signals, and you’ll need to develop software that converts incoming physiological signals into a variable that can be used as a potential control input to an existing software package, such as a game.  Of course, none of this is straightforward, because you need to understand something about psychophysiological associations (i.e. how changes in physiology can be interpreted in psychological terms) in order to make your system work.
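The conversion step – turning incoming physiological samples into a control variable – might look like the following sketch.  The function name, the linear mapping and the resting/maximum values are illustrative assumptions, not a standard algorithm; a real system would also need per-user calibration and artifact rejection.

```python
def arousal_index(current_hr, resting_hr, max_hr=180.0):
    """Map heart rate (bpm) onto a bounded 0-1 'arousal' control variable."""
    # Linear interpolation between the user's resting rate and a ceiling value.
    index = (current_hr - resting_hr) / (max_hr - resting_hr)
    # Clamp, so the consuming software always receives a value in [0, 1].
    return max(0.0, min(1.0, index))

# A user with a resting rate of 60 bpm, currently at 90 bpm:
print(arousal_index(90, 60))  # -> 0.25
```

The output of a function like this is the kind of variable that could then be fed to a game or adaptive interface as a control input.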

9.  Is it like anything I have experienced before?
That’s hard to say because there isn’t very much apparatus like this generally available.  If you’ve ever worn ECG sensors in either a clinical or sporting setting, you’ll know what it’s like to see your physiological activity “mirrored” in this way.  That’s one aspect.  The closest equivalent is biofeedback, where physiological data is represented as a visual display or a sound in real-time, but biofeedback is relatively specialised and used mainly to treat clinical problems.

10.  A lot of the technology involved sounds ‘medical’. Is this something hospitals would use?
The sensor technology is widely used by medical professionals to diagnose physiological problems and to monitor physiological activity.  Physiological computing represents an attempt to bring this technology to a more mainstream population by using the same monitoring technology to improve human-computer interaction.  In order to do this, it’s important to move the sensor technology from the static systems where the person is tethered by wires (as used by hospitals) to mobile, lightweight sensor apparatus that people can wear comfortably and unhindered as they work and play.

11.  Who is working on this stuff?
Physiological computing is inherently multidisciplinary.  The business of deciding which signals to use and how they represent the psychological state of the user is the domain of psychophysiology (i.e. inferring psychological significance from physiological signals).  Real-time data analysis falls into the area of signal processing that can involve professionals with backgrounds in computing, mathematics and engineering.  Designing wearable sensor apparatus capable of delivering good signals outside of the lab or clinical environment is of interest to people working in engineering and telemedicine.  Deciding how to use psychophysiological signals to drive real-time adaptation is the domain of computer scientists, particularly those interested in human-computer interaction and human factors.

12.  What can a physiological computer allow me to do that is new?
Physiological computing has the potential to offer a new scenario for how we communicate with computers.  At the moment, human-computer communication is asymmetrical with respect to information exchange.  Your computer can tell you lots of things about itself, such as memory usage, download speed etc.  But the computer is essentially in the dark about the person on the other side of the interaction.  That’s why, when the computer tries to ‘second-guess’ the next thing you want to do, it normally gets it wrong, e.g. the Microsoft paperclip.  By allowing the computer to access a representation of the user state, we open up the possibility of symmetrical human-computer interaction – where ‘smart’ systems adapt themselves to user behaviour in a way that’s both intuitive and timely.  Therefore, in theory at least, we get help from the computer when we really need it.  If the computer game is boring, the software knows to make the game more challenging.  More than this, by making the computer aware of our internal state, we allow software to personalise its performance to that person with a degree of accuracy.
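The game-difficulty example reduces to a simple control rule, sketched below.  The thresholds, step size and 0-1 scales are invented for illustration; a real adaptive game would tune these empirically for each player.

```python
def adjust_difficulty(difficulty, engagement, low=0.3, high=0.8, step=0.1):
    """Nudge a 0-1 difficulty setting to keep a 0-1 engagement estimate in range."""
    if engagement < low:        # player is bored: make the game more challenging
        difficulty += step
    elif engagement > high:     # player is overloaded: ease off
        difficulty -= step
    return max(0.0, min(1.0, difficulty))

# A bored player (engagement 0.2) gets a harder game:
print(adjust_difficulty(0.5, 0.2))  # -> 0.6
```

Called once per analysis window, a rule like this keeps the game hovering around the level where the player stays engaged.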

13.  Will these systems be able to read my mind?
Psychophysiological measures can provide an indication of a person’s emotional status.  For instance, they can indicate whether you are alert or tired, or whether you are relaxed or tense.  There is some evidence that they can distinguish between positive and negative mood states.  The same measures can also capture whether a person is mentally engaged with a task or not.  Whether this counts as ‘reading your mind’ depends on your definition.  The system would not be able to diagnose whether you were thinking about making a grilled cheese sandwich or a salad for lunch.

14.  What about the privacy of my data?
Good question.  Physiological computing inevitably involves a sustained period of monitoring the user.  This information is, by definition, highly sensitive.  An intruder could monitor the ebb and flow of user mood over a period of time.  If the intruder could access software activity as well as physiology, he or she could determine whether this web site or document elicited a certain reaction from the user or not.  Most of us regard our unexpressed emotional responses as personal and private information.  In addition, data collected via physiological computing could potentially be used to indicate medical conditions such as high blood pressure or heart arrhythmia.  Privacy and data protection are huge issues for this kind of technology.  It is important that the user exercises ultimate control with respect to: (1) what is being measured, (2) where it is being stored, and (3) who has access to that information.

15.  Where can I find out more?
There are a number of written and online sources regarding physiological computing.  Almost all have been written for an academic audience.  Here are a number of review articles:

Allanson, J. (2002, March). Electrophysiologically interactive computer systems. IEEE Magazine.
Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with Computers, 21, 133-145.
Gilleade, K. M., Dix, A., & Allanson, J. (2005). Affective videogames and modes of affective gaming: Assist me, challenge me, emote me. Paper presented at the Proceedings of DiGRA 2005.
Picard, R. W., & Klein, J. (2002). Computers that recognise and respond to user emotion: Theoretical and practical implications. Interacting with Computers, 14, 141-169.

Share This:

Mobile Heart Health

There’s a short summary of a project called ‘Mobile Heart Health’ in the latest issue of IEEE Pervasive Computing (April-June 2009).  The project was conducted at Intel Labs and uses an ambulatory ECG sensor to connect to a mobile telephone.  The ECG monitors heart rate variability; if high stress is detected, the user is prompted by the phone to run through a number of relaxation therapies (controlled breathing) to provide ‘just-in-time’ stress management.  It’s an interesting project, both in conceptual terms (I imagine pervasive monitoring and stress management would be particularly useful for cardiac outpatients) and in terms of interface design (how to alert the stressed user to their stressed state without making them even more stressed).  Here’s a link to the magazine which includes a downloadable pdf of the article.

Share This:

Psych-Profiling in Games

The Wired games blog has an article about the next Wii-enabled installment of survival-horror classic Silent Hill coming later in the year.  Full article is here.  A couple of paragraphs at the end about psych-profiling the players caught my attention, which I’ve pasted below.  The basic idea is that software monitors behavioural responses to the environment and adapts the gaming software accordingly.  My guess is that it’s not as subtle as the creators claim below.  IMO, here is an application crying out for the physiological computing approach.  Imagine if we could develop a player profile based on both overt behavioural responses and covert psychophysiological reactions to different events.  The more complexity you can work into your player profile, the more subtlety and personalisation can be achieved by software adaptation.  Of course, as usual, this kind of probing of player experience comes with a range of data protection issues.  If current events surrounding software privacy (e.g. Facebook, Phorm) are anything to go by, this is likely to be even more of an issue for future systems.

“The way that (most) games deal with interactivity can be quite simple and dull,” says Barlow. “You’re the big barbarian hero, do you want to save the maiden or not? Do you want to be good or evil? It’s slightly childish. The idea behind the psych profile is that the game is constantly monitoring what the player is doing, and it creates a very deep set of data around that, and every element of the game is changed and varied.”  Barlow and Hulett wouldn’t talk, at this early stage, about what sorts of things might change due to how you play the game, or what kind of data the game collects about you as you play. In the trailer that Konami showed, a character flashed between two very different physical appearances — that could be one of the things that changes.  The psych profile also sounds slightly sneaky. You won’t necessarily know that things have changed based on your gameplay style, says Hulett: “When you go online and talk about it with your friends, they wouldn’t know what you were talking about.”

“We’re trying to play on subconscious things. Pick up on things that you don’t know you’re giving away,” says Barlow.

Share This:

The European Future Technologies Conference

The European Future Tech conference has the catchy title “Science Beyond Fiction” and is organised by the Future & Emerging Technologies (FET) division of the European Commission.  I’m involved in the REFLECT project and we’re doing a conference session about our work on 22nd April.

Share This:

Adaptive & Emergent Behaviour and Complex Systems’09

The PERADA project has asked me to talk about Biocybernetic Adaptation as part of a half-day Pervasive Computing workshop at the AISB conference in Edinburgh.

Share This: