This is your brain giving up

Like a lot of people, I came to the area of physiological computing via affective computing. The early work I read placed enormous emphasis on how systems might distinguish different categories of emotion, e.g. frustration vs. happiness. This is important for some applications, but I was most interested in user states related to task performance, specifically those states that might precede and predict a breakdown of performance. The latter can take several forms: the quality of performance can collapse because the task is too complex to figure out, or because you're too tired, or too drunk, etc. What really interested me was how performance collapsed when people simply gave up or 'exhibited insufficient motivation', as the psychology textbooks would say.

People can give up for all kinds of reasons: they may be insufficiently challenged (i.e. bored), they may be frustrated because the task is too hard, or they may simply have something better to do. Predicting motivation or task engagement seems very important to me for biocybernetic adaptation applications, such as games and educational software. Several psychology research groups have looked at this issue by studying the psychophysiological changes that accompany changes in motivation and responses to increased task demand. A group led by Alan Gevins performed a number of studies in which task demand was incrementally ramped up; they found that theta activity in the EEG increased in line with task demand, and that this increase was specific to the frontal-central area of the brain.

We partially replicated one of Gevins' studies last year and found support for changes in frontal theta. We tried to make the task so difficult that people would give up, but we were not completely successful (when you pay people to come to your lab, they tend to try really hard). So we ran a second study, this time making the 'impossible' version of the task genuinely impossible. The idea was to expose people to low, high and extremely high levels of memory load. To make the task impossible, we also demanded that participants hit a minimum level of performance, which was modest for the low demand condition and insanely high for the extremely high demand condition. We also had our participants do each task on two occasions: once with the chance to win cash incentives and once without.

The results for frontal theta are shown in the graphic below. You can clearly see the frontal-central location of the activity (NB: the redder the area, the more theta activity was present). What's particularly interesting, and especially clear in the incentive condition (top row of the graphic), is that our participants showed reduced theta activity when they thought they didn't have a chance. As one might suspect, task engagement includes a strong component of volition, and brain activity should reflect the decision to give up and disengage from the task. We'll be following up this work to investigate how we might use the ebb and flow of frontal theta to capture task engagement and feed it into a real-time system.
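As a thought experiment, here's a minimal Python sketch of how such a real-time index might be computed from a frontal-midline EEG channel. The sampling rate, windowing scheme and baseline normalisation are my own assumptions for illustration, not a description of our actual analysis pipeline.

```python
# Minimal sketch of a real-time frontal theta tracker (illustrative only).
# Assumes a single frontal-midline EEG channel (e.g. Fz) sampled at 256 Hz;
# the baseline-normalisation scheme is an assumption, not our lab pipeline.
import numpy as np
from scipy.signal import welch

FS = 256                 # sampling rate in Hz (assumed)
THETA_BAND = (4.0, 7.0)  # theta frequency range in Hz

def theta_power(window: np.ndarray) -> float:
    """Mean theta-band power for one window of EEG samples."""
    freqs, psd = welch(window, fs=FS, nperseg=min(len(window), 2 * FS))
    mask = (freqs >= THETA_BAND[0]) & (freqs <= THETA_BAND[1])
    return float(np.mean(psd[mask]))

def engagement_index(window: np.ndarray, resting_baseline: float) -> float:
    """Theta power relative to rest; a sustained fall in this index is
    the kind of pattern that might mark the decision to give up."""
    return theta_power(window) / resting_baseline
```

In practice the resting baseline would be estimated per participant, and the index smoothed over successive windows before any adaptive system acted on it.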

What’s in a name?

I attended a workshop earlier this year entitled aBCI (affective Brain Computer Interfaces) as part of the ACII conference in Amsterdam. In the evening we discussed what we should call this area of research on systems that use real-time psychophysiology as an input to a computing system. I've always called it 'Physiological Computing', but some thought this label was too vague and generic (which is a fair criticism). Others were in favour of something that involved BCI in the title, such as Thorsten Zander's definitions of passive vs. active BCI.

As the debate went on, it seemed that what we were discussing was an exercise in 'branding' as opposed to literal definition. There's nothing wrong with that; it's important that nascent areas of investigation represent themselves in a way that is attractive to potential sponsors. However, I have three main objections to the BCI label as an umbrella term for this research: (1) BCI research is identified with EEG measures, (2) BCI remains a highly specialised domain with the vast majority of research conducted on clinical groups, and (3) BCI is associated with the use of psychophysiology as a substitute for input control devices. In other words, BCI isn't sufficiently generic to cover autonomic measures, real-time adaptation, muscle interfaces, health monitoring, etc.

My favoured term is vague and generic, but it is very inclusive. In my opinion, the primary obstacle facing the development of these systems is the fractured nature of the research area. Research on these systems is multidisciplinary, involving computer science, psychology and engineering. A number of different system concepts are out there, such as BCI vs. concepts from affective computing. Some are intended to function as alternative forms of input control; others are designed to detect discrete psychological states. Some use autonomic variables as opposed to EEG measures; others try to combine psychophysiology with overt changes in behaviour. This diversity makes the area fun to work in, but it also makes it difficult to pin down. At this early stage there's an awful lot going on, and I think we need a generic label to fully exploit synergies and, most importantly, to make sure nothing gets ruled out.

Emotional HCI

Just read a very interesting and provocative paper entitled "How emotion is made and measured" by Kirsten Boehner and colleagues. The paper provides a counter-argument to the perspective that emotion should be measured/quantified/objectified in HCI and used as input to an affective computing system or evaluation methodology. Instead, they propose that emotion is a dynamic interaction that is socially constructed and culturally mediated. In other words, the experience of anger is not a score of 7 on a 10-point scale, fixed in time, but an unfolding, iterative process based upon beliefs, social norms, expectations, etc.

This argument seems fine in theory (to me) but difficult in practice. I get the distinct impression the authors are addressing the way emotion may be captured as part of an HCI evaluation methodology, but they go on to question the empirical approach in affective computing. In this part of the paper they choose their examples carefully; specifically, they focus on the category of 'mirroring' technology (see earlier post), wherein representations of affective states are conveyed to other humans via technology. The really interesting idea here is that emotional categories are not given by a machine intelligence (e.g. happy vs. sad vs. angry) but generated via an interactive process. For example, friends and colleagues provide the semantic categories used to classify the emotional state of the person. Or literal representations of facial expression (a web-cam shot, for instance) are provided alongside a text or email to give the receiver an emotional context that can be freely interpreted. This is a very interesting approach to how an affective computing system might provide feedback to its users. Furthermore, I think once affective computing systems are widely available, the interpretive element of the software may be adapted or adjusted via an interactive process of personalisation.

So the system provides an affective diagnosis as a first step, which is then refined and developed by the person, or even by others, as time goes by. Much like the way Amazon makes a series of recommendations based on your buying patterns that you can edit and tweak (if you have the time).
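To make that refinement loop concrete, here's a toy Python sketch with invented names; the classifier is a stand-in and the correction scheme is just one plausible design, not anything proposed by Boehner and colleagues.

```python
# Toy sketch of a 'diagnose first, refine later' affect loop.
# All names are hypothetical; _classify stands in for a real classifier.
from collections import Counter

class PersonalisedAffect:
    def __init__(self):
        self.corrections = []  # (system_label, user_label) pairs

    def diagnose(self, signal_features) -> str:
        """First-pass machine guess, overridden by the user's history."""
        label = self._classify(signal_features)
        remap = Counter(user for sys_label, user in self.corrections
                        if sys_label == label)
        # If the user has repeatedly relabelled this guess, defer to them.
        return remap.most_common(1)[0][0] if remap else label

    def correct(self, system_label: str, user_label: str) -> None:
        """Called when the user (or a friend) edits the suggested label."""
        self.corrections.append((system_label, user_label))

    def _classify(self, signal_features) -> str:
        return "neutral"  # stand-in for a real affect classifier
```

The point of the sketch is simply that the machine's first-pass label is treated as a suggestion, which the user's own history of corrections can override over time.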

My big problem with this paper was that a very interesting debate was framed as an either/or position. So, if you use psychophysiology to index emotion, you're disregarding the experience of the individual by using objective conceptualisations of that state. If you use self-report scales to quantify emotion, you're rationalising an unruly process by imposing a bespoke scheme of categorisation, etc. The perspective of the paper reminded me of the tiresome debate in psychology between objective/quantitative and subjective/qualitative methods over which delivers "the truth." I say 'tiresome' because I tend towards the perspectivist view that both approaches provide 'windows' on a phenomenon, each with its own advantages and disadvantages.

But it’s an interesting and provocative paper that gave me plenty to chew over.

Neurofeedback in Education

FutureLab have published a discussion paper entitled "Neurofeedback: is there a potential for use in education?" It's interesting to read a report devoted to the practical uses of neurofeedback for non-clinical populations. In short, the report covers definitions of neurofeedback and example systems (including EEG-based games like Mindball and MindFlex) as background, then considers three potential uses of neurofeedback: training for sports performance, training for artistic performance, and training to treat ADHD. The report doesn't draw any firm conclusions, as might be expected given the absence of systematic research programmes in education. Aside from flagging up a number of issues (intrusion, reliability, expense), it's obvious that we don't know how these techniques are best employed in an educational environment: how long do students need to use them? What kind of EEG changes are important? How might neurofeedback be combined with other training techniques?

As I see it, there are a number of distinct application domains to be considered: (1) neurofeedback to shift into the desired psychological state prior to a learning experience or examination (drawn from sports neurofeedback), (2) adapting educational software in real time to keep the learner motivated (and so avoid disengagement or boredom), and (3) biofeedback games that teach children about biological systems (self-regulation exercises plus a human biology practical). I'm staying with non-clinical applications here, but obviously the same approaches may be applied to ADHD.

(1) and (3) above both correspond to a traditional biofeedback paradigm in which the user works with the processed biological signal to develop a degree of self-regulation that will hopefully transfer with practice. (2) is more interesting in my opinion; in this case, the software is adapted in order to personalise and optimise the learning process for that particular individual. In other words, an efficient psychological state for learning is created in situ by dynamic software adaptation, as sketched below. This approach isn't as good for encouraging self-regulatory strategies as traditional biofeedback, but I believe it is more potent for optimising the learning process itself.
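For illustration, a minimal sketch of the control logic behind (2) might look like the following; the engagement index, target band and step size are all invented, and the mapping from index to difficulty is a design decision rather than a settled rule.

```python
# Illustrative biocybernetic control step (not a description of any
# existing system): nudge task demand to hold an engagement index
# inside a target band. Band limits and step size are assumptions.
def adapt_difficulty(difficulty: float, engagement: float,
                     low: float = 0.8, high: float = 1.2,
                     step: float = 0.05) -> float:
    """One adaptation step for educational software.

    Below the band we assume the learner is under-engaged (bored), so
    demand is raised; above it we assume overload, so demand is eased.
    """
    if engagement < low:
        difficulty += step
    elif engagement > high:
        difficulty -= step
    return min(1.0, max(0.0, difficulty))  # clamp to [0, 1]
```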

Formalising the unformalisable

Research into affective computing has prompted a question from some in the HCI community about formalising the unformalisable. This is articulated in this 2005 paper by Kirsten Boehner and colleagues. In essence, the argument goes like this: given that emotion and cognition are embodied biopsychological phenomena, can we ever really "transmit" the experience to a computer? Secondly, if we try to convey emotions to a computer, don't we just trivialise the experience by converting it into another type of cold, quantified information? Finally, hasn't the computing community already had its fingers burned by attempts to have machines replicate cognitive phenomena with very little to show for it (e.g. AI research in the 80s)?

OK. The first argument seems spurious to me. Physiological computing or affective computing will never transmit an exact representation of private psychological events; that's setting the bar too high. What physiological computing can do is operationalise the psychological experience, i.e. represent a psychological event or continuum in a quantified, objective fashion that is meaningfully associated with the experience of that event. As you can see, we're already getting into deep waters here. The second argument is undeniable, but I don't understand why it counts as a criticism. Of course we are taking an experience that is private, personal and subjective and converting it into numbers; that is what the process of psychophysiological measurement is all about: moving from the realm of experience to the realm of quantified representation. After all, if you studied the ECG trace of a person in the midst of a panic attack, you wouldn't expect to experience a panic attack yourself, would you? Besides, converting emotions into numbers is the only way a computer has to represent psychological status.

As for the last argument, I'm on unfamiliar ground here, but I hope the HCI community can learn from past mistakes, specifically the tendency to be too literal and unrealistically ambitious. Unfortunately, the affective computing debate sometimes seems to run down these well-trodden paths. I've read papers in which researchers ponder how computers will 'feel' emotions, or whether the whole notion of emotional computing is an oxymoron. Getting computers to represent the psychological status of users is a relative business that needs to take a couple of baby steps before we try to run.

CHI workshop 2005

Just to show how out of touch I am with CHI stuff, I stumbled upon a workshop entitled "evaluating affective interfaces – innovative approaches" this afternoon, only four years after the actual event. Here's a link to the web page with details of all the papers.

Mobile Heart Health

There's a short summary of a project called 'Mobile Heart Health' in the latest issue of IEEE Pervasive Computing (April-June 2009). The project was conducted at Intel Labs and uses an ambulatory ECG sensor connected to a mobile phone. The ECG monitors heart rate variability; if high stress is detected, the phone prompts the user to run through a number of relaxation therapies (controlled breathing) to provide 'just-in-time' stress management. It's an interesting project, both in conceptual terms (I imagine pervasive monitoring and stress management would be particularly useful for cardiac outpatients) and in terms of interface design (how do you alert a stressed user to their stressed state without making them even more stressed?). Here's a link to the magazine, which includes a downloadable pdf of the article.
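For flavour, here's a small Python sketch of the kind of HRV check such a system might perform. RMSSD is a standard short-term HRV measure, but the threshold and the prompt are invented for illustration and aren't taken from the Intel project.

```python
# Hypothetical HRV-based stress check (illustrative only; the threshold
# and prompt are assumptions, not details of the Mobile Heart Health app).
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive RR-interval differences (ms)."""
    return float(np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2)))

def check_stress(rr_intervals_ms: np.ndarray, threshold_ms: float = 20.0):
    """Low RMSSD (reduced vagal tone) is read here as possible stress."""
    if rmssd(rr_intervals_ms) < threshold_ms:
        return "Prompt user: start paced-breathing exercise"
    return None
```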
