Categories of Physiological Computing

In my last post I articulated a concern about how the name adopted by this field may drive the research in one direction or another. I've adopted the Physiological Computing (PC) label because it covers the widest range of possible systems. Whilst the PC label is broad, generic and probably vague, it accommodates a lot of different possibilities without getting into the tortured semantics of categories, sub-categories and sub-sub-categories.

I've defined PC as a computer system that uses real-time bio-electrical activity as input data. At one level, moving a mouse (or a Wii) with your hand represents a form of physiological computing, as do physical interfaces based on gestures, since both are ultimately based on muscle potentials. But that seems a little pedantic. In my view, the PC concept begins with Muscle Interfaces (e.g. eye movements) where the electrical activity of muscles is translated into gestures or movements in 2D space. Brain-Computer Interfaces (BCI) represent a second category where the electrical activity of the cortex is converted into input control. Biofeedback is the 'parent' of this category of technology, developed as a control device to train the user to manipulate the autonomic nervous system. By contrast, systems involving biocybernetic adaptation passively monitor spontaneous activity from the central nervous system and translate these signals into real-time software adaptation – most forms of affective computing fall into this category. Finally, we have the 'black box' category of ambulatory recording, where physiological data are continuously recorded and reviewed at some later point in time by the user or medical personnel.

I've tried to capture these different categories in the diagram below. The differences between the groupings lie on a continuum from overt, observable physical activity to covert changes in psychophysiology. Some are intended to function as explicit forms of intentional communication with continuous feedback, whilst others are implicit, with little intentionality on the part of the user. Also, there is huge overlap between the five categories of PC: most involve a component of biofeedback and all will eventually rely on ambulatory monitoring in order to function. What I've tried to do is sketch out the territory in the most inclusive way possible. This inclusive scheme also makes hybrid systems easier to imagine, e.g. BCI + biocybernetic adaptation or muscle interface + BCI – basically we have systems (2) and (3) designed as input control, either of which may be combined with (5) because it operates in a different way and at a different level of the HCI.

As usual, all comments welcome.

Five Categories of Physiological Computing



What’s in a name?

I attended a workshop earlier this year entitled aBCI (affective Brain-Computer Interfaces) as part of the ACII conference in Amsterdam. In the evening we discussed what we should call this area of research on systems that use real-time psychophysiology as an input to a computing system. I've always called it 'Physiological Computing', but some thought this label was too vague and generic (which is a fair criticism). Others were in favour of something that involved BCI in the title – such as Thorsten Zander's definitions of passive vs. active BCI.

As the debate went on, it seemed that what we were discussing was an exercise in 'branding' as opposed to literal definition. There's nothing wrong with that; it's important that nascent areas of investigation represent themselves in a way that is attractive to potential sponsors. However, I have three main objections to the BCI label as an umbrella term for this research: (1) BCI research is identified with EEG measures, (2) BCI remains a highly specialised domain with the vast majority of research conducted on clinical groups, and (3) BCI is associated with the use of psychophysiology as a substitute for input control devices. In other words, BCI isn't sufficiently generic to cope with autonomic measures, real-time adaptation, muscle interfaces, health monitoring etc.

My favoured term is vague and generic, but it is very inclusive. In my opinion, the primary obstacle facing the development of these systems is the fractured nature of the research area. Research on these systems is multidisciplinary, involving computer science, psychology and engineering. A number of different system concepts are out there, such as BCI vs. concepts from affective computing. Some are intended to function as alternative forms of input control, while others are designed to detect discrete psychological states. Others use autonomic variables as opposed to EEG measures, and some try to combine psychophysiology with overt changes in behaviour. This diversity makes the area fun to work in but also makes it difficult to pin down. At this early stage, there's an awful lot going on, and I think we need a generic label both to fully exploit synergies and, most importantly, to make sure nothing gets ruled out.


BIOSTEC 2010

A late addition to the conference list is BIOSIGNALS2010 – the 3rd International Conference on Bio-Inspired Systems and Signal Processing, to be held in Valencia in January 2010. The conference includes sessions on signal processing, wearable sensors and user interfaces. Full details here.


life logging + body blogging

This article in New Scientist prompts a short follow-up to my posts on body-blogging. The article describes a camera worn around the neck that takes a photograph every 30 seconds. The potential for this device to help people suffering from dementia and related problems is huge. At perhaps a more trivial level, the camera would be a useful addition to wearable physiological sensors (see previous posts on quantifying the self). If physiological data could be captured and averaged over 30-second intervals, these data could be paired with a still image and presented as a visual timeline. This would save the body blogger from having to manually tag everything; the image also provides a nice visual prompt for recall, so the person can speculate on how their location/activity/interactions caused changes in the body. Of course, it would also work as a great tool for research – particularly for stress research in the field.
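
As a rough illustration of how this pairing might work, here is a minimal sketch in Python using pandas. The file names, column names (timestamp, heart_rate, filename) and the 30-second binning are assumptions made for the sake of example, not the format of any actual device.

```python
import pandas as pd

def build_timeline(physio_csv: str, photo_index_csv: str) -> pd.DataFrame:
    """Average physiology over 30-second bins and attach the nearest photo to each bin."""
    # Hypothetical inputs: a physiological log and a photo index, each with a 'timestamp' column.
    physio = pd.read_csv(physio_csv, parse_dates=["timestamp"]).set_index("timestamp")
    photos = pd.read_csv(photo_index_csv, parse_dates=["timestamp"])  # columns: timestamp, filename

    # Average each physiological channel (e.g. heart rate, GSR) over 30-second intervals.
    binned = physio.resample("30s").mean()

    # Pair each 30-second bin with the photograph taken closest in time.
    timeline = pd.merge_asof(
        binned.reset_index(),
        photos.sort_values("timestamp"),
        on="timestamp",
        direction="nearest",
        tolerance=pd.Timedelta("30s"),
    )
    return timeline
```

Each row of the result is one 30-second slice of the day – averaged physiology plus the image that best represents it – which is essentially the visual timeline described above.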


Brain, Body and Bytes Workshop: CHI 2010

A workshop entitled "Brain, Body and Bytes" has been organised as part of CHI 2010 in Atlanta. Details are here. The same organisers have also set up a Facebook group.


quantifying the self (again)

I just watched this cool presentation about blogging self-report data on mood/lifestyle and looking at the relationship with health. My interest in this topic is tied up with the concept of body-blogging (i.e. recording physiological data using ambulatory systems) – see earlier post. What's nice about the idea of body-blogging is that it's implicit and doesn't require you to do anything extra, such as completing mood ratings or other self-reports. The fairly major downside to this approach comes in two varieties: (1) the technology to do it easily is still fairly expensive and the associated software is cumbersome to use (not that it's bad software, it's just designed for medical or research purposes), and (2) continuous physiology generates a huge amount of data – a single channel sampled at, say, 256 Hz produces over 20 million samples per day.

For the individual, this concept of self-tracking and self-quantifying is linked to increased self-awareness (learning how your body is influenced by everyday events), and with self-awareness come new strategies for self-regulation to minimise negative or harmful changes. My feeling is that there are certain times in our lives (e.g. following a serious illness or medical procedure) when we have a strong motivation to quantify and monitor our physiological patterns. However, I see a risk of that strategy tipping a person over into hypochondria if they feel particularly vulnerable.

At the level of the group, it's fascinating to see the seeds of a crowdsourcing idea in the above presentation. The idea is that people self-log over a period and share this information anonymously on the web. This activity creates a database that everyone can access and analyse, participants and researchers alike. I wonder if people would be as comfortable sharing heart rate or blood pressure data – provided it was submitted anonymously, I don't see why not.

There's enormous potential here for wearable physiological sensors to be combined with self-report logging and for both data sets to be pooled online. Obviously there is a fidelity mismatch here: physiological data can be recorded in milliseconds whilst self-report data is recorded in hours. But some clever software could be constructed to aggregate the physiology and put both data sets on the same time frame. The benefit of doing this for both researcher and participant is to explore the connections between (previously) unseen patterns of physiological responses and the experience of the individual/group/population.
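
The aggregation step might look something like the following sketch in Python/pandas. The hourly binning and the column names are assumptions for illustration; the point is simply to summarise the high-frequency physiology at the coarser time scale of the self-reports before joining the two.

```python
import pandas as pd

def align_physio_with_selfreport(physio: pd.DataFrame, selfreport: pd.DataFrame) -> pd.DataFrame:
    """Aggregate physiology to hourly summaries and join them with hourly self-report ratings.

    physio: indexed by timestamp, columns such as 'heart_rate' and 'gsr' (hypothetical names)
    selfreport: indexed by timestamp on the hour, columns such as 'mood' and 'stress'
    """
    # Summarise each physiological channel per hour (mean level plus variability).
    hourly = physio.resample("1h").agg(["mean", "std"])
    hourly.columns = ["_".join(col) for col in hourly.columns]  # flatten the MultiIndex column names

    # Keep only the hours for which both physiology and a self-report exist.
    return hourly.join(selfreport, how="inner")
```

The result is one row per hour with, for example, heart_rate_mean and heart_rate_std sitting alongside the mood rating for that hour – a format that both participants and researchers could plot and explore.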

For anyone who’s interested, here’s a link to another blog site containing a report from an event that focused on self-tracking technologies.


Audience Participation

A paper just published in IJHCS by Stevens et al. (link to abstract) describes how members of the audience used a PDA to register their emotional responses in real time during a number of dance performances. It's an interesting approach to studying how emotional responses may converge and diverge during particular sections of a performance. The PDA displayed a two-dimensional space in which emotion is represented by valence and activation (i.e. Russell's circumplex model). The participants were required to indicate their position within this space with a stylus at a rate of two readings per second!

That sounds like a lot of work, so how about a physiological computing version where valence and activation are operationalised with real-time psychophysiology, e.g. a corrugator/zygomaticus reading for valence and blood pressure/GSR/heart rate for activation? Provided that the person remained fairly stationary, it could deliver the same kind of data with a higher level of fidelity and without the onerous requirement to do self-reports.
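
To make that concrete, here is a minimal sketch of how such a mapping might be computed. The channel combination, baseline correction and weightings are illustrative assumptions on my part, not a validated model of the circumplex.

```python
def estimate_affect(corrugator: float, zygomaticus: float,
                    gsr: float, heart_rate: float,
                    baseline: dict) -> tuple:
    """Return (valence, activation), each roughly in the range -1..+1 relative to baseline."""
    # Valence: zygomaticus ('smile' muscle) activity above baseline pushes positive,
    # corrugator ('frown' muscle) activity above baseline pushes negative.
    valence = ((zygomaticus - baseline["zygomaticus"]) -
               (corrugator - baseline["corrugator"])) / baseline["emg_range"]

    # Activation: average of normalised GSR and heart-rate changes from baseline.
    activation = 0.5 * ((gsr - baseline["gsr"]) / baseline["gsr_range"] +
                        (heart_rate - baseline["hr"]) / baseline["hr_range"])

    # Clamp both values so the point always falls inside the circumplex space.
    valence = max(-1.0, min(1.0, valence))
    activation = max(-1.0, min(1.0, activation))
    return valence, activation
```

Sampled twice per second, something like this would match the temporal resolution of the stylus data without any effort on the part of the audience member.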

This system concept could really take off if you had hundreds of audience members wired up for a theatre performance and live feedback of the 'hive' emotion represented on stage. This could be a backdrop projection or the colour/intensity of stage lighting working as an en-masse biofeedback system. A clever installation could allow the performers to interact with the emotional representation of the audience – to gauge the audience response or to coerce certain responses.
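
As a toy illustration, the 'hive' mapping could be as simple as the sketch below: average the audience's valence/activation readings and turn them into a lighting colour. The colour mapping is entirely made up for the purpose of the example.

```python
def hive_colour(samples):
    """samples: list of (valence, activation) pairs in -1..+1, one per audience member."""
    if not samples:
        return (0, 0, 0)
    mean_valence = sum(v for v, _ in samples) / len(samples)
    mean_activation = sum(a for _, a in samples) / len(samples)

    intensity = (mean_activation + 1) / 2   # brighter lighting when the audience is more activated
    warmth = (mean_valence + 1) / 2         # warmer (redder) lighting when valence is more positive

    red = int(255 * intensity * warmth)
    green = int(255 * intensity * 0.3)      # small fixed green component
    blue = int(255 * intensity * (1 - warmth))
    return (red, green, blue)
```

Feeding a colour like this to the lighting rig every few seconds would close the en-masse biofeedback loop described above.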

Or perhaps this has already been done somewhere and I missed it.


Emotional HCI

I've just read a very interesting and provocative paper entitled "How emotion is made and measured" by Kirsten Boehner and colleagues. The paper provides a counter-argument to the perspective that emotion should be measured/quantified/objectified in HCI and used as input to an affective computing system or evaluation methodology. Instead, they propose that emotion is a dynamic interaction that is socially constructed and culturally mediated. In other words, the experience of anger is not a score of 7 on a 10-point scale that is fixed in time, but an unfolding iterative process based upon beliefs, social norms, expectations etc.

This argument seems fine in theory (to me) but difficult in practice. I get the distinct impression the authors are addressing the way emotion may be captured as part of an HCI evaluation methodology. But they go on to question the empirical approach in affective computing. In this part of the paper, they choose their examples carefully. Specifically, they focus on the category of 'mirroring' technologies (see earlier post), wherein representations of affective states are conveyed to other humans via technology. The really interesting idea here is that emotional categories are not given by a machine intelligence (e.g. happy vs. sad vs. angry) but generated via an interactive process. For example, friends and colleagues provide the semantic categories used to classify the emotional state of the person. Or literal representations of facial expression (a web-cam shot, for instance) are provided alongside a text or email to give the receiver an emotional context that can be freely interpreted. This is a very interesting approach to how an affective computing system may provide feedback to the user. Furthermore, I think once affective computing systems are widely available, the interpretive element of the software may be adapted or adjusted via an interactive process of personalisation.

So, the system provides an affective diagnosis as a first step, which is then refined and developed by the person – or even by others as time goes by – much like the way Amazon makes a series of recommendations based on your buying patterns that you can edit and tweak (if you have the time).

My big problem with this paper was that a very interesting debate was framed as an either/or position. So, if you use psychophysiology to index emotion, you're disregarding the experience of the individual by using objective conceptualisations of that state. If you use self-report scales to quantify emotion, you're rationalising an unruly process by imposing a bespoke scheme of categorisation, and so on. The perspective of the paper reminded me of the tiresome debate in psychology between objective/quantitative data and subjective/qualitative data about which method delivers "the truth." I say 'tiresome' because I tend towards the perspectivist view that both approaches provide 'windows' on a phenomenon, each with its own advantages and disadvantages.

But it’s an interesting and provocative paper that gave me plenty to chew over.


Body blogging

I've just returned from a summer school on pervasive adaptation organised under the PERADA project. As preparation for my talk, I was asked to identify some future applications for physiological computing. I drew on an idea first articulated by Rosalind Picard: that exposure to quantifiable, objective feedback about emotional states could serve an educational purpose – to aid awareness and self-regulation. Thinking about a future time when wearable sensors are standard and wirelessly connected to phones/PDAs/laptops, I came up with the idea of body blogging. The basic notion here is that you can review a physiological data set collected over a period of time, perhaps synchronised with a diary, and identify trends that might be of interest.

The big changes, such as sleep/wake cycles, are sort of interesting (did you really have a bad night's sleep?). If you take regular exercise, you might like to know how your body responded to that session at the gym or how many calories you burned during a run. Changes in physiology that relate to health, such as blood pressure, would be very interesting because hypertension tends to be essentially symptom-free, so the technology provides a window on a hidden aspect of life. Perhaps I'm a little too curious about this stuff, but I'd like to know what kinds of activities or contacts with people tend to increase physiological markers of stress.

The central concept is to use a monitoring technology as a tool to extend self-awareness and to make changes (in lifestyle or attitude) that counteract those negative influences that are part and parcel of everyday life. When I proudly presented the idea, it struck me as a little "niche" and perhaps a little strange – an impression confirmed by the general apathy of the audience. The next day, I checked my RSS feed from Wired and came across this article by Gary Wolf, who has obviously thought much more about this kind of stuff than me. He even runs a blog dedicated to the topic in conjunction with Kevin Kelly. Encouraged by this apparent serendipity, I brought up the prospect of body blogging again during my second talk of the summer school – but my audience remained distinctly underwhelmed, even though I sensed a small number thought the term 'body blogging' was neat.

As part of the health psychology module I teach, I've come across research on allostatic load (AL). This is a concept from stress research developed by Bruce McEwen among others; in essence, AL represents the temporal characteristics of how the body responds to a stressor (i.e. the magnitude of the response and the recovery time). As you may imagine, high stress reactivity with a slow recovery rate is bad for health. In fact, McEwen and Seeman linked AL to the concept of biological ageing – people with higher AL have bodies that age at a faster rate than their chronological age would suggest (and tend to suffer from poor health as a direct consequence). Here's an article explaining the application of this approach to the effects of socioeconomic status on health. There are several markers of AL, including blood pressure, waist:hip ratio, the hormone cortisol and the ratio of high- to low-density lipids (see previous link for more examples).

Which is an extremely long-winded way of wondering whether body blogging could help people to track their AL and biological age – and allow them to develop strategies and habits that minimise the impact of everyday stress on health. The current conception of AL relies heavily on measures taken from plasma samples, so perhaps that is a limiting factor. On the other hand, one problem with trying to sustain healthy lifestyle choices is the absence of clear, unequivocal feedback – so perhaps there is some hope for the concept of body blogging after all.
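
Just to show the flavour of the calculation, here is a minimal sketch of one common way AL is operationalised in the literature: count the number of biomarkers that fall into a 'high-risk' range. The markers and cut-off values below are placeholders for illustration, not clinical thresholds, and a body-blogging version could only cover the subset of markers that wearable sensors can actually measure.

```python
# Illustrative (not clinical) high-risk cut-offs for a handful of AL markers.
HIGH_RISK_THRESHOLDS = {
    "systolic_bp": 140.0,       # mmHg
    "diastolic_bp": 90.0,       # mmHg
    "waist_hip_ratio": 0.95,
    "cortisol": 25.0,           # arbitrary units, purely for illustration
    "lipid_ratio": 3.5,         # direction and value illustrative only
}

def allostatic_load(markers: dict) -> int:
    """Return a simple AL score: the number of markers at or above their high-risk threshold."""
    return sum(
        1 for name, threshold in HIGH_RISK_THRESHOLDS.items()
        if name in markers and markers[name] >= threshold
    )

# Example: allostatic_load({"systolic_bp": 150, "waist_hip_ratio": 0.9}) returns 1.
```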


Overt vs. Covert Expression

This article in New Scientist on Project Natal got me thinking about the pros and cons of monitoring overt expression via sophisticated cameras versus covert expression of psychological states via psychophysiology. The great thing about the depth-sensing cameras (summarised nicely by one commentator in the article as like having a Wii attached to each foot, each hand and your head) is that (1) it's wireless technology, (2) interactions are naturalistic, and (3) it's potentially robust (provided nobody else walks into the camera view). Also, because it captures overt expression of body position/posture or changes in facial expression/voice tone (the second being mooted as a phase-two development), it measures those signs and signals that people are usually happy to share with their fellow humans – so the feel of the interaction should be as naturalistic as a regular discourse.

So why bother monitoring psychophysiology in real time to represent the user? Let's face it – there are big question marks over its reliability, it's largely unproven in the field, and it normally involves attaching wires to the person – even if the sensors are wearable.

But to view this as a face-off between the two approaches in terms of sensor technology is to miss the point. The purpose of depth cameras is to give computer technology a set of eyes and ears to perceive and respond to overt visual or vocal cues from the user, whilst psychophysiological methods have been developed to capture covert changes that remain invisible to the eye. For example, a camera system may detect a frown in response to an annoying email, whereas a facial EMG recording will often detect increased activity from the corrugator or frontalis (i.e. the frown muscles) regardless of any visible change on the person's face.

One approach is geared up to the detection of visible cues, whereas the physiological computing approach is concerned with invisible changes in brain activity, muscle tension and autonomic activity. That last sentence makes the physiological approach sound superior, doesn't it? But the truth is that the two approaches do different things, and the question of which one is best depends largely on what kind of system you're trying to build. For example, if I'm building an application to detect high levels of frustration in response to shoot-em-up gameplay, perhaps overt behavioural cues (facial expression, vocal changes, postural changes) will be enough to detect that extreme state. On the other hand, if my system needs to resolve low vs. medium vs. high vs. critical levels of frustration, I'd have more confidence in psychophysiological measures to provide the necessary level of fidelity.
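
The fidelity point could be as simple as the sketch below: a graded mapping from a composite physiological index onto four levels, something that is much harder to do from a frown that may never reach the face. The composite weighting and the cut-offs are invented for illustration.

```python
def frustration_level(gsr_delta: float, corrugator_delta: float, hr_delta: float) -> str:
    """Map baseline-corrected physiological changes (each normalised to 0..1) onto four levels."""
    # Weighted composite of the normalised changes; the weights are illustrative only.
    index = 0.4 * gsr_delta + 0.4 * corrugator_delta + 0.2 * hr_delta

    if index < 0.25:
        return "low"
    if index < 0.5:
        return "medium"
    if index < 0.75:
        return "high"
    return "critical"
```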

Of course, the two approaches aren't mutually exclusive, and it's easy to imagine naturalistic input control going hand-in-hand with real-time system adaptation based on psychophysiological measures.

But that's the next step – Project Natal and similar systems will allow us to interact using naturalistic gestures and, to an extent, to construct a representation of user state based on overt behavioural cues. It's logical (sort of) that we begin on this road by extending the awareness of a computer system in a way that mimics our own perceptual apparatus. If we supplement that technology by granting the system access to subtle, covert changes in physiology, who knows what technical possibilities will open up?
