Accuracy is fundamental to the process of scientific measurement: we expect our gizmos and sensors to deliver data that are both robust and precise. If accurate data are available, reliable inferences can be made about whatever you happen to be measuring, and those inferences inform our understanding and prediction of future events. But an absence of accuracy is disastrous; if we cannot trust the data, then the rug is pulled out from under the scientific method.
Having worked as a psychophysiologist for longer than I care to remember, I’m acutely aware of this particular house of cards. Even if your ECG or SCL sensor is working perfectly, there are always artefacts that can affect data in a profound way: this participant had a double espresso before they came to the lab, another is persistently scratching their nose. Psychophysiologists have to pay attention to data quality because the act of psychophysiological inference is far from straightforward. In a laboratory, where conditions are carefully controlled, these unwelcome intrusions from the real world are handled by a two-pronged strategy: first, participants are asked to sit still, to refrain from excessive caffeine consumption and so on; and if that doesn’t work, we can remove the artefacts from the data record through various forms of post-hoc analysis.
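Those post-hoc analyses can take many forms. As a toy illustration (a sketch of the general principle, not any particular lab’s pipeline), here is how one might flag samples that deviate wildly from the rest of a recording and patch over them by interpolating from their clean neighbours:

```python
# Toy illustration of post-hoc artefact removal (a sketch, not any
# particular lab's pipeline): flag samples more than `threshold` standard
# deviations from the mean, then replace them by linear interpolation
# between the nearest clean neighbours.

def clean_signal(samples, threshold=3.0):
    n = len(samples)
    mean = sum(samples) / n
    sd = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    is_clean = [abs(x - mean) <= threshold * sd for x in samples]
    result = list(samples)
    for i in range(n):
        if is_clean[i]:
            continue
        # nearest clean neighbour on each side of the artefact
        left = next((j for j in range(i - 1, -1, -1) if is_clean[j]), None)
        right = next((j for j in range(i + 1, n) if is_clean[j]), None)
        if left is not None and right is not None:
            frac = (i - left) / (right - left)
            result[i] = samples[left] + frac * (samples[right] - samples[left])
        elif left is not None:
            result[i] = samples[left]   # trailing artefact: hold last value
        elif right is not None:
            result[i] = samples[right]  # leading artefact: back-fill
    return result

# A heart-rate trace (beats per minute) with one movement-artefact spike:
hr = [72, 74, 73, 75, 74, 72, 73, 180, 74, 75, 73, 74]
print(clean_signal(hr))  # the 180 bpm spike is replaced by 73.5
```

In real recordings the rejection criterion would be applied over a sliding window and tuned to the signal in question, but the principle is the same: detect the artefact, remove it, fill the gap.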
Working with physiological measures under real-world conditions, where people can drink coffee and dance around the table if they wish, presents a significant challenge for all the reasons just mentioned. So, why would anyone even want to do it? For the applied researcher, it’s a risk worth taking in order to get a genuine snapshot of human behaviour away from the artificialities of the laboratory. For people like myself, who are interested in physiological computing and using these data as inputs to technological systems, the challenge of accurate data capture in the real world is a fundamental issue. People don’t use technology in a laboratory; they use it out there in offices and cars and cafes and trains – and if we can’t get physiological computing systems to work ‘out there’ then one must question whether this form of technology is really feasible.
Last week I attended the first international conference on physiological computing held in Lisbon. Before commenting on the conference, it should be noted that I was one of the program co-chairs, so I am not completely objective – but as this was something of a watershed event for research in this area, I didn’t want to let the conference pass without comment on the blog.
The conference lasted for two-and-a-half days and included four keynote speakers. It was a relatively small meeting with respect to the number of delegates – but that is to be expected from a fledgling conference in an area that is somewhat niche with respect to methodology but very broad in terms of potential applications.
In last week’s excellent Bad Science article from The Guardian, Ben Goldacre puts his finger on a topic that I think is particularly relevant for physiological computing systems. He quotes press reports about MRI research into “hypoactive sexual desire response” – no, I hadn’t heard of it either, it’s a condition where the person has low libido. In this study women with the condition and ‘normals’ viewed erotic imagery in the scanner. A full article on the study from the Mail can be found here but what caught the attention of Bad Science is this interesting quote from one of the researchers involved: “Being able to identify physiological changes, to me provides significant evidence that it’s a true disorder as opposed to a societal construct.”
I always harboured two assumptions about the development of physiological computing systems that have only become apparent (to me at least) as technological innovation seems to contradict them. First of all, I thought nascent forms of physiological computing systems would be developed for desktop systems where the user stays in a stationary and more-or-less sedentary position, thus minimising the probability of movement artefacts. Also, I assumed that physiological computing devices would only ever be achieved as coordinated holistic systems: specific sensors linked to a dedicated controller that provides input to adaptive software, all designed as a seamless chain of information flow.
I just watched a TEDMED talk about the iBrain device via this link on the excellent Medgadget resource. The iBrain is a single-channel EEG recorder that uses ‘dry’ electrodes and stores its data on a conventional handheld device such as a cellphone. In my opinion, the clever part of this technology is the application of mathematics to wring detailed information out of a limited data set – it’s a very efficient strategy.
The hardware looks to be fairly standard – a wireless EEG link to a mobile device. But its simplicity provides an indication of where this kind of physiological computing application could be going in the future – mobile monitoring for early detection of medical problems piggy-backing onto conventional technology. If physiological computing applications become widespread, this kind of proactive medical monitoring could become standard. And the main barrier to that is non-intrusive, non-medicalised sensor development.
In the meantime, Neurovigil, the company behind the product, recently announced a partnership with Swiss pharmaceutical giant Roche, who want to apply this technology to clinical drug trials. I guess the methodology encourages drug companies to consider covert changes in physiology as a sensitive marker of drug efficacy or side-effects.
I like the simplicity of the iBrain (one channel of EEG), but the speaker makes some big claims for the analysis; the implicit ones concern the potential of EEG to identify neuropathologies. That may be possible, but I’m sceptical about whether one channel is sufficient. The company have obviously applied their pared-down analysis to sleep stages with some success, but I was left wondering what added value the device provides compared to the less-intrusive movement sensors used to analyse sleep behaviour, e.g. the Actiwatch.
In my last post I articulated a concern about how the name adopted by this field may drive the research in one direction or another. I’ve adopted the Physiological Computing (PC) label because it covers the widest range of possible systems. Whilst the PC label is broad, generic and probably vague, it does cover a lot of different possibilities without getting into the tortured semantics of categories, sub-categories and sub-sub-categories.
I’ve defined PC as a computer system that uses real-time bio-electrical activity as input data. At one level, moving a mouse (or a Wii) with your hand represents a form of physiological computing as do physical interfaces based on gestures – as both are ultimately based on muscle potentials. But that seems a little pedantic. In my view, the PC concept begins with Muscle Interfaces (e.g. eye movements) where the electrical activity of muscles is translated into gestures or movements in 2D space. Brain-Computer Interfaces (BCI) represent a second category where the electrical activity of the cortex is converted into input control. Biofeedback represents the ‘parent’ of this category of technology and was ultimately developed as a control device, to train the user how to manipulate the autonomic nervous system. By contrast, systems involving biocybernetic adaptation passively monitor spontaneous activity from the central nervous system and translate these signals into real-time software adaptation – most forms of affective computing fall into this category. Finally, we have the ‘black box’ category of ambulatory recording where physiological data are continuously recorded and reviewed at some later point in time by the user or medical personnel.
I’ve tried to capture these different categories in the diagram below. The differences between each grouping lie on a continuum from overt observable physical activity to covert changes in psychophysiology. Some are intended to function as explicit forms of intentional communication with continuous feedback, others are implicit with little intentionality on the part of the user. Also, there is huge overlap between the five different categories of PC: most involve a component of biofeedback and all will eventually rely on ambulatory monitoring in order to function. What I’ve tried to do is sketch out the territory in the most inclusive way possible. This inclusive scheme also makes hybrid systems easier to imagine, e.g. BCI + biocybernetic adaptation, muscle interface + BCI – basically we have systems (2) and (3) designed as input control, either of which may be combined with (5) because it operates in a different way and at a different level of the HCI.
As usual, all comments welcome.
Five Categories of Physiological Computing
I’ve just returned from a summer school on pervasive adaptation organised under the PERADA project. As preparation for my talk, I was asked to identify some future applications for physiological computing. I drew on an idea first articulated by Rosalind Picard: that exposure to quantifiable, objective feedback about emotional states could serve an educational purpose – to aid awareness and self-regulation. Thinking about a future time when wearable sensors are standard and wirelessly connected to phones/PDAs/laptops, I came up with the idea of body blogging. The basic notion here is that you can review a physiological data set collected over a period of time, perhaps synchronised with a diary, and identify trends that might be of interest.
The big changes, such as sleep/wake cycles, are sort of interesting (did you really have a bad night’s sleep?). If you take regular exercise, you might like to know how your body responded to that session at the gym or how many calories you burned during a run. Changes in physiology that relate to health, such as blood pressure, would be very interesting because hypertension tends to be essentially symptom-free, so the technology is providing a window on a hidden aspect of life. Perhaps I’m a little too curious about this stuff, but I’d like to know what kind of activities or contact with people tended to increase physiological markers of stress.
The central concept is to use a monitoring technology as a tool to extend self-awareness and to make changes (in lifestyle or attitude) that counteract those negative influences that are part-and-parcel of everyday life. When I proudly presented the idea, it struck me as a little “niche” and perhaps a little strange – an impression confirmed by the general apathy of the audience. The next day, I checked my RSS feed from Wired and came across this article by Gary Wolf, who has obviously thought much more about this kind of stuff than me. He even runs a blog in conjunction with Kevin Kelly dedicated to the topic. Encouraged by this apparent serendipity, I brought up the prospect of body blogging again during my second talk of the summer school – but my audience remained distinctly underwhelmed, even though I sensed a small number thought the term ‘body blogging’ was neat.
As part of the health psychology module I teach, I’ve come across research on allostatic load (AL). This is a concept from stress research developed by Bruce McEwen among others; in essence, AL represents the temporal characteristics of how the body responds to a stressor (i.e. the magnitude of the response, the recovery time). As you may imagine, high stress reactivity with a slow recovery rate is bad for health. In fact, McEwen and Seeman linked AL to the concept of biological aging – people with higher AL have bodies that age faster than their chronological age would suggest (and tend to suffer from poor health as a direct consequence). Here’s an article explaining the application of this approach to the effects of socioeconomic status on health. There are several markers of AL, including blood pressure, waist:hip ratio, the hormone cortisol, and the ratio of high- to low-density lipoproteins (see the previous link for more examples).
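In practice, AL indices of this kind boil down to counting how many of a person’s markers fall into a high-risk range. A minimal sketch of that scoring logic – with invented, purely illustrative cutoffs, not clinical thresholds – might look like this:

```python
# Toy sketch of an allostatic-load index: count how many markers fall in
# a 'high-risk' range. The cutoff values below are invented for
# illustration only -- they are NOT clinical thresholds.

ILLUSTRATIVE_RISK_CUTOFFS = {
    "systolic_bp":     lambda v: v >= 140,   # mmHg (hypothetical cutoff)
    "waist_hip_ratio": lambda v: v >= 0.95,  # hypothetical cutoff
    "cortisol":        lambda v: v >= 20.0,  # hypothetical units/cutoff
    "hdl_ldl_ratio":   lambda v: v <= 0.3,   # low HDL relative to LDL
}

def allostatic_load(markers):
    """Return the number of markers in the high-risk range."""
    return sum(1 for name, value in markers.items()
               if ILLUSTRATIVE_RISK_CUTOFFS[name](value))

person = {"systolic_bp": 150, "waist_hip_ratio": 0.90,
          "cortisol": 22.0, "hdl_ldl_ratio": 0.40}
print(allostatic_load(person))  # 2 markers flagged (blood pressure, cortisol)
```

A body-blogging system could track a score like this over weeks or months, which is where the longitudinal diary aspect would earn its keep.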
Which is an extremely long-winded way of wondering whether body blogging could help people to track their AL and biological age – and allow them to develop strategies and habits that minimise the impact of everyday stress on health. The current conception of AL relies heavily on measures taken from plasma samples, so perhaps that is a limiting factor. On the other hand, one problem with trying to sustain healthy lifestyle choices is the absence of clear, unequivocal feedback – so perhaps there is some hope for the concept of body blogging after all.
There’s a short summary of a project called ‘Mobile Heart Health’ in the latest issue of IEEE Pervasive Computing (April-June 2009). The project was conducted at Intel Labs and uses an ambulatory ECG sensor to connect to a mobile telephone. The ECG monitors heart rate variability; if high stress is detected, the user is prompted by the phone to run through a number of relaxation therapies (controlled breathing) to provide ‘just-in-time’ stress management. It’s an interesting project, both in conceptual terms (I imagine pervasive monitoring and stress management would be particularly useful for cardiac outpatients) and in terms of interface design (how to alert the stressed user to their stressed state without making them even more stressed). Here’s a link to the magazine which includes a downloadable pdf of the article.
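The article doesn’t give implementation details, but the underlying logic is easy to imagine. Here is a minimal sketch (my reconstruction, not Intel’s actual algorithm) using RMSSD, a standard heart-rate-variability statistic computed over a window of beat-to-beat (RR) intervals, with an illustrative threshold below which the phone would prompt a breathing exercise:

```python
# Minimal sketch of 'just-in-time' stress detection from heart rate
# variability (my reconstruction, not Intel's actual algorithm). RMSSD is
# a standard HRV statistic; low values are associated with stress. The
# 20 ms threshold here is illustrative, not a validated cutoff.

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def check_stress(rr_intervals_ms, threshold_ms=20.0):
    """Suggest a relaxation exercise when HRV drops below the threshold."""
    if rmssd(rr_intervals_ms) < threshold_ms:
        return "Prompt: try a two-minute controlled-breathing exercise"
    return "No intervention needed"

relaxed = [812, 790, 835, 805, 842, 798]   # variable beat-to-beat timing
stressed = [610, 612, 609, 611, 610, 612]  # rigid, low-variability timing

print(check_stress(relaxed))   # No intervention needed
print(check_stress(stressed))  # Prompt: try a two-minute ...
```

The hard part, as the article suggests, isn’t the detection logic – it’s deciding when and how to deliver that prompt without adding to the user’s stress.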