Mobile Monitors and Apps for Physiological Computing

I always harbored two assumptions about the development of physiological computing systems that have only become apparent (to me at least) as technological innovation seems to contradict them.  First of all, I thought nascent forms of physiological computing systems would be developed for desktop systems where the user stays in a stationary and more-or-less sedentary position, thus minimising the probability of movement artifacts.  Secondly, I assumed that physiological computing devices would only ever be achieved as coordinated, holistic systems.  In other words, specific sensors linked to a dedicated controller that provides input to adaptive software, all designed as a seamless chain of information flow.


iBrain

I just watched a TEDMED talk about the iBrain device via this link on the excellent Medgadget resource.  The iBrain is a single-channel EEG recorder that collects data via ‘dry’ electrodes and stores it on a conventional handheld device such as a cellphone.  In my opinion, the clever part of this technology is the application of mathematics to wring detailed information out of a limited data set – it’s a very efficient strategy.
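Just to give a flavour of what a pared-down analysis of one channel might look like (this is purely my own illustration, not NeuroVigil’s actual method), here’s a minimal sketch that estimates band power from a single EEG channel using Welch’s method; the sampling rate and band boundaries are assumptions on my part.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz) for the single EEG channel

# Illustrative frequency bands (Hz); exact boundaries vary between labs
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """Estimate power in each band from a 1-D, single-channel EEG signal."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)  # 4-second analysis windows
    df = freqs[1] - freqs[0]                        # frequency resolution
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

# Example with synthetic data standing in for 30 seconds of a real recording
if __name__ == "__main__":
    eeg = np.random.randn(FS * 30)
    print(band_powers(eeg))
```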

The hardware looks to be fairly standard – a wireless EEG link to a mobile device.  But its simplicity provides an indication of where this kind of physiological computing application could be going in the future – mobile monitoring for early detection of medical problems piggy-backing onto conventional technology.  If physiological computing applications become widespread, this kind of proactive medical monitoring could become standard.  And the main barrier to that is non-intrusive, non-medicalised sensor development.

In the meantime, Neurovigil, the company behind the product, recently announced a partnership with Swiss pharmaceutical giants Roche, who want to apply this technology to clinical drug trials.  I guess the methodology encourages the drug companies to consider covert changes in physiology as a sensitive marker of drug efficacy or side-effects.

I like the simplicity of the iBrain (1 channel of EEG) but the speaker makes some big claims for their analysis, and the implicit ones deal with the potential of EEG to identify neuropathologies.  That may be possible but I’m sceptical about whether 1 channel is sufficient.  The company have obviously applied their pared-down analysis to sleep stages with some success, but I was left wondering what added value the device provides compared to less-intrusive movement sensors used to analyse sleep behaviour, e.g. the Actiwatch.


Heart Chamber Orchestra

I came across this article about the Heart Chamber Orchestra on the Wired site last week.  The Orchestra are a group of musicians who wear ECG monitors whilst they play – the signals from the ECG feed directly into laptops and adapt the musical score in real time as it is being played.  They also have some nice graphics generated by the ECG running in the background when they play (see clip below).  What I think is really interesting about this project is the reflexive loop set up between the ECG, the musicians’ response and the adaptation of the musical score.  It really goes beyond standard biofeedback – a live feed from the ECG mutates the musical score, the players respond to the technical/emotional qualities of that score, which has a second-order effect on the ECG and so on.  In the Wired article, they refer to the possibility of the audience being equipped with ECG monitors to provide another input to the loop – which is truly a mind-boggling possibility in terms of a fully-functioning biocybernetic loop.

The thing I find slightly frustrating about the article and the information contained on the project website is the lack of detail about how the ECG influences the musical score.  In a straightforward way, an ECG will yield a beat-to-beat interval, which could of course generate a metronomic beat if averaged over the group.  Alternatively, each individual ECG could generate its own beat, which could be superimposed over one another.  But there are dozens of ways in which ECG information could be used to adapt a musical score in real time.  According to the project information, there is also a composer involved doing some live manipulations of the score, but it’s hard to figure out how much of the real-time transformation is coming from him or her and how much comes directly from the ECG signal.
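One naive way the mapping could work (my own guess for illustration, definitely not a description of the Orchestra’s actual system) would be to detect the R-peaks in each player’s ECG, turn the beat-to-beat intervals into a tempo, and average across the group:

```python
import numpy as np
from scipy.signal import find_peaks

FS = 250  # assumed ECG sampling rate (Hz)

def tempo_from_ecg(ecg, fs=FS):
    """Map one player's ECG to a tempo in beats per minute (naive sketch)."""
    # Treat the largest deflections as R-peaks, at least 0.4 s apart (< 150 bpm)
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95), distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs          # beat-to-beat (R-R) intervals in seconds
    return 60.0 / rr.mean() if rr.size else None

def group_tempo(ecg_channels, fs=FS):
    """Average the individual tempi over the group to drive a single metronome."""
    tempi = [t for t in (tempo_from_ecg(e, fs) for e in ecg_channels) if t]
    return sum(tempi) / len(tempi) if tempi else None
```

Even this toy version makes the size of the design space obvious: you could just as easily superimpose the individual pulses rather than averaging them, or map variability in the intervals onto dynamics or timbre.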

I should also say that the Orchestra are currently competing for the FILE PRIX LUX prize and you can vote for them here.

Before you do, you might want to see the orchestra in action in the clip below.

Heart Chamber Orchestra on Vimeo


Overt vs. Covert Expression

This article in New Scientist on Project Natal got me thinking about the pros and cons of monitoring overt expression via sophisticated cameras and covert expression of psychological states via psychophysiology.  The great thing about the depth-sensing cameras (summarised nicely by one commentator in the article as like having a Wii remote attached to each hand and foot) is that: (1) it’s wireless technology, (2) interactions are naturalistic, and (3) it’s potentially robust (provided nobody else walks into the camera view).  Also, because it captures overt expression of body position/posture or changes in facial expression/voice tone (the latter being mooted as a phase-two development), it measures those signs and signals that people are usually happy to share with their fellow humans – so the feel of the interaction should be as naturalistic as regular discourse.

So why bother monitoring psychophysiology in real time to represent the user?  Let’s face it – there are big question marks over its reliability, it’s largely unproven in the field and normally involves attaching wires to the person – even if they are wearable.

But to view a face-off between the two approaches in terms of sensor technology is to miss the point.  The purpose of depth cameras is to give computer technology a set of eyes and ears to perceive & respond to overt visual or vocal cues from the user, whilst psychophysiological methods have been developed to capture covert changes that remain invisible to the eye.  For example, a camera system may detect a frown in response to an annoying email, whereas a facial EMG recording will often detect increased activity from the corrugator or frontalis (i.e. the frown muscles) regardless of any change on the person’s face.
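To make the covert side of that example concrete, a minimal sketch (with an arbitrary 1.5x threshold I have invented for illustration, not a validated criterion) might compare the RMS amplitude of a corrugator EMG epoch against a resting baseline:

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of an EMG epoch."""
    return np.sqrt(np.mean(np.square(signal)))

def covert_frown(corrugator_epoch, baseline_epoch, ratio=1.5):
    """Flag raised corrugator activity relative to a resting baseline.

    The 1.5x ratio is an arbitrary illustration; the point is simply that
    activity can rise with no visible change on the face.
    """
    return rms(corrugator_epoch) > ratio * rms(baseline_epoch)
```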

One approach is geared up to the detection of visible cues whereas the physiological computing approach is concerned with invisible changes in brain activity, muscle tension and autonomic activity.  That last sentence makes the physiological approach sound superior, doesn’t it?  But the truth is that both approaches do different things, and the question of which one is best depends largely on what kind of system you’re trying to build.  For example, if I’m building an application to detect high levels of frustration in response to shoot-em-up gameplay, perhaps overt behavioural cues (facial expression, vocal changes, postural changes) will detect that extreme state.  On the other hand, if my system needed to resolve low vs. medium vs. high vs. critical levels of frustration, I’d have more confidence in psychophysiological measures to provide the necessary level of fidelity.
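As a toy illustration of that resolution argument, a camera-based detector might only return a binary frown/no-frown flag, whereas a continuous psychophysiological index (however it were derived and calibrated) could be binned into graded levels; the cut-points below are invented purely for illustration:

```python
def frustration_level(index):
    """Bin a normalised psychophysiological index (0-1) into graded levels.

    The index and cut-points are invented for this example; in practice they
    would come from calibrated, validated measures.
    """
    if index < 0.25:
        return "low"
    elif index < 0.5:
        return "medium"
    elif index < 0.75:
        return "high"
    else:
        return "critical"

print(frustration_level(0.62))  # -> "high"
```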

Of course both approaches aren’t mutually exclusive and it’s easy to imagine naturalistic input control going hand-in-hand with real-time system adaptation based on psychophysiological measures.

But that’s the next step – Project Natal and similar systems will allow us to interact using naturalistic gestures, and to an extent, to construct a representation of user state based on overt behavioural cues.  In hindsight, it’s logical (sort of) that we begin on this road by extending the awareness of a computer system in a way that mimics our own perceptual apparatus.  If we supplement that technology by granting the system access to subtle, covert changes in physiology, who knows what technical possibilities will open up?
