Special Issue of Interacting with Computers



I am one of the co-editors of a special issue of Interacting with Computers, which is now available online here.  The special issue, entitled Physiological Computing for Intelligent Adaptation, contains five full research papers covering a range of topics, such as the use of VR for stress reduction, mental workload monitoring and a comparison of EEG headsets.


Neurofeedback and the Attentive Brain


The act of paying attention or sustaining concentration is a good example of everyday cognition.  We all know the difference between an attentive state of being, when we are utterly focused and seem to absorb every ‘bit’ of information, and the diffuse experience of mind-wandering, where consciousness flits from one random topic to the next.  Understanding this distinction is easy, but the act of regulating the focus of attention can be a real challenge, especially if you didn’t get enough sleep or you’re not particularly interested in the task at hand.  Ironically, if you are totally immersed in a task, attention is absorbed to the extent that you don’t notice your clarity of focus.  At the other extreme, if you begin to day-dream, you are very unlikely to register any awareness of your inattentive state.

The capacity to self-regulate attentional focus is an important skill for many people, from the executives who sit in long meetings where important decisions are made to air traffic controllers, pilots, truck drivers and other professionals for whom the ability to concentrate has real consequences for the safety of themselves and others.

Technology can play a role in developing the capacity to regulate attentional focus.  The original biocybernetic loop developed at NASA was an example of how to incorporate a neurofeedback mechanism into the cockpit in order to ensure a level of awareness that was conducive to safe performance.  There are two components within this type of system: real-time analysis of brain activity as a proxy for attention, and translation of these data into ‘live’ feedback for the user.  The availability of explicit, real-time feedback on attentional state acts as an error signal to indicate the loss of concentration.
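Those two components can be sketched in a few lines of Python.  This is a minimal illustration, not NASA's implementation: it assumes band powers have already been extracted from the EEG, it borrows the engagement index beta/(alpha+theta) used in the original NASA work by Pope and colleagues, and the thresholds and feedback strings are invented for the example.

```python
def engagement_index(theta: float, alpha: float, beta: float) -> float:
    """EEG engagement index beta / (alpha + theta), a proxy for attention."""
    return beta / (alpha + theta)

def feedback(index: float, low: float = 0.4, high: float = 0.7) -> str:
    """Translate the index into 'live' feedback; the low band acts as an
    error signal indicating a loss of concentration."""
    if index < low:
        return "attention drifting"
    if index > high:
        return "fully engaged"
    return "steady"

# One pass of the loop over a single window of band-power estimates
print(feedback(engagement_index(theta=4.0, alpha=6.0, beta=3.0)))
```

In a real loop this pair of functions would run continuously over successive EEG windows, with the feedback rendered at the interface rather than printed.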

This article will tell a tale of two cultures: an academic paper that updates biocybernetic control of attention via real-time fMRI, and a Kickstarter project where the loop is encapsulated within a wearable device.


We Need To Talk About Clippy

Everyone who used MS Office between 1997 and 2003 remembers Clippy.  He was a help avatar designed to interact with the user in a way that was both personable and predictive.  He was a friendly sales assistant combined with a butler who anticipated all your needs.  At least, that was the idea.  In reality, Clippy fell well short of those expectations; he was probably the most loathed feature of those versions of Office.  He even featured in this Time Magazine list of the world’s worst inventions, a list that also includes Agent Orange and the Segway.

In an ideal world, Clippy would have responded to user behaviour in ways that were intuitive, timely and helpful.  In reality, his functionality was limited, his appearance often intrusive and his intuition was way off.  Clippy irritated users so completely that his legacy lives on more than ten years later.  If you describe the concept of an intelligent adaptive interface to most people, half of them will recall the dreadful experience of Clippy and the rest will probably be thinking about HAL from 2001: A Space Odyssey.  With those kinds of role models, it’s not difficult to understand why users are in no great hurry to embrace intelligent adaptation at the interface.

In the years since Clippy passed, the debate around machine intelligence has placed greater emphasis on the improvisational spark that is fundamental to displays of human intellect.  This recent article in MIT Technology Review makes the point that a “conversation” with Eugene Goostman (the chatterbot that won a Turing Test competition at Bletchley Park in 2012) lacks the natural “back and forth” of human-human communication.  Modern expectations of machine intelligence go beyond a simple imitation game within highly-structured rules; users are looking for a level of spontaneity and nuance that resonates with their human sense of what other people are.

But one of the biggest problems with Clippy was not simply his intrusiveness but the fact that his repertoire of responses was very constrained: he could ask if you were writing a letter (remember those?) and precious little else.


Can Physiological Computing Create Smart Technology?


The phrase “smart technology” has been around for a long time.  We have smart phones and smart televisions with functional capability that is massively enhanced by internet connectivity.  We also talk about smart homes that scale up into smart cities.  This hybrid between technology and the built environment promotes connectivity but with an additional twist – smart spaces monitor activity within their confines for the purposes of intelligent adaptation: to switch off lighting and heating if a space is uninhabited, to direct music from room to room as the inhabitant wanders through the house.

If smart technology is equated with enhanced connectivity and functionality, do those things translate into an increase of machine intelligence?  In his 2007 book ‘The Design Of Future Things‘, Donald Norman defined the ‘smartness’ of technology with respect to the way in which it interacted with the human user.  Inspired by J.C.R. Licklider’s (1960) definition of man-computer symbiosis, he claimed that smart technology was characterised by a harmonious partnership between person and machine.  Hence, the ‘smartness’ of technology is defined by the way in which it responds to the user and vice versa.

One prerequisite for a relationship between person and machine that is cooperative and compatible is to enhance the capacity of technology to monitor user behaviour.  Like any good butler, the machine needs to increase its awareness and understanding of user behaviour and user needs.  The knowledge gained via this process can subsequently be deployed to create intelligent forms of software adaptation, i.e. machine-initiated responses that are both timely and intuitive from a human perspective.  This upgraded form of human-computer interaction is attractive to technology providers and their customers, but is it realistic and achievable and what practical obstacles must be overcome?


What’s The Deal With Brain-to-Brain Interfaces?


When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was: don’t we already have those?  Didn’t we use to call them people?  But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym, ‘B2B.’

For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies.  Input is represented by volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI.  This signal is converted into an input signal for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver.’

TMS works by discharging an electrical current in brief pulses via a stimulating coil.  These pulses create a magnetic field that induces an electrical current at the surface of the cortex that is sufficiently strong to cause neuronal depolarisation.  Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour.  You can find an introduction to TMS here (it’s an old pdf but freely available).
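Putting the two halves together, a B2B system amounts to a one-way relay: decode a volitional signal on the sender side, then fire (or withhold) a TMS pulse on the receiver side.  A schematic sketch of that relay, with an entirely hypothetical decoder and a made-up threshold, is:

```python
def decode_intent(eeg_window: list[float], threshold: float = 1.0) -> bool:
    """Sender side: stand-in for an 'active' BCI classifier that detects
    a volitional change in EEG activity (e.g. imagined movement)."""
    mean_amplitude = sum(abs(x) for x in eeg_window) / len(eeg_window)
    return mean_amplitude > threshold

def relay_to_receiver(intent: bool) -> str:
    """Receiver side: convert the decoded intent into a command for the
    robotised TMS coil positioned on the receiver's head."""
    return "TMS_PULSE" if intent else "NO_PULSE"

# One sender window is decoded and relayed as a stimulation command
print(relay_to_receiver(decode_intent([1.8, 2.1, 1.5])))
```

Note that the information flow is strictly sender-to-receiver; nothing travels back, which is one reason the ‘interface’ label flatters what is really a one-bit channel.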

A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.


Forum for the Community for Passive BCI

A quick post to alert people to the first forum for the Community for Passive BCI Research, which takes place from the 16th to the 18th of July at the Hanse Institute for Advanced Study in Delmenhorst, near Bremen, Germany.  This event is being organised by Thorsten Zander from the Berlin Institute of Technology.

The main aim of the forum, in his own words, “is to connect researchers in this young field and to give them a platform to share their motivations and intentions. Therefore, the focus will not be primarily set on the presentation of new scientific results, but on the discussion of current and future directions and the possibilities to shape the community.”


Book Announcement – Advances in Physiological Computing

It was way back in 2011 during our CHI workshop that we first discussed the possibility of putting together an edited collection for Springer on the topic of physiological computing.  It was clear to me at that time that many people associated physiological computing with implicit monitoring as opposed to the active control that characterised BCI.  When we had the opportunity to put together a collection, one idea was to extend the scope of physiological computing to include all technologies where signals from the brain and the body are used as a form of input.  Some may interpret this all-inclusive relabelling of physiological computing as a provocative move.  But we did not intend this option as a conceptual ‘land-grab’; rather, it was an attempt to be as inclusive as possible and to bring together what I still perceive to be a rather disparate and fractured research community.  After all, we are all using psychophysiology in one form or another and share a common interest in sensor design, interaction mechanics and real-time measurement.

The resulting book is finally close to publication (tentative date: 4th April 2014) and you can follow this link to get the full details.  We’re pleased to have a wide range of contributions on an array of technologies, from eye input to digital memories via mental workload monitoring, implicit interaction, robotics, biofeedback and cultural heritage.  Thanks to all our contributors and the staff at Springer who helped us along the way.



Reflections on the First International Conference on Physiological Computing Systems


Last week I attended the first international conference on physiological computing held in Lisbon.  Before commenting on the conference, it should be noted that I was one of the program co-chairs, so I am not completely objective – but as this was something of a watershed event for research in this area, I didn’t want to let the conference pass without comment on the blog.

The conference lasted for two-and-a-half days and included four keynote speakers.  It was a relatively small meeting with respect to the number of delegates – but that is to be expected from a fledgling conference in an area that is somewhat niche with respect to methodology but very broad in terms of potential applications.


What kind of Meaningful Interaction would you like to have? Pt 1

A couple of years ago we organised this CHI workshop on meaningful interaction in physiological computing.  As much as I felt this was an important area for investigation, I also found the topic very hard to get a handle on.  I recently revisited this problem while working on a co-authored book chapter with Kiel for our forthcoming Springer collection entitled ‘Advances in Physiological Computing’, due out next May.

On reflection, much of my difficulty revolved around the complexity of defining meaningful interaction in context.  For systems like BCI or ocular control, where input control is the key function, the meaningfulness of the HCI is self-evident.  If I want an avatar to move forward, I expect my BCI to translate that intention into analogous action at the interface.  But biocybernetic systems, where spontaneous psychophysiology is monitored, analysed and classified, are a different story.  The goal of such a system is to adapt in a timely and appropriate fashion, and evaluating the literal meaning of that kind of interaction is complex for a host of reasons.


The Epoc and Your Next Job Interview


Imagine you are waiting to be interviewed for a job that you really want.  You’d probably be nervous, fingers drumming the table, eyes restlessly scanning the room.  The door opens and a man appears, wearing a lab coat and holding an EEG headset in both hands.  He places the set on your head and says “Your interview starts now.”

This Philip K Dick scenario became reality for intern applicants at the offices of TBWA, an advertising firm based in Istanbul.  And thankfully a camera was present to capture this WTF moment for each candidate, so the video could be uploaded to Vimeo.

The rationale for the exercise is quite clear.  The company want to appoint people who are passionate about advertising, so, working with a consultancy, they devised a test where candidates watch a series of acclaimed ads while the Epoc is used to measure their levels of ‘passion’, ‘love’ and ‘excitement’ in a scientific and numeric way.  Those who exhibit the greatest passion for adverts get the job (this is the narrative of the video; in reality one suspects/hopes they were interviewed as well).

I’ve seen at least one other blog post that expressed some reservations about the process.

Let’s take a deep breath because I have a whole shopping list of issues with this exercise.
