Neuroadaptive Technology as Symmetrical Human-Computer Interaction

Back in 2003, Lawrence Hettinger and colleagues penned this paper on the topic of neuroadaptive interface technology. The concept describes a closed-loop system in which fluctuations in cognitive activity or emotional state inform the functional characteristics of an interface. The core concept sits comfortably alongside a host of closed-loop technologies in the domain of physiological computing.

One great insight from the 2003 paper was to describe how neuroadaptive interfaces could enhance communication between person and system. The authors argued that human-computer interaction currently exists in an asymmetrical form. The person can access a huge amount of information about the computer system (available RAM, number of active operations), but the system is fundamentally ‘blind’ to the intentions of the user or their level of mental workload, frustration or fatigue. Neuroadaptive interfaces would enable symmetrical forms of human-computer interaction, where technology can respond to implicit changes in the human nervous system and, most significantly, interpret those covert sources of data in order to inform responses at the interface.

Allowing humans to communicate implicitly with machines in this way could enormously increase the efficiency of human-computer interaction with respect to ‘bits per second’. The keyboard, mouse and touchscreen remain the dominant modes of input control by which we translate thoughts into action in the digital realm. We communicate with computers via volitional acts of explicit perceptual-motor control – and the same asymmetrical/explicit model of HCI holds true for naturalistic modes of input control, such as speech and gestures. The concept of a symmetrical HCI based on implicit signals that are generated spontaneously and automatically by the user represents a significant shift from conventional modes of input control.
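To make the idea concrete, here is a minimal sketch of such a closed loop in Python. Everything in it is hypothetical: read_workload_index() stands in for a real sensor driver, and the adaptation rules and thresholds are invented for illustration rather than taken from Hettinger et al.

```python
import random
import time

def read_workload_index() -> float:
    """Stand-in for a real sensor driver; simulates a normalised
    0-1 workload estimate derived from, e.g., EEG or heart rate."""
    return random.random()

def adapt_interface(workload: float) -> None:
    """Illustrative adaptation rules; the thresholds are invented."""
    if workload > 0.8:
        print("High workload inferred: suppressing notifications")
    elif workload < 0.2:
        print("Low workload inferred: surfacing secondary tasks")

# The closed loop that makes the interaction 'symmetrical': the
# system continuously reads covert user state and responds to it,
# without any explicit act of input control from the user.
for _ in range(10):  # a few cycles for demonstration
    adapt_interface(read_workload_index())
    time.sleep(1.0)
```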

This recent paper published in PNAS by Thorsten Zander and colleagues provides a demonstration of a symmetrical, neuroadaptive interface in action.


Special Issue of Interacting with Computers


I am one of the co-editors of a special issue of Interacting with Computers, which is now available online here. The title of the special issue is Physiological Computing for Intelligent Adaptation; it contains five full research papers covering a range of topics, such as the use of VR for stress reduction, mental workload monitoring and a comparison of EEG headsets.


Comfort and Comparative Performance of the Emotiv EPOC


I’ve written a couple of posts about the Emotiv EPOC over the years of doing the blog, from user interface issues in this post to the uncertainties surrounding the device for customers and researchers here.

The good news is that research is starting to emerge in which the EPOC has been systematically compared to other devices, so perhaps some uncertainties can be resolved. The first study, from Ekandem et al., was published in the journal Ergonomics in 2012. You can read an abstract here (apologies to those without a university account who can’t get behind the paywall). These authors performed an ergonomic evaluation of both the EPOC and the NeuroSky MindWave. Data were obtained from 11 participants, each of whom wore either a MindWave or an EPOC for 15 minutes on different days. They concluded that there was no clear ‘winner’ from the comparison. The EPOC has 14 sensor sites compared to the single site used by the MindWave, hence it took longer to set up and required more cleaning afterwards (and more consumables). No big surprises there. It follows that signal acquisition was easier with the MindWave, but the authors report that once the EPOC was connected and calibrated, its signal quality was more consistent than the MindWave’s, despite sensor placement for the former being obstructed by hair.
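As an aside, ‘more consistent signal quality’ can be operationalised in different ways. The sketch below shows one simple reading of it, the variability of windowed RMS amplitude over a recording; this metric and the simulated data are my own illustration, not the measure Ekandem et al. actually used.

```python
import numpy as np

def rms_consistency(signal: np.ndarray, fs: float, win_s: float = 2.0) -> float:
    """Coefficient of variation of windowed RMS amplitude:
    lower values indicate a steadier, more consistent signal."""
    n = int(fs * win_s)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((windows ** 2).mean(axis=1))
    return float(rms.std() / rms.mean())

# Simulated stand-ins for one EPOC channel (128 Hz sampling) and
# the MindWave channel (512 Hz); substitute real recordings here.
rng = np.random.default_rng(0)
epoc = rng.normal(0.0, 10.0, 128 * 60)      # 60 s of noise-like EEG
mindwave = rng.normal(0.0, 10.0, 512 * 60)

print("EPOC consistency:", rms_consistency(epoc, 128))
print("MindWave consistency:", rms_consistency(mindwave, 512))
```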


Troubleshooting and Mind-Reading: Developing EEG-based interaction with commercial systems

With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct research communities at work. The first (and older) community consists of university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky devices. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks to embellish the interactive experience, with a strong emphasis on commercialisation.

There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.


iBrain

I just watched a TEDMED talk about the iBrain device via this link on the excellent Medgadget resource. The iBrain is a single-channel EEG recorder that collects data via ‘dry’ electrodes and stores it on a conventional handheld device such as a cellphone. In my opinion, the clever part of this technology is the application of mathematics to wring detailed information out of a limited data set – it’s a very efficient strategy.
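NeuroVigil’s actual analysis is proprietary, so the following is only a generic sketch of what ‘wringing information out of one channel’ can look like: a Welch power spectrum reduced to a handful of band powers, the kind of compact feature set a phone could store and transmit. The sampling rate and data below are simulated.

```python
import numpy as np
from scipy.signal import welch

def band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Reduce one channel of EEG to classic band powers via a
    Welch spectrum -- a compact summary of a limited data set."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 4))
    bands = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

# Simulated 30 s single-channel recording at 256 Hz; in practice
# this would be the dry-electrode signal relayed to the handset.
fs = 256
t = np.arange(fs * 30) / fs
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(eeg, fs))
```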

The hardware looks to be fairly standard – a wireless EEG link to a mobile device. But its simplicity provides an indication of where this kind of physiological computing application could be going in the future: mobile monitoring for early detection of medical problems, piggy-backing onto conventional technology. If physiological computing applications become widespread, this kind of proactive medical monitoring could become standard. The main barrier to that is the development of non-intrusive, non-medicalised sensors.

In the meantime, NeuroVigil, the company behind the product, recently announced a partnership with the Swiss pharmaceutical giant Roche, who want to apply this technology to clinical drug trials. I guess the methodology encourages drug companies to consider covert changes in physiology as a sensitive marker of drug efficacy or side-effects.

I like the simplicity of the iBrain (one channel of EEG), but the speaker makes some big claims for the analysis; the implicit ones deal with the potential of EEG to identify neuropathologies. That may be possible, but I’m sceptical about whether one channel is sufficient. The company have obviously applied their pared-down analysis to sleep stages with some success, but I was left wondering what added value the device provides compared to less-intrusive movement sensors used to analyse sleep behaviour, e.g. the Actiwatch.


This is your brain giving up

Like a lot of people, I came to the area of physiological computing via affective computing. The early work I read placed enormous emphasis on how systems might distinguish different categories of emotion, e.g. frustration vs. happiness. This is important for some applications, but I was most interested in user states that relate to task performance, specifically those states that might precede and predict a breakdown of performance. The latter can take several forms: the quality of performance can collapse because the task is too complex to figure out, or because you’re too tired or too drunk, and so on. What really interested me was how performance collapsed when people simply gave up or ‘exhibited insufficient motivation’, as the psychology textbooks would say.

People can give up for all kinds of reasons – they may be insufficiently challenged (i.e. bored), they may be frustrated because the task is too hard, they may simply have something better to do.  The prediction of motivation or task engagement seems very important to me for biocybernetic adaptation applications, such as games and educational software. Several psychology research groups have looked at this issue by studying psychophysiological changes accompanying changes in motivation and responses to increased task demand.  A group led by Alan Gevins performed a number of studies where they incrementally ramped up task demand; they found that theta activity in the EEG increased in line with task demands.  They noted this increase was specific to the frontal-central area of the brain.

We partially replicated one of Gevins’ studies last year and found support for changes in frontal theta.  We tried to make the task very difficult so people would give up but were not completely successful (when you pay people to come to your lab, they tend to try really hard).  So we did a second study, this time making the ‘impossible’ version of the task really impossible.  The idea was to expose people to low, high and extremely high levels of memory load.  In order to make the task impossible, we also demanded participants hit a minimum level of performance, which was modest for the low demand condition and insanely high for the extremely high demand task.  We also had our participants do each task on two occasions; once with the chance to win cash incentives and once without.

The results for the frontal theta are shown in the graphic below. You can clearly see the frontal-central location of the activity (NB: the redder the area, the more theta activity was present). What’s particularly interesting, and especially clear in the incentive condition (top row of the graphic), is that our participants reduced theta activity when they thought they didn’t have a chance. As one might suspect, task engagement includes a strong component of volition, and brain activity should reflect the decision to give up and disengage from the task. We’ll be following up this work to investigate how we might use the ebb and flow of frontal theta to capture task engagement and integrate it into a real-time system.
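As a closing sketch of what that real-time system might look like: frontal theta power computed in short sliding windows, smoothed with an exponential moving average and expressed relative to an engaged-state baseline. The window length, smoothing factor, baseline and disengagement cut-off below are all invented for illustration; this is not our actual analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def theta_power(window: np.ndarray, fs: float) -> float:
    """Theta-band (4-7 Hz) power of one frontal-channel window."""
    freqs, psd = welch(window, fs=fs, nperseg=len(window))
    return float(psd[(freqs >= 4) & (freqs <= 7)].sum())

def track_engagement(fz: np.ndarray, fs: float, baseline: float,
                     win_s: float = 2.0, alpha: float = 0.2):
    """Yield a smoothed theta index per window; sustained values
    well below 1.0 would suggest the user is disengaging."""
    n, ema = int(fs * win_s), None
    for start in range(0, len(fz) - n + 1, n):
        p = theta_power(fz[start:start + n], fs)
        ema = p if ema is None else alpha * p + (1 - alpha) * ema
        yield ema / baseline  # e.g. < 0.7 might flag 'giving up'

# Simulated Fz data; the baseline would come from a calibration
# block recorded while the user is known to be engaged.
fs = 256
fz = np.random.default_rng(1).normal(0.0, 1.0, fs * 20)
for ratio in track_engagement(fz, fs, baseline=0.05):
    print(f"engagement index: {ratio:.2f}")
```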
