The Long Road of Intelligent Adaptation

I originally coined the term ‘physiological computing’ to describe a whole class of emerging technologies constructed around closed-loop control.  These technologies collect implicit measures from the brain and body of the user, which inform a process of intelligent adaptation at the user interface.

If you survey research in this field, from mental workload monitoring to applications in affective computing, there’s an overwhelming bias towards the first part of the closed loop – the business of designing sensors, collecting data and classifying psychological states.  In contrast, you see very little on what happens at the interface once target states have been detected.  The dearth of work on intelligent adaptation is a problem because signal processing protocols and machine learning algorithms are being developed in a vacuum – without any context for usage.  This disconnect both neglects and negates the holistic nature of closed-loop control and the direct link between classification and adaptation.  We can even generate a maxim to describe the relationship between the two:

the number of states recognised by a physiological computing system should be the minimum required to support the range of adaptive options that can be delivered at the interface

This maxim minimises the number of states to enhance classification accuracy, while making an explicit link between the act of measurement at the first part of the loop and the process of adaptation that is the last link in the chain.
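
As a concrete (and entirely hypothetical) sketch of the maxim in action, imagine a system whose classifier distinguishes only three states because the interface only has three adaptive responses to offer – every state name and adaptation method below is invented for illustration:

```python
# A minimal sketch of the maxim: the classifier recognises exactly as
# many states as the interface can act upon.  All names are invented.

from enum import Enum, auto

class UserState(Enum):
    OVERLOADED = auto()
    ENGAGED = auto()
    UNDERLOADED = auto()

# Each recognised state maps to exactly one adaptive option at the
# interface; no state is detected that cannot be acted upon.
ADAPTATIONS = {
    UserState.OVERLOADED:  lambda ui: ui.simplify_display(),
    UserState.ENGAGED:     lambda ui: ui.leave_alone(),
    UserState.UNDERLOADED: lambda ui: ui.increase_task_demand(),
}

def closed_loop_step(sensor, classifier, ui):
    """One pass around the loop: sense -> classify -> adapt."""
    features = sensor.read()               # implicit measures from brain/body
    state = classifier.predict(features)   # one of the three target states
    ADAPTATIONS[state](ui)                 # the matching interface adaptation
```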

If this kind of stuff sounds abstract or of limited relevance to the research community, it shouldn’t.  If we look at research into the classic ‘active’ BCI paradigm, there is clear continuity between state classification and corresponding actions at the interface.  This continuity owes its prominence to the fact that the BCI research community is dedicated to enhancing the lives of end users, and the utility of the system lies at the core of their research process.  But to be fair, the link between brain activation and input control is direct and easy to conceptualise in the ‘active’ BCI paradigm.  For those systems that work on an implicit basis, detection of the target state is merely the jumping-off point for a complicated process of user interface design.

Neuroadaptive Technology as Symmetrical Human-Computer Interaction

Back in 2003, Lawrence Hettinger and colleagues penned this paper on the topic of neuroadaptive interface technology. This concept described a closed-loop system where fluctuations in cognitive activity or emotional state inform the functional characteristics of an interface. The core concept sits comfortably with a host of closed-loop technologies in the domain of physiological computing.

One great insight from this 2003 paper was to describe how neuroadaptive interfaces could enhance communication between person and system. They argued that human-computer interaction currently existed in an asymmetrical form. The person can access a huge amount of information about the computer system (available RAM, number of active operations) but the system is fundamentally ‘blind’ to the intentions of the user or their level of mental workload, frustration or fatigue. Neuroadaptive interfaces would enable symmetrical forms of human-computer interaction, where technology can respond to implicit changes in the human nervous system and, most significantly, interpret those covert sources of data in order to inform responses at the interface.
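
To make the asymmetric/symmetric distinction tangible, here is a toy sketch (the event names and interface methods are my own inventions, not drawn from the Hettinger paper) in which implicit physiological events are handled as a first-class input channel alongside explicit commands:

```python
# Toy illustration of symmetrical HCI: implicit state changes from the
# user's nervous system arrive as input events in their own right,
# alongside explicit commands.  All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ExplicitEvent:
    """Volitional input: a keypress, click, spoken command..."""
    command: str

@dataclass
class ImplicitEvent:
    """Covert input inferred from psychophysiological data."""
    state: str          # e.g. "high_workload", "frustration"
    confidence: float   # classifier confidence, 0-1

def handle(event, ui):
    if isinstance(event, ExplicitEvent):
        ui.execute(event.command)   # the classic, asymmetrical channel
    elif isinstance(event, ImplicitEvent):
        # the symmetrical half: the system interprets covert data and
        # adapts without being explicitly instructed to do so
        if event.state == "high_workload" and event.confidence > 0.8:
            ui.defer_notifications()
```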

Allowing humans to communicate implicitly with machines in this way could enormously increase the efficiency of human-computer interaction with respect to ‘bits per second’. The keyboard, mouse and touchscreen remain the dominant modes of input control by which we translate thoughts into action in the digital realm. We communicate with computers via volitional acts of explicit perceptual-motor control – the same asymmetrical/explicit model of HCI holds true for naturalistic modes of input control, such as speech and gestures. The concept of a symmetrical HCI based on implicit signals that are generated spontaneously and automatically by the user represents a significant shift from conventional modes of input control.

This recent paper published in PNAS by Thorsten Zander and colleagues provides a demonstration of a symmetrical, neuroadaptive interface in action.

Neurofeedback and the Attentive Brain

The act of paying attention or sustaining concentration is a good example of everyday cognition.  We all know the difference between an attentive state of being, when we are utterly focused and seem to absorb every ‘bit’ of information, and the diffuse experience of mind-wandering, where consciousness flits from one random topic to the next.  Understanding this distinction is easy, but the act of regulating the focus of attention can be a real challenge, especially if you didn’t get enough sleep or you’re not particularly interested in the task at hand.  Ironically, if you are totally immersed in a task, attention is absorbed to the extent that you don’t notice your clarity of focus.  At the other extreme, if you begin to day-dream, registering any awareness of your inattentive state is very unlikely.

The capacity to self-regulate attentional focus is an important skill for many people, from the executives who sit in long meetings where important decisions are made to air traffic controllers, pilots, truck drivers and other professionals for whom the ability to concentrate has real consequences for the safety of themselves and others.

Technology can play a role in developing the capacity to regulate attentional focus.  The original biocybernetic loop developed at NASA was an example of how to incorporate a neurofeedback mechanism into the cockpit in order to ensure a level of awareness that was conducive to safe performance.  There are two components within this type of system: real-time analysis of brain activity as a proxy for attention and translation of these data into ‘live’ feedback to the user.  The availability of explicit, real-time feedback on attentional state acts as an error signal to indicate the loss of concentration.
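
As a rough sketch of those two components, the snippet below computes an attention proxy in the spirit of the engagement index (beta power over alpha-plus-theta power) associated with the original NASA research, and turns it into a simple feedback signal.  The threshold, windowing and feedback channel are all placeholder choices of mine rather than details of any published system:

```python
# Sketch of a neurofeedback loop for attention: (1) a real-time index
# of attention derived from EEG band power, modelled on the classic
# engagement index beta / (alpha + theta), and (2) a simple feedback
# signal when the index falls below threshold.  Threshold and window
# handling are arbitrary illustrative values.

import numpy as np

def band_power(samples: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Mean spectral power in a frequency band via a plain FFT."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    power = np.abs(np.fft.rfft(samples)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(power[mask].mean())

def engagement_index(samples: np.ndarray, fs: int) -> float:
    """Attention proxy: beta power relative to alpha + theta power."""
    theta = band_power(samples, fs, 4.0, 8.0)
    alpha = band_power(samples, fs, 8.0, 13.0)
    beta = band_power(samples, fs, 13.0, 30.0)
    return beta / (alpha + theta)

def neurofeedback_step(samples, fs, threshold=0.7, feedback=print):
    """Translate the attention proxy into 'live' feedback for the user."""
    index = engagement_index(samples, fs)
    if index < threshold:
        feedback("attention low -- refocus")   # error signal to the user
    return index
```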

This article will tell a tale of two cultures: an academic paper that updates biocybernetic control of attention via real-time fMRI, and a Kickstarter project where the loop is encapsulated within a wearable device.

What’s The Deal With Brain-to-Brain Interfaces?

When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was – don’t we already have those?  Didn’t we use to call them people?  But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym, ‘B2B’.

For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies.  Input is represented by volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI.  This signal is converted into an input signal for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver’.

TMS works by discharging an electrical current in brief pulses via a stimulating coil.  These pulses create a magnetic field that induces an electrical current at the surface of the cortex that is sufficiently strong to trigger neuronal depolarisation.  Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour.  You can find an introduction to TMS here (it’s an old pdf but freely available).
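
Putting the two halves together, the pipeline might be sketched like this – a purely illustrative stub, with invented function and method names, that ignores the calibration, safety interlocks and precise coil positioning a real system requires:

```python
# Illustrative stub of a brain-to-brain (B2B) cycle: the sender's
# volitional EEG activity is classified as in an 'active' BCI, and a
# positive detection becomes a trigger for the TMS coil positioned
# over the receiver's head.  All names here are hypothetical.

def extract_features(eeg):
    """Placeholder: real systems use spatial filtering + band power."""
    return eeg.mean(axis=0)

def b2b_step(sender_eeg, classifier, tms_coil):
    """One sender-to-receiver cycle of a brain-to-brain interface."""
    features = extract_features(sender_eeg)
    intent = classifier.predict(features)   # did the sender 'will' a command?
    if intent == "send":
        # discharge a brief pulse; the induced cortical current produces
        # a sensory or motor event in the receiver
        tms_coil.pulse()
```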

A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.
