Intelligent Wearables

Accuracy is fundamental to the process of scientific measurement: we expect our gizmos and sensors to deliver data that are both robust and precise. If accurate data are available, reliable inferences can be made about whatever you happen to be measuring, and those inferences inform understanding and prediction of future events. But an absence of accuracy is disastrous: if we cannot trust the data, the rug is pulled out from under the scientific method.

Having worked as a psychophysiologist for longer than I care to remember, I’m acutely aware of this particular house of cards. Even if your ECG or SCL sensor is working perfectly, there are always artefacts that can affect data in a profound way: this participant had a double espresso before they came to the lab, another is persistently scratching their nose. Psychophysiologists have to pay attention to data quality because the act of psychophysiological inference is far from straightforward*. In a laboratory where conditions are carefully controlled, these unwelcome interventions from the real world are handled by a double strategy – first of all, participants are asked to sit still, refrain from excessive caffeine consumption and so on; if that doesn’t work, we can remove the artefacts from the data record by employing various forms of post-hoc analysis.
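To make that post-hoc strategy concrete, here is a minimal sketch of the simplest form of artefact repair: flag physiologically implausible samples in a skin-conductance trace and interpolate across them. The thresholds and the synthetic trace are illustrative assumptions, not values from any particular study.

```python
import numpy as np

def repair_scl(signal, min_val=0.05, max_val=60.0, max_jump=0.5):
    """Flag implausible samples in a skin-conductance trace and linearly
    interpolate across them - a crude form of post-hoc artefact repair.
    Thresholds (in microsiemens) are illustrative, not canonical."""
    signal = np.asarray(signal, dtype=float)
    bad = (signal < min_val) | (signal > max_val)
    # abrupt sample-to-sample jumps usually indicate movement artefacts
    bad[1:] |= np.abs(np.diff(signal)) > max_jump
    good = np.flatnonzero(~bad)
    repaired = np.interp(np.arange(signal.size), good, signal[good])
    return repaired, bad

# synthetic 10-second trace at 4 Hz with a 'nose-scratch' artefact spliced in
rng = np.random.default_rng(0)
scl = 5.0 + rng.normal(0, 0.02, 40)
scl[20:23] = [9.0, 0.01, 8.5]  # electrode movement: spike, dropout, spike
clean, flagged = repair_scl(scl)
print(f"{flagged.sum()} samples repaired")
```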

Working with physiological measures under real-world conditions, where people can drink coffee and dance around the table if they wish, presents a significant challenge for all the reasons just mentioned. So, why would anyone even want to do it? For the applied researcher, it’s a risk worth taking in order to get a genuine snapshot of human behaviour away from the artificialities of the laboratory. For people like myself, who are interested in physiological computing and using these data as inputs to technological systems, the challenge of accurate data capture in the real world is a fundamental issue. People don’t use technology in a laboratory; they use it out there in offices and cars and cafes and trains – and if we can’t get physiological computing systems to work ‘out there’ then one must question whether this form of technology is really feasible.

Continue reading


Neuroadaptive Technology as Symmetrical Human-Computer Interaction

Back in 2003, Lawrence Hettinger and colleagues penned this paper on the topic of neuroadaptive interface technology. The concept described a closed-loop system in which fluctuations in cognitive activity or emotional state inform the functional characteristics of an interface. The core concept sits comfortably with a host of closed-loop technologies in the domain of physiological computing.

One great insight from this 2003 paper was to describe how neuroadaptive interfaces could enhance communication between person and system. The authors argued that human-computer interaction, as it currently exists, is asymmetrical. The person can access a huge amount of information about the computer system (available RAM, number of active operations), but the system is fundamentally ‘blind’ to the intentions of the user or their level of mental workload, frustration or fatigue. Neuroadaptive interfaces would enable symmetrical forms of human-computer interaction, where technology can respond to implicit changes in the human nervous system and, most significantly, interpret those covert sources of data in order to inform responses at the interface.

Allowing humans to communicate implicitly with machines in this way could enormously increase the efficiency of human-computer interaction with respect to ‘bits per second’. The keyboard, mouse and touchscreen remain the dominant modes of input control by which we translate thoughts into action in the digital realm. We communicate with computers via volitional acts of explicit perceptual-motor control – and the same asymmetrical/explicit model of HCI holds true for naturalistic modes of input control, such as speech and gestures. The concept of a symmetrical HCI based on implicit signals that are generated spontaneously and automatically by the user represents a significant shift from conventional modes of input control.
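As a toy illustration of the shift, the loop below closes in Python: a simulated real-time workload estimate stands in for the covert signal, and the interface adapts without any explicit command from the user. The function names and thresholds are hypothetical, a sketch of the concept rather than any published system.

```python
import random

def estimate_workload():
    """Stand-in for a real-time classifier output (e.g. an EEG-derived
    workload index scaled 0-1), simulated here as a random walk."""
    estimate_workload.level = min(1.0, max(0.0,
        estimate_workload.level + random.uniform(-0.05, 0.1)))
    return estimate_workload.level

estimate_workload.level = 0.3

def adapt_interface(workload):
    """The symmetrical half of the loop: the system responds to covert
    user state rather than waiting for explicit input."""
    if workload > 0.8:
        return "suppress notifications, simplify display"
    if workload < 0.2:
        return "restore full detail, offer secondary tasks"
    return "no change"

for step in range(10):
    level = estimate_workload()
    print(f"step {step}: workload={level:.2f} -> {adapt_interface(level)}")
```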

This recent paper published in PNAS by Thorsten Zander and colleagues provides a demonstration of a symmetrical, neuroadaptive interface in action.

Continue reading


Funded PhD studentship on Physiological Computing and VR

The School of Natural Sciences and Psychology, in partnership with the Department of Computer Science and the General Engineering Research Institute, is working on adaptive technologies in the area of physiological computing. Applications are invited for a three-year full studentship in this field of research, co-funded by Emteq Ltd (emteq.net). The studentship includes tuition fees (currently £4,100 per annum) plus a tax-free maintenance stipend (currently £14,296 per annum); applicants must be UK/EU nationals. The programme of research is concerned with automatic recognition of emotional states based on measurements of facial electromyography (fEMG) and autonomic activity. The ability of these measures to differentiate positive and negative emotional states will be explored by developing mood induction protocols in virtual reality (VR). The successful applicant will conduct research into the development of adaptive/affective VR scenarios designed to maximise the effectiveness of mood induction.
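For readers curious about the measurement side: fEMG approaches to valence typically rest on the finding that corrugator (frown) activity tracks negative affect while zygomaticus (smile) activity tracks positive affect. The sketch below is a generic illustration of that logic, not code from the studentship project.

```python
import numpy as np

def emg_envelope(raw, fs, win_s=0.1):
    """Rectify a raw fEMG channel and smooth it with a moving average."""
    rectified = np.abs(raw - raw.mean())
    window = max(1, int(win_s * fs))
    return np.convolve(rectified, np.ones(window) / window, mode="same")

def valence_score(zygomaticus, corrugator, fs, baseline_s=5.0):
    """Standardise each channel against its own baseline period, then
    subtract frown-muscle change from smile-muscle change during the
    remainder of the epoch. A positive score suggests positive affect."""
    n_base = int(baseline_s * fs)
    def task_change(raw):
        env = emg_envelope(raw, fs)
        base, task = env[:n_base], env[n_base:]
        return (task.mean() - base.mean()) / base.std()
    return task_change(zygomaticus) - task_change(corrugator)

# synthetic example: smile-muscle activity rises during the task period
fs = 1000
rng = np.random.default_rng(1)
zyg = rng.normal(0, 1, 10 * fs)
zyg[5 * fs:] *= 3.0
cor = rng.normal(0, 1, 10 * fs)
print(valence_score(zyg, cor, fs))  # > 0, i.e. classified as positive
```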

For full details, click this link

Closing Date for applications: Friday 3rd March 2017


Neuroadaptive Technology Conference 2017


The first Neuroadaptive Technology Conference will take place in Berlin on the 19th-21st July 2017. Details will appear at the conference website, where authors are invited to submit abstracts by the 13th March 2017.


IEEE Computer Special Issue on Physiological Computing


The October 2015 edition of IEEE Computer magazine is devoted to the topic of Physiological Computing. Giulio Jacucci, Erin Solovey and I acted as co-editors, and the introduction for the magazine is available here.

The papers included in the special issue cover a range of topics, including: measurement of stress in VR, combining pupillometry with EEG to detect changes in operator workload, and using mobile neuroimaging to create attention-aware technologies.

There is also a podcast associated with the special issue, featuring the guest editors in conversation with Robert Jacob from Tufts University on current topics and future directions in Physiological Computing – you can hear it here.


Special Issue of Interacting with Computers


I am one of the co-editors of a special issue of Interacting with Computers, which is now available online here. The title of the special issue is Physiological Computing for Intelligent Adaptation; it contains five full research papers covering a range of topics, such as the use of VR for stress reduction, mental workload monitoring and a comparison of EEG headsets.


Neurofeedback and the Attentive Brain


The act of paying attention or sustaining concentration is a good example of everyday cognition.  We all know the difference between an attentive state of being, when we are utterly focused and seem to absorb every ‘bit’ of information, and the diffuse experience of mind-wandering, where consciousness flits from one random topic to the next.  Understanding this distinction is easy, but the act of regulating the focus of attention can be a real challenge, especially if you didn’t get enough sleep or you’re not particularly interested in the task at hand.  Ironically, if you are totally immersed in a task, attention is absorbed to the extent that you don’t notice your clarity of focus.  At the other extreme, if you begin to day-dream, you are very unlikely to register any awareness of your inattentive state.

The capacity to self-regulate attentional focus is an important skill for many people, from the executives who sit in long meetings where important decisions are made to air traffic controllers, pilots, truck drivers and other professionals for whom the ability to concentrate has real consequences for the safety of themselves and others.

Technology can play a role in developing the capacity to regulate attentional focus.  The original biocybernetic loop developed at NASA was an example of how to incorporate a neurofeedback mechanism into the cockpit in order to ensure a level of awareness that was conducive to safe performance.  There are two components within this type of system: real-time analysis of brain activity as a proxy for attention, and translation of these data into ‘live’ feedback to the user.  The availability of explicit, real-time feedback on attentional state acts as an error signal to indicate the loss of concentration.
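The first of those components can be illustrated with the engagement index used in the original NASA work, a ratio of EEG band powers of the form beta / (alpha + theta). In the sketch below, the band edges are conventional values and the feedback threshold is an arbitrary assumption; neither is taken from the articles discussed here.

```python
import numpy as np
from scipy.signal import welch

def engagement_index(eeg, fs):
    """Compute beta / (alpha + theta) from spectral band power, the
    engagement index of the NASA biocybernetic loop. Band edges are
    conventional: theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    def band_power(lo, hi):
        return psd[(freqs >= lo) & (freqs < hi)].sum()
    return band_power(13, 30) / (band_power(8, 13) + band_power(4, 8))

def feedback_signal(index, threshold=0.7):
    """Translate the index into an explicit error signal for the user;
    the threshold here is arbitrary and would need calibration."""
    return "attention OK" if index >= threshold else "concentration waning"

# example on four seconds of synthetic EEG sampled at 256 Hz
fs = 256
eeg = np.random.default_rng(2).normal(0, 1, 4 * fs)
print(feedback_signal(engagement_index(eeg, fs)))
```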

This article will tell a tale of two cultures: an academic paper that updates biocybernetic control of attention via real-time fMRI, and a Kickstarter project where the loop is encapsulated within a wearable device.

Continue reading


We Need To Talk About Clippy

Everyone who used MS Office between 1997 and 2003 remembers Clippy.  He was a help avatar designed to interact with the user in a way that was both personable and predictive: a friendly sales assistant combined with a butler who anticipated all your needs.  At least, that was the idea.  In reality, Clippy fell well short of those expectations; he was probably the most loathed feature of those particular versions of Office, and even featured in this Time Magazine list of the world’s worst inventions, a list that also includes Agent Orange and the Segway.

In an ideal world, Clippy would have responded to user behaviour in ways that were intuitive, timely and helpful.  In reality, his functionality was limited, his appearance often intrusive and his intuition way off.  Clippy irritated users so completely that his legacy lives on more than ten years later.  If you describe the concept of an intelligent adaptive interface to most people, half of them will recall the dreadful experience of Clippy and the rest will probably be thinking about HAL from 2001: A Space Odyssey.  With those kinds of role models, it’s not difficult to understand why users are in no great hurry to embrace intelligent adaptation at the interface.

In the years since Clippy passed, the debate around machine intelligence has placed greater emphasis on the improvisational spark that is fundamental to displays of human intellect.  This recent article in MIT Technology Review makes the point that a “conversation” with Eugene Goostman (the chatbot that won a Turing Test competition at Bletchley Park in 2012) lacks the natural “back and forth” of human-human communication.  Modern expectations of machine intelligence go beyond a simple imitation game within highly structured rules; users are looking for a level of spontaneity and nuance that resonates with their human sense of what other people are.

But one of the biggest problems with Clippy was not simply intrusiveness but the fact that his repertoire of responses was very constrained: he could ask if you were writing a letter (remember those?) and precious little else.

Continue reading


Can Physiological Computing Create Smart Technology?


The phrase “smart technology” has been around for a long time.  We have smart phones and smart televisions with functional capability that is massively enhanced by internet connectivity.  We also talk about smart homes that scale up into smart cities.  This hybrid between technology and the built environment promotes connectivity but with an additional twist – smart spaces monitor activity within their confines for the purposes of intelligent adaptation: to switch off lighting and heating if a space is uninhabited, to direct music from room to room as the inhabitant wanders through the house.
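Stripped to essentials, that kind of adaptation reduces to occupancy-driven rules; the toy sketch below uses invented room names and deliberately naive rules purely to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class Room:
    name: str
    occupied: bool
    lights_on: bool = False
    heating_on: bool = False
    music_playing: bool = False

def adapt(rooms):
    """Occupancy-driven rules: power down empty spaces and let the
    music follow the inhabitant from room to room."""
    for room in rooms:
        room.lights_on = room.occupied
        room.heating_on = room.occupied
        room.music_playing = room.occupied

home = [Room("kitchen", occupied=False), Room("study", occupied=True)]
adapt(home)
for room in home:
    print(room)
```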

If smart technology is equated with enhanced connectivity and functionality, do those things translate into an increase of machine intelligence?  In his 2007 book ‘The Design Of Future Things‘, Donald Norman defined the ‘smartness’ of technology with respect to the way in which it interacted with the human user.  Inspired by J.C.R. Licklider’s (1960) definition of man-computer symbiosis, he claimed that smart technology was characterised by a harmonious partnership between person and machine.  Hence, the ‘smartness’ of technology is defined by the way in which it responds to the user and vice versa.

One prerequisite for a cooperative and compatible relationship between person and machine is to enhance the capacity of technology to monitor user behaviour.  Like any good butler, the machine needs to increase its awareness and understanding of user behaviour and user needs.  The knowledge gained via this process can subsequently be deployed to create intelligent forms of software adaptation, i.e. machine-initiated responses that are both timely and intuitive from a human perspective.  This upgraded form of human-computer interaction is attractive to technology providers and their customers, but is it realistic and achievable, and what practical obstacles must be overcome?

Continue reading


What’s The Deal With Brain-to-Brain Interfaces?


When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was – don’t we already have those?  Didn’t we use to call them people?  But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym, ‘B2B’.

For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies.  Input is represented by volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI.  This signal is converted into an input signal for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver.’

TMS works by discharging an electrical current in brief pulses via a stimulating coil.  These pulses create a magnetic field that induces an electrical current in the surface of the cortex, one sufficiently strong to cause neuronal depolarisation.  Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour.  You can find an introduction to TMS here (it’s an old pdf but freely available).
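Putting the two stages together, the whole pipeline can be caricatured in a few lines: an ‘active’ BCI classifier on the sender’s side gates a TMS trigger on the receiver’s side. Both stages below are simulated stand-ins; the real systems involve EEG hardware, network transport and a robotised coil.

```python
import random

def classify_intention(eeg_epoch):
    """Stand-in for the sender's active BCI: returns True when an epoch
    is classified as a volitional command (here, a naive threshold on
    the mean amplitude of simulated data)."""
    return sum(eeg_epoch) / len(eeg_epoch) > 0.5

def deliver_tms_pulse():
    """Stand-in for the receiver's robotised TMS: one pulse over a
    strategic cortical site, e.g. motor cortex to evoke a movement."""
    print("TMS pulse delivered to receiver")

# brain-to-brain relay: sender intention -> (network) -> receiver stimulation
random.seed(3)
for trial in range(5):
    epoch = [random.uniform(0.3, 0.8) for _ in range(250)]
    if classify_intention(epoch):
        deliver_tms_pulse()
```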

A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.

Continue reading
