Ambulatory Psychophysiology and Temporal Perception

I’ve been working as a ‘psychophysiologist for hire’ on a project with my LJMU psychology colleague Ruth Ogden.  We just published our first paper from this project, together with our collaborators Chelsea Dobbins (University of Queensland), Jason McIntyre (also LJMU Psychology) and Kate Slade (University of Lancaster).  What with one thing and another, the paper was in development for a long time, mostly because of the volume of data, but partly because the data analyses were quite labour-intensive.

Ruth originally initiated the project and got the funding as part of her ongoing research into time perception.  She’d already collected some laboratory data that linked activation of the sympathetic nervous system with a subjective perception that time was passing more quickly than usual.  There was already a precedent in the literature for increased body temperature distorting time perception and making it seem as though time was passing quickly.  There is also an influential paper that links autonomic activation regulated by the anterior insular cortex with distortions in temporal perception.

The motivation for the current study was to test whether increased sympathetic activation had any influence on temporal perception during everyday life.  At this point, enter the ‘psychophysiologist for hire’ with a bunch of wearable sensors to measure electrocardiogram (ECG) and electrodermal activity (EDA).
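To give a flavour of what that labour-intensive analysis involves, here is a minimal sketch (Python/NumPy) of the kind of feature extraction you might run on ambulatory ECG and EDA before relating them to momentary reports of time passing.  The window length, variable names and choice of features are my own illustrative assumptions, not the pipeline used in the paper.

```python
import numpy as np

def pre_probe_features(rpeak_times_s, scr_times_s, scr_amps_uS,
                       probe_time_s, window_s=600):
    """Summarise cardiac and electrodermal activity in the window preceding
    one experience-sampling probe about the passage of time.

    rpeak_times_s : timestamps (s) of detected ECG R-peaks
    scr_times_s   : timestamps (s) of detected skin conductance responses
    scr_amps_uS   : amplitudes (microsiemens) of those responses
    probe_time_s  : time (s) of the participant's time-perception report
    window_s      : pre-probe window length (600 s is an arbitrary choice)
    """
    rpeaks = np.asarray(rpeak_times_s, dtype=float)
    scr_t = np.asarray(scr_times_s, dtype=float)
    scr_a = np.asarray(scr_amps_uS, dtype=float)
    start = probe_time_s - window_s

    # Cardiac activity: mean heart rate from inter-beat intervals in the window.
    ibi = np.diff(rpeaks[(rpeaks >= start) & (rpeaks < probe_time_s)])
    mean_hr_bpm = 60.0 / ibi.mean() if ibi.size else np.nan

    # Electrodermal activity: SCR frequency and mean amplitude in the window.
    in_win = (scr_t >= start) & (scr_t < probe_time_s)
    scr_per_min = in_win.sum() / (window_s / 60.0)
    mean_scr_amp = scr_a[in_win].mean() if in_win.any() else 0.0

    return {"mean_hr_bpm": mean_hr_bpm,
            "scr_per_min": scr_per_min,
            "mean_scr_amp_uS": mean_scr_amp}
```

In practice, each set of features would be paired with the time-perception rating given at that probe and then analysed across probes and participants.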

 


Mental Workload, Attention and Limits on Human Cognition

 

I recently co-authored this paper on mental workload with colleagues at ISAE-SUPAERO in Toulouse.  Frederic Dehais invited me to contribute to a paper that he had under development, which was based around the diagram you can see above this post.

I was very happy to be involved and to have an opportunity to mull over the topic of mental workload and its measurement, which has long been an equal source of interest and frustration.  Back in the 1990s sometime, I remember a conference presentation where the speaker opened with a spiel that went something like this – ‘when I told my bosses I was doing a study on mental workload, they said mental workload?  Didn’t we solve that problem last year?’  Well, nobody had solved that problem that year or any year since, and mental workload remains a significant topic in human factors psychology.

The development of psychological concepts like mental workload traditionally proceeds along two distinct strands: theory and measurement (or testing).  This twofold approach was certainly true of the early days of mental workload in the late 1970s and early 1980s, when resource models of human information processing were rapidly evolving and informing the development of multidimensional workload measures drawn from subjective self-report, performance and psycho/neuro-physiology.  But as time passed, mental workload research developed a definite bias towards measurement at the expense of theory.  This shift is not that surprising given the applied nature of mental workload research, but when I read this state-of-the-art review of mental workload published in Ergonomics five years ago, I couldn’t help noticing how little had changed on the theoretical side.  The notion of finite capacity limitations on cognitive performance still pervades the whole field, but deeper questions about these resource limits (e.g., what are they?  What mechanisms are involved?) are rarely addressed.  This is a problem, especially for applied work in human factors, because it becomes difficult to draw inferences from our measures and make solid predictions about performance impairment that go beyond the obvious.


How do Computer Games Distract People from Pain?

 

Medical professionals know that distraction is an effective way to draw a patient’s attention away from a painful procedure, especially when the patient is a child.  As a result, there is a lot of work devoted to understanding how technology can distract from pain, particularly using VR in the clinic.  The basic idea here is that VR and related technologies have an immersive quality, and it is this immersive quality that enables distraction from pain.

When you start to dig into the semantics of immersive technologies, it’s clear that the word is being used in slightly different ways.  For VR research, immersion is about creating a convincing illusion of place and an equally convincing version of the body to move through this virtual space.  With respect to gaming research, immersion is a graded state of attention experienced by the player of the game.  Some games can be played while the player conducts a conversation with someone else; others make more strenuous demands and require total concentration, evoking grunts or monosyllables in response to any unwelcome attempts at conversation – and a small number of games occupy attention so completely that any attempt to converse will not even be heard by the player.

Moving away from technology, there’s also a load of work in the field of pain research on the relationship between selective attention and pain.  According to this perspective, painful sensations call attention to themselves at source, whatever that is, whether a hand placed unthinkingly on a hot oven or a foot pierced by a nail.  This cry for attention interrupts all other thought processes if the pain is extreme, and so it should from an evolutionary perspective.  But the evidence suggests that awareness of painful sensations can be reduced (and tolerance for pain enhanced) by having participants perform cognitive tasks that are very demanding, such as memorising material or doing mental arithmetic.  High levels of concentration on a cognitive task make it harder for painful sensations to call attention to themselves.

So, we see an obvious point of convergence between gaming research and pain research, namely that painful sensations require attention, which is limited and highly selective; hence we can ‘dampen’ attention to pain by providing the person with an activity that fully occupies their attentional capacity.

We recently published an experimental paper on the relationship between immersion during gaming and the experience of pain in the International Journal of Human-Computer Studies.  The infographic at the top of this post gives a brief summary of the work and the four studies included in the paper.

The work was motivated by a desire to understand the influence of two contributions to immersive experiences during games: hardware quality and cognitive demands.  Playing a game in VR or on a huge 4K TV screen with surround sound is great of course, but are those kinds of high-quality ‘immersive’ displays necessary for distraction from pain?  On the other side of this coin, we have the level of cognitive engagement required to interact with the technology.  Engagement can be described as the level of effortful striving required to fulfil the goals of the game.  This dimension captures the level of mental and perceptual demands made on the person by the game.  In order for a game (or any kind of task) to attract selective attention, it is important for the player to engage with the mechanics and the goals of the game.

In the paper, we conducted four studies to understand the influence of hardware and cognition on pain tolerance during game play.  We started from the position that the highest pain tolerance would be observed when the display was immersive and cognitive engagement was high.
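As a toy illustration of that prediction (and emphatically not the analysis reported in the paper), a 2 × 2 factorial comparison of pain tolerance might be set up as follows in Python; the data, cell means and the cold-pressor framing are all invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_per_cell = 20  # hypothetical sample size

# Hypothetical pain-tolerance data (seconds in a cold-pressor bath), crossed by
# display type (monitor vs. VR) and cognitive engagement (low vs. high demand).
cells = [("monitor", "low", 60), ("monitor", "high", 75),
         ("vr", "low", 65), ("vr", "high", 90)]  # invented cell means
rows = []
for display, engagement, mean_s in cells:
    for tol in rng.normal(mean_s, 15, n_per_cell):
        rows.append({"display": display, "engagement": engagement,
                     "tolerance_s": tol})
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of display and engagement, plus their interaction.
model = smf.ols("tolerance_s ~ C(display) * C(engagement)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Cell means: the prediction above corresponds to the VR / high-engagement cell
# showing the longest mean tolerance.
print(df.groupby(["display", "engagement"])["tolerance_s"].mean())
```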


Intelligent Wearables

Accuracy is fundamental to the process of scientific measurement: we expect our gizmos and sensors to deliver data that are both robust and precise. If accurate data are available, reliable inferences can be made about whatever you happen to be measuring, and these inferences inform understanding and prediction of future events. But the absence of accuracy is disastrous; if we cannot trust the data, then the rug is pulled out from under the scientific method.

Having worked as a psychophysiologist for longer than I care to remember, I’m acutely aware of this particular house of cards. Even if your ECG or SCL sensor is working perfectly, there are always artefacts that can affect the data in a profound way: this participant had a double espresso before they came to the lab, another is persistently scratching their nose. Psychophysiologists have to pay attention to data quality because the act of psychophysiological inference is far from straightforward*. In a laboratory where conditions are carefully controlled, these unwelcome interventions from the real world are handled by a twofold strategy – first of all, participants are asked to sit still and refrain from excessive caffeine consumption etc., and if that doesn’t work, we can remove the artefacts from the data record by employing various forms of post-hoc analyses.
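As a flavour of what those post-hoc analyses can look like, here is a deliberately simple, NumPy-only sketch that flags implausible jumps in a skin conductance trace and interpolates across them; the thresholds, function name and simulated trace are illustrative guesses rather than recommended values.

```python
import numpy as np

def flag_scl_artifacts(scl_uS, fs_hz, max_slope_uS_per_s=5.0,
                       valid_range_uS=(0.05, 60.0)):
    """Return a boolean mask marking suspect samples in a skin conductance trace.

    scl_uS             : skin conductance level in microsiemens
    fs_hz              : sampling rate in Hz
    max_slope_uS_per_s : jumps steeper than this are treated as movement artefacts
    valid_range_uS     : values outside this range are treated as sensor faults
    """
    scl = np.asarray(scl_uS, dtype=float)

    out_of_range = (scl < valid_range_uS[0]) | (scl > valid_range_uS[1])

    slope = np.abs(np.diff(scl, prepend=scl[0])) * fs_hz   # uS per second
    too_steep = slope > max_slope_uS_per_s

    return out_of_range | too_steep

# Example: flag artefacts in a fake 1-minute trace, then interpolate across them.
fs = 32.0
scl = np.random.default_rng(1).normal(4.0, 0.2, int(fs * 60))
scl[500] = 25.0                                            # simulated movement spike
bad = flag_scl_artifacts(scl, fs)
scl[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), scl[~bad])
```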

Working with physiological measures under real-world conditions, where people can drink coffee and dance around the table if they wish, presents a significant challenge for all the reasons just mentioned. So, why would anyone even want to do it? For the applied researcher, it’s a risk worth taking in order to get a genuine snapshot of human behaviour away from the artificialities of the laboratory. For people like myself, who are interested in physiological computing and in using these data as inputs to technological systems, the challenge of accurate data capture in the real world is a fundamental issue. People don’t use technology in a laboratory; they use it out there in offices and cars and cafes and trains – and if we can’t get physiological computing systems to work ‘out there’, then one must question whether this form of technology is really feasible.


Neurofeedback and the Attentive Brain


The act of paying attention or sustaining concentration is a good example of everyday cognition.  We all know the difference between an attentive state of being, when we are utterly focused and seem to absorb every ‘bit’ of information, and the diffuse experience of mind-wandering, where consciousness flits from one random topic to the next.  Understanding this distinction is easy, but the act of regulating the focus of attention can be a real challenge, especially if you didn’t get enough sleep or you’re not particularly interested in the task at hand.  Ironically, if you are totally immersed in a task, attention is absorbed to the extent that you don’t notice your clarity of focus.  At the other extreme, if you begin to day-dream, you are very unlikely to register any awareness of your inattentive state.

The capacity to self-regulate attentional focus is an important skill for many people, from the executives who sit in long meetings where important decisions are made to the air traffic controllers, pilots, truck drivers and other professionals for whom the ability to concentrate has real consequences for their own safety and the safety of others.

Technology can play a role in developing the capacity to regulate attentional focus.  The original biocybernetic loop developed at NASA was an example of how to incorporate a neurofeedback mechanism into the cockpit in order to ensure a level of awareness that was conducive to safe performance.  There are two components within this type of system: real-time analysis of brain activity as a proxy of attention, and translation of these data into ‘live’ feedback to the user.  The availability of explicit, real-time feedback on attentional state acts as an error signal to indicate the loss of concentration.
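As a rough, self-contained illustration of those two components, the Python sketch below computes an EEG engagement index of the kind used in the NASA work (beta power divided by the sum of alpha and theta power) over consecutive windows and maps it onto a crude two-level feedback message.  The band edges, window length, threshold and simulated signal are placeholder assumptions; a real system would need artefact handling and per-user calibration.

```python
import numpy as np
from scipy.signal import welch

def engagement_index(eeg_window, fs_hz):
    """Beta / (alpha + theta) power ratio for one window of single-channel EEG."""
    nperseg = min(len(eeg_window), int(fs_hz * 2))
    freqs, psd = welch(eeg_window, fs=fs_hz, nperseg=nperseg)

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])

    theta = band_power(4, 8)
    alpha = band_power(8, 13)
    beta = band_power(13, 30)
    return beta / (alpha + theta)

def feedback(index, threshold=0.5):
    """Map the index onto a crude two-level feedback signal for the user."""
    return "attentive" if index >= threshold else "attention drifting"

# Example loop over consecutive 2-second windows of (simulated) EEG.
fs = 256
eeg = np.random.default_rng(2).normal(0.0, 1.0, fs * 10)   # stand-in for real data
for start in range(0, len(eeg) - fs * 2 + 1, fs * 2):
    idx = engagement_index(eeg[start:start + fs * 2], fs)
    print(f"{start / fs:4.0f} s  index={idx:.2f}  -> {feedback(idx)}")
```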

This article will tell a tale of two cultures: an academic paper that updates biocybernetic control of attention via real-time fMRI, and a Kickstarter project where the loop is encapsulated within a wearable device.


Can Physiological Computing Create Smart Technology?


The phrase “smart technology” has been around for a long time.  We have smart phones and smart televisions with functional capability that is massively enhanced by internet connectivity.  We also talk about smart homes that scale up into smart cities.  This hybrid between technology and the built environment promotes connectivity but with an additional twist – smart spaces monitor activity within their confines for the purposes of intelligent adaptation: to switch off lighting and heating if a space is uninhabited, to direct music from room to room as the inhabitant wanders through the house.

If smart technology is equated with enhanced connectivity and functionality, do those things translate into an increase in machine intelligence?  In his 2007 book ‘The Design Of Future Things‘, Donald Norman defined the ‘smartness’ of technology with respect to the way in which it interacted with the human user.  Inspired by J.C.R. Licklider’s (1960) definition of man-computer symbiosis, he claimed that smart technology was characterised by a harmonious partnership between person and machine.  Hence, the ‘smartness’ of technology is defined by the way in which it responds to the user, and vice versa.

One prerequisite for a relationship between person and machine that is cooperative and compatible is to enhance the capacity of technology to monitor user behaviour.  Like any good butler, the machine needs to increase its awareness and understanding of user behaviour and user needs.  The knowledge gained via this process can subsequently be deployed to create intelligent forms of software adaptation, i.e. machine-initiated responses that are both timely and intuitive from a human perspective.  This upgraded form of human-computer interaction is attractive to technology providers and their customers, but is it realistic and achievable, and what practical obstacles must be overcome?


What’s The Deal With Brain-to-Brain Interfaces?


When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was – don’t we already have those?  Didn’t we use to call them people?  But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym, ‘B2B’.

For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies.  Input is represented by volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI.  This signal is converted into an input signal for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver.’

TMS works by discharging an electrical current in brief pulses via a stimulating coil.  These pulses create a magnetic field that induces an electrical current in the surface of the cortex that is sufficiently strong to cause neuronal depolarisation.  Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour.  You can find an introduction to TMS here (it’s an old pdf but freely available).
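To make the sender–receiver pipeline concrete, here is a pseudocode-style Python sketch of a single B2B trial.  Every function in it (the EEG decoder, the network transport, the TMS trigger) is a hypothetical placeholder standing in for device-specific software, not an API from the published studies.

```python
import socket
import struct

# --- Sender side: decode a binary intention from the sender's EEG ------------
def decode_intention(eeg_window) -> int:
    """Placeholder: return 1 if the sender produced the target EEG pattern
    (e.g., imagined hand movement), else 0.  A real system would run a
    calibrated classifier over band-power features here."""
    raise NotImplementedError

def send_decision(decision: int, host: str, port: int) -> None:
    """Transmit the one-bit decision to the receiver's machine."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(struct.pack("!B", decision))

# --- Receiver side: turn the incoming bit into a TMS pulse (or not) ----------
def receive_decision(port: int) -> int:
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn:
            return struct.unpack("!B", conn.recv(1))[0]

def trigger_tms_pulse() -> None:
    """Placeholder: fire a single TMS pulse over the pre-selected cortical site.
    In practice this is a call into the stimulator's proprietary control software."""
    raise NotImplementedError

def receiver_loop(port: int = 5005) -> None:
    # If the sender produced the target pattern, the receiver's cortex is
    # stimulated and they experience the evoked sensation or movement.
    if receive_decision(port) == 1:
        trigger_tms_pulse()
```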

A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.


Comfort and Comparative Performance of the Emotiv EPOC


I’ve written a couple of posts about the Emotiv EPOC over the years of doing the blog, from user interface issues in this post to the uncertainties surrounding the device for customers and researchers here.

The good news is that research is starting to emerge in which the EPOC has been systematically compared to other devices, so perhaps some of those uncertainties can be resolved. The first study, by Ekandem et al., was published in the journal Ergonomics in 2012. You can read an abstract here (apologies to those without a university account who can’t get behind the paywall). These authors performed an ergonomic evaluation of both the EPOC and the NeuroSky MindWave. Data were obtained from 11 participants, each of whom wore either a MindWave or an EPOC for 15 minutes on different days. They concluded that there was no clear ‘winner’ from the comparison. The EPOC has 14 sites compared to the single site used by the MindWave, hence it took longer to set up and required more cleaning afterwards (and more consumables). No big surprises there. It follows that signal acquisition was easier with the MindWave, but the authors report that once the EPOC was connected and calibrated, its signal quality was more consistent than the MindWave’s, despite sensor placement for the former being obstructed by hair.


Manifest.AR: Invisible ARtaffects

First of all, apologies for our blog “sabbatical” – the important thing is that we are now back with news of our latest research collaboration involving FACT (Foundation for Art and Creative Technology) and international artists’ collective Manifest.AR.

To quickly recap, our colleagues at FACT were keen to create a new commission tapping into the use of augmented reality technology and incorporating elements of our own work on physiological computing.  Our last post (almost a year ago now, to our shame) described the time we spent with Manifest.AR last summer and our show-and-tell event at FACT.  Fast-forward to the present, and the Manifest.AR piece called Invisible ARtaffects opened last Thursday as part of the Turning FACT Inside Out show.



Troubleshooting and Mind-Reading: Developing EEG-based interaction with commercial systems

With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct research communities at work. The first (and older) community is made up of university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks to embellish interactive experience, with a strong emphasis on commercialisation.

There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.
