Neuroadaptive Technology Conference 2022

The third conference on Neuroadaptive Technology (NAT’23) is scheduled to take place later this year, on October 10th-12th to be precise.  Thanks to the pandemic, it’s been almost three years since our last event in Liverpool, the proceedings of which were published at the end of last year.  Because of the enforced break, my co-organiser Thorsten Zander and I have thought carefully about the format and focus of NAT’23 and how the meeting can advance the field and be as useful as possible to the research community.

The first two meetings took the traditional format of a small academic conference, with a single session of speaker presentations over two-and-a-half days.  We also invited many keynote speakers (relative to the size of the conference) as a strategy to draw together disparate strands of research related to NAT, from Brain-Computer Interfaces to Neuroergonomics, taking in technical issues like EEG Signal Processing and societal implications around Ethics and Privacy. This range of research that pertains to NAT is reflected in our edited collection, which includes contributions to the 2019 conference, abstracts from both 2017 and 2019 meetings, and a couple of review chapters to define NAT and question how the development of this technology can enhance human-computer interaction.  Having constructed this ‘base’ (so to speak), we plan to turn our attention to next steps as we look to the 2023 meeting and beyond.

NAT is now well-defined as a closed-loop neurotechnology that is capable of real-time neurophysiological monitoring and delivering implicit forms of human-computer interaction.  This technology has widespread applications, from safety-critical performance to entertainment and digital health.  This closed-loop neurotechnology requires expertise from various disciplines, such as neuroscience, engineering, mathematics, and computer science.  And that is a fundamental challenge for this conference, because conferences are built around research communities (and vice versa), and those communities are generally defined by convergence with respect to topics, theories, or methods, whereas the multidisciplinary research that is the norm for NAT is characterised by divergent methods applied in the service of a common goal or question.

Therefore, we are pushing for a closer connection between people working on NAT and the development of NAT applications, and researchers in the field of AI.  We believe that AI is fundamental to how NAT will develop and evolve over the next ten to fifteen years – and we’re looking to start that dialogue and convergence with the 2023 NAT Conference.

You can find more details about the conference and how to register here.


Current Research In Neuroadaptive Technology


A collection of chapters and abstracts from both the 2017 and 2019 Neuroadaptive Technology conferences was published last week by Elsevier, edited by Thorsten Zander and myself.  You can find full details about the content of the book directly from the Elsevier website here.

I’ve reproduced the preface from the collection below to give a sense of what the collection is about and how the book and the conferences came to be.

“Back in the late noughties, there was a period of intensified interest in the concept of a brain-computer interface (BCI).  The idea of using real-time measures of brain activity to communicate directly with a computer was nothing new, but BCI research (at the time) was generally constrained to medical applications and clinical groups.  The primary impetus for increased interest in the topic was the idea that people without clinical conditions could utilise BCIs, which stimulated enormous discussion about what kinds of applications were possible.  There was a feeling back then that BCI research, which had been highly niche and specialised, was spilling over into the relative mainstream of human-computer interaction.

Those early conference sessions and workshops were dominated by research on active BCI, where neurophysiological signals represent active intentions and serve as a proxy for an input control device like a mouse or a joystick.  We were in a minority during those early sessions because we were both working on system concepts where physiology was monitored implicitly, and the interface adapted with no requirement for active cognition on the part of the user.  One of us had developed the existing concept of physiological computing into a broad category of technology where systems adapted to implicit changes in neurophysiology and psychophysiology, united by the cybernetic concept of closed-loop control.  The other had extended the concept of a BCI to include an approach called passive BCI, which implicitly monitored the user and allowed the system to develop context awareness.

When we talked, we discovered that we had at least two things in common: we were both interested in closed-loop control using signals from the brain and the body, and we shared a frustration that our work didn’t fit comfortably into existing conference forums.  As we both regularly attended a range of meetings, we could see work on this topic of implicit monitoring being submitted and presented, but presentations often emphasised one specific aspect, such as sensors, signal processing or machine learning, depending on the specific focus of the conference. We both felt that the ‘passive’ approach was important enough to warrant its own scientific meeting where all aspects of the closed-loop design were equally represented, from sensors to signal treatment through to interface design and evaluation of the user experience.



Mental Workload, Attention and Limits on Human Cognition

 

I recently co-authored this paper on mental workload with colleagues at ISAE-SUPAERO in Toulouse.  Frederic Dehais invited me to contribute to a paper that he had under development, which was based around the diagram you can see above this post.

I was very happy to be involved and have an opportunity to mull over the topic of mental workload and its measurement, which has long been an equal source of interest and frustration.  Back in the 1990s sometime, I remember a conference presentation where the speaker opened with a spiel that went something like this – ‘when I told my bosses I was doing a study on mental workload, they said: mental workload?  Didn’t we solve that problem last year?’  Well, nobody had solved that problem that year or any other year since, and mental workload remains a significant topic in human factors psychology.

The development of psychological concepts like mental workload traditionally proceeds along two distinct strands: theory and measurement.  This twofold approach was certainly true of the early days of mental workload in the late 1970s and early 1980s, when resource models of human information processing were rapidly evolving and informing the development of multidimensional workload measures drawn from subjective self-report, performance and psycho/neuro-physiology.  But as time passed, mental workload research developed a definite bias in the direction of measurement at the expense of theory.  This shift is not that surprising given the applied nature of mental workload research, but when I read this state-of-the-art review of mental workload published in Ergonomics five years ago, I couldn’t help noticing how little had changed on the theoretical side.  The notion of finite capacity limitations on cognitive performance still pervades this whole field of activity, but deeper questions about these resource limits (e.g., what are they?  What mechanisms are involved?) are rarely addressed.  This is a problem, especially for applied work in human factors, because it becomes difficult to draw inferences from our measures and make solid predictions about performance impairment that go beyond the obvious.



How do Computer Games Distract People from Pain?

 

Medical professionals know that distraction is an effective way to reduce a patient’s experience of pain during a painful procedure, especially when the patient is a child.  As a result, there is a lot of work devoted to understanding how technology can distract from pain, particularly using VR in the clinic.  The basic idea here is that VR and related technologies have an immersive quality, and it is this immersive quality that enables distraction from pain.

When you start to dig into the semantics of immersive technologies, it’s clear that the word is being used in slightly different ways.  For VR research, immersion is about creating a convincing illusion of place and an equally convincing version of the body to move through this virtual space.  With respect to gaming research, immersion is a graded state of attention experienced by the player of the game.  Some games can be played while the player conducts a conversation with someone else, others make more strenuous demands and require total concentration, evoking grunts or monosyllables in response to any unwelcome attempts at conversation – and a small number of games occupy attention so completely that any attempt to converse will not even be heard by the player.

Moving away from technology, there’s also a load of work in the field of pain research on the relationship between selective attention and pain.  According to this perspective, painful sensations call attention to themselves at their source, whether that is a hand placed unthinkingly on a hot oven or a foot pierced by a nail.  This cry for attention interrupts all other thought processes if the pain is extreme, and so it should from an evolutionary perspective.  But the evidence suggests that awareness of painful sensations can be reduced (and tolerance for pain enhanced) by having participants perform cognitive tasks that are very demanding, such as memorising material or doing mental arithmetic.  High levels of concentration on a cognitive task make it harder for painful sensations to call attention to themselves.

So, we see an obvious point of convergence between games and research into pain, namely that painful sensations require attention, which is limited and highly selective, hence we can ‘dampen’ attention to pain by providing the person with an activity that fully occupies their attentional capacity.

We recently published an experimental paper on the relationship between immersion during gaming and the experience of pain in the International Journal of Human-Computer Studies.  The infographic at the top of this post gives a brief summary of the work and the four studies included in the paper.

The work was motivated by a desire to understand the influence of two contributions to immersive experiences during games: hardware quality and cognitive demands.  Playing a game in VR or on a huge 4K TV screen with surround sound is great of course, but are those kinds of high quality ‘immersive’ displays necessary for distraction from pain?  On the flip side of this coin, we have the level of cognitive engagement required to interact with the technology.  Engagement can be described as the level of effortful striving required to fulfil the goals of the game.  This dimension captures the level of mental and perceptual demands made on the person by the game.  In order for a game (any kind of task) to attract selective attention, it is important for the player to engage with the mechanics and the goals of the game.

In the paper, we conducted four studies to understand the influence of hardware and cognition on pain tolerance during game play. We started from the position that the highest pain tolerance would be observed when the display was immersive and cognitive engagement was high.



Neuroadaptive Technology Conference 2019

 

The international conference on Neuroadaptive Technology will be held on the 16-18th July 2019 in Liverpool. This will be the second meeting on this topic; the first took place in Berlin two years ago. You’ll find a link at the top of this page for the schedule, registration costs and other details about the meeting.

In this short post, I’d like to give a little background for the meeting and say some things about the goals and scope of the conference. The original idea came from a conversation between myself and Thorsten Zander (my co-organiser) about the absence of any forum dedicated to this type of implicit closed-loop technology. My work on physiological computing systems was always multidisciplinary, encompassing psychological sciences, wearable sensors, signal processing and human-computer interaction. Work in the area was being submitted and published at conferences dedicated to engineering and computer science, but these meetings always emphasised one specific aspect, such as sensors, signal processing or machine learning. I wanted to have a meeting where all aspects of the closed loop were equally represented, from sensors through to interface design. On the other hand, Thorsten had developed the concept of passive brain-computer interfaces, where EEG signals were translated into control at the interface without any intentionality on the part of the user.

We had at least two things in common: we were both interested in closed-loop control using signals from the brain and the body, and we were both frustrated that our work didn’t seem to fit comfortably into existing forums.

Thorsten took the first step and organised a passive BCI meeting at Delmenhorst, just outside Bremen, for two (very hot) days in August 2014. On the last day of that meeting, along with the other attendees, we batted around various names with which to christen this emerging area of work. If memory serves, no one came up with a label that everyone in the room completely endorsed. The term ‘neuroadaptive technology’, which I appropriated from this 2003 paper from Lawrence Hettinger and colleagues, was the one that people were the least unhappy about – and so, when it came time to organise the first conference, that was the name that we ran with.

From the beginning, we decided to make the ‘neuro’ in the title of the conference as broad as possible, encompassing psychophysiological sensors/measures as well as those derived from neurophysiology. At that first conference, we also wanted to draw attention to the breadth of work in this field and so we invited Rob Jacob as a keynote to talk about new modes of human-computer interaction and Pim Haselager to address the ethical implications of the technology, as well as speakers on EEG signal processing. A full list of abstracts and the schedule for that 2017 meeting is available here.

The fundamental thinking behind the neuroadaptive technology conference is that despite the significant range of applications under consideration in this field, which runs from autonomous driving to marketing, researchers share a significant number of interests, such as: sensor design, signal processing methods in the field, machine learning for classification, designing implicit modes of human-computer interaction, and establishing methodology for evaluation – and that’s far from an exhaustive list.

And so, in Liverpool this July, we’ll be doing it all again with a wide range of speakers from around the world. The deadline for abstract submission is 31st March 2019 and we’re in the process of organising keynote speakers and a clear route to publication for the work presented at the conference.

Full details will appear at the link from the top of this page over the next few months.


episode 3 of the mind machine is here

 

In this third episode of the podcast, I talk to Prof. Wendy Rogers from the University of Illinois about her work as Director of the Human Factors and Aging Laboratory.  Our conversation took place in October 2018 and we talk about designing technology to support everyday activities of older adults.  Wendy’s work covers a huge range of topics from measuring cognitive skills across the lifespan to understanding the process of technology adoption and acceptance.  We talk about the relationship between technology and ageing and how older users are currently at the vanguard of emerging systems, from smart homes to social robots.  We discuss whether the process of technology adoption is different for older versus younger users.  We also talk about building social relationships between older users and robots designed to care for them.


episode 2 of the mind machine now available

My conversation with Dr. Alan Pope is now available from the Podcast link at the top of this page.  Alan’s seminal work on the biocybernetic loop was a key inspiration for developing a concept of physiological computing.  He was probably the first person to take measures from the brain and body and use them in real-time to allow the operator to implicitly communicate with technology.  Our conversation takes in the whole of his career, from early work with evoked cortical potentials in clinical psychology to his move to NASA Langley and work in the field of human factors and aviation psychology.


announcing the mind machine podcast

I first got the idea to do a podcast back in the early part of the year.  Like many other academics, I enjoy the informal conversations that often happen over coffee and in the bar during a conference or meeting – and I wanted to capture those sorts of exchanges whilst giving people a chance to talk about their work.  So, I hit upon an interview-style podcast where I’d chat to other people from the worlds of physiological computing, human-computer interaction, human factors psychology and related fields.  My plan is to record most of these conversations “on the road”, so I generally pack the microphone on my travels and hopefully I can grab enough people to put out one per month.  The first one is a conversation between myself and Thorsten Zander and you can find it at the link at the top of this page.


The Log Roll of Intelligent Adaptation

I originally coined the term ‘physiological computing’ to describe a whole class of emerging technologies constructed around closed-loop control.  These technologies collected implicit measures from the brain and body of the user, which informed a process of intelligent adaptation at the user interface.

If you survey research in this field, from mental workload monitoring to applications in affective computing, there’s an overwhelming bias towards the first part of the closed-loop – the business of designing sensors, collecting data and classifying psychological states.  In contrast, you see very little on what happens at the interface once target states have been detected.  The dearth of work on intelligent adaptation is a problem because signal processing protocols and machine learning algorithms are being developed in a vacuum – without any context for usage.  This disconnect both neglects and negates the holistic nature of closed-loop control and the direct link between classification and adaptation.  We can even generate a maxim to describe the relationship between the two:

the number of states recognised by a physiological computing system should be the minimum required to support the range of adaptive options that can be delivered at the interface

This maxim minimises the number of states to enhance classification accuracy, while making an explicit link between the act of measurement at the first part of the loop with the process of adaptation that is the last link in the chain.
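To make the maxim concrete, here is a minimal sketch of one loop around such a system; all names, states and the threshold are hypothetical illustrations rather than any published design.  The point is structural: the set of states the classifier is asked to recognise is derived directly from the adaptive options the interface can deliver, and nothing more.

```python
# Hypothetical sketch: the state set is defined by the adaptive
# options available at the interface, per the maxim above.
ADAPTIVE_OPTIONS = {
    "high_workload": "simplify_display",
    "low_workload": "increase_task_demand",
}

def classify(workload_index: float) -> str:
    """Trivial threshold rule standing in for a real ML pipeline;
    constrained to the minimum state set needed to drive adaptation."""
    return "high_workload" if workload_index > 0.7 else "low_workload"

def adapt(workload_index: float) -> str:
    """One pass around the closed loop: state detection -> adaptation."""
    state = classify(workload_index)
    return ADAPTIVE_OPTIONS[state]

print(adapt(0.9))  # -> simplify_display
```

Because the dictionary of adaptive options is the single source of truth, adding a new adaptation forces a deliberate decision about whether the classifier must now distinguish an extra state – which is exactly the coupling the maxim calls for.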

If this kind of stuff sounds abstract or of limited relevance to the research community, it shouldn’t.  If we look at research into the classic ‘active’ BCI paradigm, there is clear continuity between state classification and corresponding actions at the interface.  This continuity owes its prominence to the fact that the BCI research community is dedicated to enhancing the lives of end users and the utility of the system lies at the core of their research process.  But to be fair, the link between brain activation and input control is direct and easy to conceptualise in the ‘active’ BCI paradigm.  For those systems that work on an implicit basis, detection of the target state is merely the jumping-off point for a complicated process of user interface design.



Intelligent Wearables

Accuracy is fundamental to the process of scientific measurement: we expect our gizmos and sensors to deliver data that is both robust and precise. If accurate data are available, reliable inferences can be made about whatever you happen to be measuring; these inferences inform understanding and prediction of future events. But an absence of accuracy is disastrous: if we cannot trust the data, then the rug is pulled out from under the scientific method.

Having worked as a psychophysiologist for longer than I care to remember, I’m acutely aware of this particular house of cards. Even if your ECG or SCL sensor is working perfectly, there are always artefacts that can affect data in a profound way: this participant had a double-espresso before they came to the lab, another persistently scratches their nose. Psychophysiologists have to pay attention to data quality because the act of psychophysiological inference is far from straightforward*. In a laboratory where conditions are carefully controlled, these unwelcome interventions from the real-world are handled by a double strategy – first of all, participants are asked to sit still and refrain from excessive caffeine consumption etc., and if that doesn’t work, we can remove the artefacts from the data record by employing various forms of post-hoc analyses.
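As a toy illustration of the post-hoc side of that strategy (the threshold and signal values here are invented for the example, not taken from any real recording), one simple artefact-rejection pass flags any sample whose jump from its neighbour is physiologically implausible and interpolates across it:

```python
def reject_artefacts(samples, max_step=0.5):
    """Return a cleaned copy of `samples`.

    Any sample whose change from the previous sample exceeds
    `max_step` (an illustrative plausibility threshold) is treated
    as an artefact -- e.g. a nose-scratch spike in an SCL trace --
    and replaced by linear interpolation between its neighbours.
    """
    cleaned = list(samples)
    for i in range(1, len(cleaned) - 1):
        if abs(cleaned[i] - cleaned[i - 1]) > max_step:
            cleaned[i] = (cleaned[i - 1] + cleaned[i + 1]) / 2
    return cleaned

scl = [2.0, 2.1, 5.9, 2.2, 2.3]   # one implausible jump at index 2
print(reject_artefacts(scl))       # the spike is smoothed away
```

Real pipelines are considerably more sophisticated (median filtering, regression-based correction, visual inspection), but the principle is the same: artefacts are identified against a model of what the physiology could plausibly do, then excised or repaired after the fact.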

Working with physiological measures under real-world conditions, where people can drink coffee and dance around the table if they wish, presents a significant challenge for all the reasons just mentioned. So, why would anyone even want to do it? For the applied researcher, it’s a risk worth taking in order to get a genuine snapshot of human behaviour away from the artificialities of the laboratory. For people like myself, who are interested in physiological computing and using these data as inputs to technological systems, the challenge of accurate data capture in the real world is a fundamental issue. People don’t use technology in a laboratory, they use it out there in offices and cars and cafes and trains – and if we can’t get physiological computing systems to work ‘out there’ then one must question whether this form of technology is really feasible.

