Neuroadaptive Technology Conference 2019

 

The international conference on Neuroadaptive Technology will be held on the 16th-18th July 2019 in Liverpool. This will be the second meeting on this topic; the first took place in Berlin two years ago. You’ll find a link at the top of this page for the schedule, registration costs and other details about the meeting.

In this short post, I’d like to give a little background for the meeting and say some things about the goals and scope of the conference. The original idea came from a conversation between myself and Thorsten Zander (my co-organiser) about the absence of any forum dedicated to this type of implicit closed-loop technology. My work on physiological computing systems was always multidisciplinary, encompassing psychological sciences, wearable sensors, signal processing and human-computer interaction. Work in the area was being submitted and published at conferences dedicated to engineering and computer science, but these meetings always emphasised one specific aspect, such as sensors, signal processing or machine learning. I wanted a meeting where all aspects of the closed loop were equally represented, from sensors through to interface design. Thorsten, on the other hand, had developed the concept of passive brain-computer interfaces, where EEG signals are translated into control at the interface without any intentionality on the part of the user.

We had at least two things in common: we were both interested in closed-loop control using signals from the brain and the body, and we were both frustrated that our work didn’t seem to fit comfortably into existing forums.

Thorsten took the first step and organised a passive BCI meeting at Delmenhorst, just outside Bremen, for two (very hot) days in August 2014. On the last day of that meeting, along with the other attendees, we batted around various names with which to christen this emerging area of work. If memory serves, nobody came up with a label that everyone in the room completely endorsed. The term ‘neuroadaptive technology’, which I appropriated from this 2003 paper by Lawrence Hettinger and colleagues, was the one that people were least unhappy about – and so, when it came time to organise the first conference, that was the name we ran with.

From the beginning, we decided to make the ‘neuro’ in the title of the conference as broad as possible, encompassing psychophysiological sensors/measures as well as those derived from neurophysiology. At that first conference, we also wanted to draw attention to the breadth of work in this field, and so we invited Rob Jacob as a keynote speaker to talk about new modes of human-computer interaction and Pim Haselager to address the ethical implications of the technology, as well as speakers on EEG signal processing. A full list of abstracts and the schedule for that 2017 meeting are available here.

The fundamental thinking behind the neuroadaptive technology conference is that, despite the significant range of applications under consideration in this field, which runs from autonomous driving to marketing, researchers share a significant number of interests, such as: sensor design, signal processing methods in the field, machine learning for classification, designing implicit modes of human-computer interaction, and establishing methodology for evaluation – and that’s far from an exhaustive list.

And so, in Liverpool this July, we’ll be doing it all again with a wide range of speakers from around the world. The deadline for abstract submission is 31st March 2019 and we’re in the process of organising keynote speakers and a clear route to publication for the work presented at the conference.

Full details will appear at the link from the top of this page over the next few months.


episode 3 of the mind machine is here

 

In this third episode of the podcast, I talk to Prof. Wendy Rogers from the University of Illinois about her work as Director of the Human Factors and Aging Laboratory.  Our conversation took place in October 2018 and we talk about designing technology to support everyday activities of older adults.  Wendy’s work covers a huge range of topics from measuring cognitive skills across the lifespan to understanding the process of technology adoption and acceptance.  We talk about the relationship between technology and ageing and how older users are currently at the vanguard of emerging systems, from smart homes to social robots.  We discuss whether the process of technology adoption is different for older versus younger users.  We also talk about building social relationships between older users and robots designed to care for them.


episode 2 of the mind machine now available

My conversation with Dr. Alan Pope is now available from the Podcast link at the top of this page. Alan’s seminal work on the biocybernetic loop was a key inspiration for developing the concept of physiological computing. He was probably the first person to take measures from the brain and body and use them in real time to allow the operator to implicitly communicate with technology. Our conversation takes in the whole of his career, from early work with evoked cortical potentials in clinical psychology to his move to NASA Langley and work in the field of human factors and aviation psychology.


announcing the mind machine podcast

I first got the idea to do a podcast back in the early part of the year. Like many other academics, I enjoy the informal conversations that often happen over coffee and in the bar during a conference or meeting – and I wanted to capture those sorts of exchanges whilst giving people a chance to talk about their work. So, I hit upon an interview-style podcast where I’d chat to other people from the worlds of physiological computing, human-computer interaction, human factors psychology and related fields. My plan is to record most of these conversations “on the road”, so I generally pack the microphone on my travels, and hopefully I can grab enough people to put out one per month. The first one is a conversation between myself and Thorsten Zander and you can find it at the link at the top of this page.


The Log Roll of Intelligent Adaptation

I originally coined the term ‘physiological computing’ to describe a whole class of emerging technologies constructed around closed-loop control.  These technologies collect implicit measures from the brain and body of the user, which inform a process of intelligent adaptation at the user interface.

If you survey research in this field, from mental workload monitoring to applications in affective computing, there’s an overwhelming bias towards the first part of the closed loop – the business of designing sensors, collecting data and classifying psychological states.  In contrast, you see very little on what happens at the interface once target states have been detected.  The dearth of work on intelligent adaptation is a problem because signal processing protocols and machine learning algorithms are being developed in a vacuum – without any context for usage.  This disconnect both neglects and negates the holistic nature of closed-loop control and the direct link between classification and adaptation.  We can even generate a maxim to describe the relationship between the two:

the number of states recognised by a physiological computing system should be the minimum required to support the range of adaptive options that can be delivered at the interface

This maxim minimises the number of states to enhance classification accuracy, while making an explicit link between the act of measurement at the first part of the loop and the process of adaptation that is the last link in the chain.
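To make the maxim concrete, here is a minimal sketch in Python; everything named here (the states, the classifier, the interface object and its methods) is hypothetical and purely for illustration. The classifier distinguishes only those states that have a corresponding adaptive option at the interface:

    # A minimal sketch of the maxim above: the system recognises only the states
    # that map onto a distinct adaptive option at the interface.
    # All names here (UserState, classifier, ui) are hypothetical illustrations.

    from enum import Enum

    class UserState(Enum):
        HIGH_WORKLOAD = "high_workload"
        LOW_WORKLOAD = "low_workload"

    # Each recognised state is paired with exactly one adaptive response.
    ADAPTATIONS = {
        UserState.HIGH_WORKLOAD: lambda ui: ui.simplify_display(),
        UserState.LOW_WORKLOAD: lambda ui: ui.restore_full_display(),
    }

    def closed_loop_step(features, classifier, ui):
        """One pass around the loop: classify the user's state, then adapt."""
        state = classifier.predict(features)  # first half: measurement and classification
        ADAPTATIONS[state](ui)                # second half: intelligent adaptation

The point is simply that adding a third state to the classifier would only be justified if the interface had a third, distinct way of responding to it.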

If this kind of stuff sounds abstract or of limited relevance to the research community, it shouldn’t.  If we look at research into the classic ‘active’ BCI paradigm, there is clear continuity between state classification and corresponding actions at the interface.  This continuity owes its prominence to the fact that the BCI research community is dedicated to enhancing the lives of end users, and the utility of the system lies at the core of their research process.  But to be fair, the link between brain activation and input control is direct and easy to conceptualise in the ‘active’ BCI paradigm.  For those systems that work on an implicit basis, detection of the target state is merely the jumping-off point for a complicated process of user interface design.



Intelligent Wearables

Accuracy is fundamental to the process of scientific measurement: we expect our gizmos and sensors to deliver data that are both robust and precise. If accurate data are available, reliable inferences can be made about whatever you happen to be measuring, and these inferences inform understanding and prediction of future events. But an absence of accuracy is disastrous; if we cannot trust the data, then the rug is pulled out from under the scientific method.

Having worked as a psychophysiologist for longer than I care to remember, I’m acutely aware of this particular house of cards. Even if your ECG or SCL sensor is working perfectly, there are always artefacts that can affect data in a profound way: this participant had a double espresso before they came to the lab, another is persistently scratching their nose. Psychophysiologists have to pay attention to data quality because the act of psychophysiological inference is far from straightforward*. In a laboratory where conditions are carefully controlled, these unwelcome interventions from the real world are handled by a double strategy – first of all, participants are asked to sit still and refrain from excessive caffeine consumption etc., and if that doesn’t work, we can remove the artefacts from the data record by employing various forms of post-hoc analyses.
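As a toy illustration of that second, post-hoc strategy, here is one simple approach in Python: flag implausible samples and interpolate across them. The threshold and the simulated signal are invented for the example; real pipelines use validated artefact-correction methods.

    # Toy post-hoc artefact removal for a skin conductance (SCL) trace.
    # The deviation threshold is arbitrary and only for illustration.

    import numpy as np

    def remove_artefacts(scl, max_dev=2.0):
        """Flag samples that sit implausibly far from the series median
        (e.g. a movement artefact) and linearly interpolate across them."""
        scl = np.asarray(scl, dtype=float)
        bad = np.abs(scl - np.median(scl)) > max_dev
        clean = scl.copy()
        clean[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), scl[~bad])
        return clean

    # Example: a slow tonic drift with a sudden 'scratching the nose' spike.
    t = np.arange(0, 10, 1 / 32)     # 10 seconds sampled at 32 Hz
    signal = 5.0 + 0.1 * t            # plausible tonic SCL drift
    signal[100:103] += 3.0            # simulated movement artefact
    cleaned = remove_artefacts(signal)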

Working with physiological measures under real-world conditions, where people can drink coffee and dance around the table if they wish, presents a significant challenge for all the reasons just mentioned. So, why would anyone even want to do it? For the applied researcher, it’s a risk worth taking in order to get a genuine snapshot of human behaviour away from the artificialities of the laboratory. For people like myself, who are interested in physiological computing and using these data as inputs to technological systems, the challenge of accurate data capture in the real world is a fundamental issue. People don’t use technology in a laboratory; they use it out there in offices and cars and cafes and trains – and if we can’t get physiological computing systems to work ‘out there’ then one must question whether this form of technology is really feasible.



Neuroadaptive Technology as Symmetrical Human-Computer Interaction

Back in 2003, Lawrence Hettinger and colleagues penned this paper on the topic of neuroadaptive interface technology. The concept described a closed-loop system where fluctuations in cognitive activity or emotional state inform the functional characteristics of an interface. The core concept sits comfortably with a host of closed-loop technologies in the domain of physiological computing.

One great insight from this 2003 paper was to describe how neuroadaptive interfaces could enhance communication between person and system. They argued that human-computer interaction currently existed in an asymmetrical form. The person can access a huge amount of information about the computer system (available RAM, number of active operations) but the system is fundamentally ‘blind’ to the intentions of the user or their level of mental workload, frustration or fatigue. Neuroadaptive interfaces would enable symmetrical forms of human-computer interaction where technology can respond to implicit changes in the human nervous system, and most significantly, interpret those covert sources of data in order to inform responses at the interface.

Allowing humans to communicate implicitly with machines in this way could enormously increase the efficiency of human-computer interaction with respect to ‘bits per second’. The keyboard, mouse and touchscreen remain the dominant modes of input control by which we translate thoughts into action in the digital realm. We communicate with computers via volitional acts of explicit perceptual-motor control – and the same asymmetrical/explicit model of HCI holds true for naturalistic modes of input control, such as speech and gestures. The concept of a symmetrical HCI based on implicit signals that are generated spontaneously and automatically by the user represents a significant shift from conventional modes of input control.

This recent paper published in PNAS by Thorsten Zander and colleagues provides a demonstration of a symmetrical, neuroadaptive interface in action.



Funded PhD studentship on Physiological Computing and VR

The School of Natural Sciences and Psychology, in partnership with the Department of Computer Science and the General Engineering Research Institute, is working on adaptive technologies in the area of physiological computing. This studentship is co-funded by Emteq Ltd (emteq.net). Applications are invited for a three-year full studentship in this field of research. The studentship includes tuition fees (currently £4,100 per annum) plus a tax-free maintenance stipend (currently £14,296 per annum). Applicants must be UK/EU nationals. The programme of research is concerned with automatic recognition of emotional states based on measurements of facial electromyography (fEMG) and autonomic activity. The ability of these measures to successfully differentiate positive and negative emotional states will be explored by developing mood induction protocols in virtual reality (VR). Successful applicants will conduct research into the development of adaptive/affective VR scenarios designed to maximise the effectiveness of mood induction.

For full details, click this link

Closing Date for applications: Friday 3rd March 2017


Neuroadaptive Technology Conference 2017


The first Neuroadaptive Technology Conference will take place in Berlin on the 19th-21st July 2017. Details will appear at the conference website, where authors are invited to submit abstracts by the 13th March 2017.


IEEE Computer Special Issue on Physiological Computing


The October 2015 edition of IEEE Computer magazine is devoted to the topic of Physiological Computing.  Giulio Jacucci, myself and Erin Solovey acted as co-editors and the introduction for the magazine is available here.

The papers included in the special issue cover a range of topics, including: measurement of stress in VR, combining pupillometry with EEG to detect changes in operator workload, and using mobile neuroimaging to create attention-aware technologies.

There is also a podcast associated with the SI featuring the guest editors in conversation with Robert Jacob from Tufts University on current topics and future directions in Physiological Computing – you can hear it here.
