The third conference on Neuroadaptive Technology (NAT’23) is scheduled to take place later this year, on October 10th-12th to be precise. Thanks to the pandemic, it’s been almost three years since our last event in Liverpool, the proceedings of which were published at the end of last year. Because of the enforced break, my co-organiser Thorsten Zander and I have thought carefully about the format and focus of NAT’23 and how the meeting can advance the field and be as useful as possible to the research community.
The first two meetings took the traditional format of a small academic conference, with a single session of speaker presentations over two-and-a-half days. We also invited many keynote speakers (relative to the size of the conference) as a strategy to draw together disparate strands of research related to NAT, from Brain-Computer Interfaces to Neuroergonomics, taking in technical issues like EEG Signal Processing and societal implications around Ethics and Privacy. This range of research that pertains to NAT is reflected in our edited collection, which includes contributions to the 2019 conference, abstracts from both the 2017 and 2019 meetings, and a couple of review chapters that define NAT and ask how the development of this technology can enhance human-computer interaction. Having constructed this ‘base’ (so to speak), we plan to turn our attention to next steps as we look to the 2023 meeting and beyond.
NAT is now well-defined as a closed-loop neurotechnology that is capable of real-time neurophysiological monitoring and delivering implicit forms of human-computer interaction. This technology has widespread applications, from safety-critical performance to entertainment and digital health. This closed-loop neurotechnology requires expertise from various disciplines, such as neuroscience, engineering, mathematics, and computer science. And that is a fundamental challenge for this conference, because conferences are built around research communities (and vice versa), and those communities are generally defined by convergence with respect to topics, theories, or methods. The type of multidisciplinary research that is the norm for NAT, by contrast, is characterised by divergent methods being applied in the service of a common goal or question.
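To make the closed-loop idea concrete, here is a minimal, purely hypothetical sketch of a neuroadaptive cycle: a signal is monitored, an implicit user state is inferred from it, and the interface adapts without any active input from the user. None of the names or thresholds come from the conference materials; the "signal" here is simulated rather than a real neurophysiological measurement.

```python
# Hypothetical sketch of a neuroadaptive closed loop: monitor a signal,
# infer an implicit user state, adapt the interface. Purely illustrative.
import random

def read_signal():
    """Stand-in for a real-time neurophysiological measurement (e.g. EEG)."""
    return random.uniform(0.0, 1.0)  # simulated engagement index

def infer_state(signal, threshold=0.5):
    """Classify the implicit user state from the measured signal."""
    return "low_engagement" if signal < threshold else "engaged"

def adapt_interface(state, difficulty):
    """Close the loop: adjust task demand based on the inferred state."""
    if state == "low_engagement":
        return min(difficulty + 1, 10)  # raise demand to re-engage
    return max(difficulty - 1, 1)      # ease off when already engaged

difficulty = 5
for _ in range(3):  # three passes around the closed loop
    state = infer_state(read_signal())
    difficulty = adapt_interface(state, difficulty)
```

The point of the sketch is the structure, not the numbers: sensing, state inference, and adaptation each belong to a different discipline, which is exactly the convergence problem described above.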
Therefore, we are pushing for a closer connection between people working on NAT and the development of NAT applications, and researchers in the field of AI. We believe that AI is fundamental to how NAT will develop and evolve over the next ten to fifteen years – and we’re looking to start that dialogue and convergence with the 2023 NAT Conference.
You can find more details about the conference and how to register here.
A collection of chapters and abstracts from both the 2017 and 2019 Neuroadaptive Technology conferences was published last week by Elsevier, edited by myself and Thorsten Zander. You can find full details about the content of the book directly from the Elsevier website here.
I’ve reproduced the preface from the collection below to give a sense of what the collection is about and how the book and the conferences came to be.
“Back in the late noughties, there was a period of intensified interest in the concept of a brain-computer interface (BCI). The idea of using real-time measures of brain activity to communicate directly with a computer was nothing new, but BCI research (at the time) was generally constrained to medical applications and clinical groups. The primary impetus for increased interest in the topic was the idea that people without clinical conditions could utilise BCIs, which stimulated enormous discussion about what kinds of applications were possible. There was a feeling back then that BCI research, which had been highly niche and specialised, was spilling over into the relative mainstream of human-computer interaction.
Those early conference sessions and workshops were dominated by research on active BCI, where neurophysiological signals represent active intentions and serve as a proxy for an input control device like a mouse or a joystick. We were in a minority during those early sessions because we were both working on system concepts where physiology was monitored implicitly, and the interface adapted with no requirement for active cognition on the part of the user. One of us had developed the existing concept of physiological computing into a broad category of technology where systems adapted to implicit changes in neurophysiology and psychophysiology, united by the cybernetic concept of closed-loop control. The other had extended the concept of a BCI to include an approach called passive BCI, which implicitly monitored the user and allowed the system to develop context awareness.
When we talked, we discovered that we had at least two things in common: we were both interested in closed-loop control using signals from the brain and the body, and we shared a frustration that our work didn’t fit comfortably into existing conference forums. As we both regularly attended a range of meetings, we could see work on this topic of implicit monitoring being submitted and presented, but presentations often emphasised one specific aspect, such as sensors, signal processing, or machine learning, depending on the focus of the conference. We both felt that the ‘passive’ approach was important enough to warrant its own scientific meeting where all aspects of the closed-loop design were equally represented, from sensors to signal treatment through to interface design and evaluation of the user experience.
In this third episode of the podcast, I talk to Prof. Wendy Rogers from the University of Illinois about her work as Director of the Human Factors and Aging Laboratory. Our conversation took place in October 2018 and we talk about designing technology to support everyday activities of older adults. Wendy’s work covers a huge range of topics from measuring cognitive skills across the lifespan to understanding the process of technology adoption and acceptance. We talk about the relationship between technology and ageing and how older users are currently at the vanguard of emerging systems, from smart homes to social robots. We discuss whether the process of technology adoption is different for older versus younger users. We also talk about building social relationships between older users and robots designed to care for them.
My conversation with Dr. Alan Pope is now available from the Podcast link at the top of this page. Alan’s seminal work on the biocybernetic loop was a key inspiration for developing a concept of physiological computing. He was probably the first person to take measures from the brain and body and use them in real-time to allow the operator to implicitly communicate with technology. Our conversation takes in the whole of his career, from early work with evoked cortical potentials in clinical psychology to his move to NASA Langley and work in the field of human factors and aviation psychology.
I first got the idea to do a podcast back in the early part of the year. Like many other academics, I enjoy the informal conversations that often happen over coffee and in the bar during a conference or meeting – and I wanted to capture those sorts of exchanges whilst giving people a chance to talk about their work. So, I hit upon an interview style of podcast where I’d chat to other people from the worlds of physiological computing, human-computer interaction, human factors psychology and related fields. My plan is to record most of these conversations “on the road”, so I generally pack the microphone on my travels and hopefully I can grab enough people to put out one per month. The first one is a conversation between myself and Thorsten Zander and you can find it at the link at the top of this page.
The School of Natural Sciences and Psychology, in partnership with the Department of Computer Science and General Engineering Research Institute, is working on adaptive technologies in the area of physiological computing. This studentship is co-funded by Emteq Ltd: emteq.net. Applications are invited for a three-year full studentship in this field of research. The studentship includes tuition fees (currently £4,100 per annum) plus a tax-free maintenance stipend (currently £14,296 per annum). Applicants must be UK/EU nationals. The programme of research is concerned with automatic recognition of emotional states based on measurements of facial electromyography (fEMG) and autonomic activity. The ability of these measures to successfully differentiate positive and negative emotional states will be explored by developing mood induction protocols in virtual reality (VR). Successful applicants will conduct research into the development of adaptive/affective VR scenarios designed to maximise the effectiveness of mood induction.
For full details, click this link
Closing Date for applications: Friday 3rd March 2017
The first Neuroadaptive Technology Conference will take place in Berlin on the 19th-21st July 2017. Details will appear at the conference website, where authors are invited to submit abstracts by the 13th March 2017.