In this third episode of the podcast, I talk to Prof. Wendy Rogers from the University of Illinois about her work as Director of the Human Factors and Aging Laboratory. Our conversation took place in October 2018 and we talk about designing technology to support everyday activities of older adults. Wendy’s work covers a huge range of topics from measuring cognitive skills across the lifespan to understanding the process of technology adoption and acceptance. We talk about the relationship between technology and ageing and how older users are currently at the vanguard of emerging systems, from smart homes to social robots. We discuss whether the process of technology adoption is different for older versus younger users. We also talk about building social relationships between older users and robots designed to care for them.
My conversation with Dr. Alan Pope is now available from the Podcast link at the top of this page. Alan’s seminal work on the biocybernetic loop was a key inspiration for developing the concept of physiological computing. He was probably the first person to take measures from the brain and body and use them in real time to allow the operator to communicate implicitly with technology. Our conversation takes in the whole of his career, from early work with evoked cortical potentials in clinical psychology to his move to NASA Langley and his work in human factors and aviation psychology.
I first got the idea to do a podcast back in the early part of the year. Like many other academics, I enjoy the informal conversations that often happen over coffee and in the bar during a conference or meeting – and I wanted to capture those sorts of exchanges whilst giving people a chance to talk about their work. So, I hit upon an interview-style podcast where I’d chat to other people from the worlds of physiological computing, human-computer interaction, human factors psychology and related fields. My plan is to record most of these conversations “on the road”, so I generally pack the microphone on my travels and hopefully I can grab enough people to put out one per month. The first one is a conversation between myself and Thorsten Zander and you can find it at the link at the top of this page.
I originally coined the term ‘physiological computing’ to describe a whole class of emerging technologies constructed around closed-loop control. These technologies collect implicit measures from the brain and body of the user, which inform a process of intelligent adaptation at the user interface.
If you survey research in this field, from mental workload monitoring to applications in affective computing, there’s an overwhelming bias towards the first part of the closed loop – the business of designing sensors, collecting data and classifying psychological states. In contrast, you see very little on what happens at the interface once target states have been detected. The dearth of work on intelligent adaptation is a problem because signal processing protocols and machine learning algorithms are being developed in a vacuum – without any context for usage. This disconnect both neglects and negates the holistic nature of closed-loop control and the direct link between classification and adaptation. We can even generate a maxim to describe the relationship between the two:
the number of states recognised by a physiological computing system should be the minimum required to support the range of adaptive options that can be delivered at the interface
This maxim minimises the number of states to enhance classification accuracy, while making an explicit link between the act of measurement at the first part of the loop and the process of adaptation that forms the last link in the chain.
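The maxim can be expressed as a minimal sketch in code. Everything here is hypothetical and illustrative – the state names, adaptations and structure are not taken from any published system – but it shows the basic idea: the set of states the classifier must recognise is derived directly from the adaptive options the interface can deliver, and nothing more.

```python
# Hypothetical sketch of the maxim: classify only those states that
# map onto an adaptive response the interface can actually deliver.
# All names below are illustrative, not from any published system.

# Each adaptive option is triggered by exactly one target state.
ADAPTIVE_OPTIONS = {
    "high_workload": "simplify_display",      # strip secondary information
    "low_workload": "increase_task_demand",   # e.g. introduce a subtask
}

# The maxim in code: the state set is derived from the adaptation set.
TARGET_STATES = set(ADAPTIVE_OPTIONS)

def adapt(classified_state):
    """Return the interface adaptation for a detected state, if any."""
    return ADAPTIVE_OPTIONS.get(classified_state)
```

Adding a third state to the classifier would only be justified, on this view, by adding a third entry to the adaptation table.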
If this kind of stuff sounds abstract or of limited relevance to the research community, it shouldn’t. If we look at research into the classic ‘active’ BCI paradigm, there is clear continuity between state classification and corresponding actions at the interface. This continuity owes its prominence to the fact that the BCI research community is dedicated to enhancing the lives of end users, and the utility of the system lies at the core of their research process. But to be fair, the link between brain activation and input control is direct and easy to conceptualise in the ‘active’ BCI paradigm. For those systems that work on an implicit basis, detection of the target state is merely the jumping-off point for a complicated process of user interface design.
Accuracy is fundamental to the process of scientific measurement; we expect our gizmos and sensors to deliver data that is both robust and precise. If accurate data are available, reliable inferences can be made about whatever you happen to be measuring, and these inferences inform understanding and prediction of future events. But the absence of accuracy is disastrous: if we cannot trust the data, then the rug is pulled out from under the scientific method.
Having worked as a psychophysiologist for longer than I care to remember, I’m acutely aware of this particular house of cards. Even if your ECG or SCL sensor is working perfectly, there are always artefacts that can affect data in a profound way: this participant had a double espresso before they came to the lab, another persistently scratches their nose. Psychophysiologists have to pay attention to data quality because the act of psychophysiological inference is far from straightforward*. In a laboratory where conditions are carefully controlled, these unwelcome interventions from the real world are handled by a two-part strategy – first of all, participants are asked to sit still and refrain from excessive caffeine consumption etc., and if that doesn’t work, we can remove the artefacts from the data record by employing various forms of post-hoc analysis.
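The second half of that strategy – post-hoc artefact removal – can take many forms, and real protocols are signal- and lab-specific. Purely as an illustration, here is a minimal sketch of one simple approach: flag samples that jump implausibly far from the last accepted value (e.g. the sudden spike when a participant scratches an electrode site) and replace them by holding the last good value. The threshold and the sample-and-hold strategy are assumptions for the sake of the example, not a recommended protocol.

```python
def reject_artefacts(signal, max_step=0.5):
    """Illustrative artefact rejection for a slowly varying signal
    (e.g. skin conductance level). A sample that jumps more than
    `max_step` from the last accepted value is flagged as an artefact
    and replaced by sample-and-hold. Threshold is illustrative only."""
    clean, bad = [], []
    last_good = signal[0]
    for x in signal:
        if abs(x - last_good) > max_step:
            bad.append(True)
            clean.append(last_good)   # hold the last accepted value
        else:
            bad.append(False)
            last_good = x
            clean.append(x)
    return clean, bad
```

For example, a movement spike of 9.0 in an otherwise smooth SCL trace around 1.0–1.3 would be flagged and bridged, leaving the surrounding samples untouched.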
Working with physiological measures under real-world conditions, where people can drink coffee and dance around the table if they wish, presents a significant challenge for all the reasons just mentioned. So, why would anyone even want to do it? For the applied researcher, it’s a risk worth taking in order to get a genuine snapshot of human behaviour away from the artificialities of the laboratory. For people like myself, who are interested in physiological computing and using these data as inputs to technological systems, the challenge of accurate data capture in the real world is a fundamental issue. People don’t use technology in a laboratory; they use it out there in offices and cars and cafes and trains – and if we can’t get physiological computing systems to work ‘out there’, then one must question whether this form of technology is really feasible.
Back in 2003, Lawrence Hettinger and colleagues penned this paper on the topic of neuroadaptive interface technology. This concept described a closed-loop system where fluctuations in cognitive activity or emotional state inform the functional characteristics of an interface. The core concept sits comfortably with a host of closed-loop technologies in the domain of physiological computing.
One great insight from this 2003 paper was to describe how neuroadaptive interfaces could enhance communication between person and system. They argued that human-computer interaction existed in an asymmetrical form. The person can access a huge amount of information about the computer system (available RAM, number of active operations) but the system is fundamentally ‘blind’ to the intentions of the user or their level of mental workload, frustration or fatigue. Neuroadaptive interfaces would enable symmetrical forms of human-computer interaction where technology can respond to implicit changes in the human nervous system and, most significantly, interpret those covert sources of data in order to inform responses at the interface.
Allowing humans to communicate implicitly with machines in this way could enormously increase the efficiency of human-computer interaction with respect to ‘bits per second’. The keyboard, mouse and touchscreen remain the dominant modes of input control by which we translate thoughts into action in the digital realm. We communicate with computers via volitional acts of explicit perceptual-motor control – the same asymmetrical/explicit model of HCI holds true for naturalistic modes of input control, such as speech and gestures. The concept of a symmetrical HCI based on implicit signals that are generated spontaneously and automatically by the user represents a significant shift from conventional modes of input control.
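The shape of such a symmetrical loop can be sketched in a few lines. This is purely an illustration of the architecture described above – the signal source, the classifier and the adaptation mapping are all stand-ins, not a real sensor API or a published classification scheme.

```python
# Illustrative sketch of a symmetrical, neuroadaptive loop: the system
# continuously samples an implicit physiological signal, infers a user
# state, and adapts the interface without any explicit input from the
# user. Every component here is a hypothetical stand-in.
import random

def read_implicit_signal():
    """Stand-in for a real-time measure, e.g. an EEG workload index."""
    return random.random()

def classify(sample, threshold=0.7):
    """Toy classifier: a single threshold on the implicit signal."""
    return "high_workload" if sample > threshold else "normal"

def adapt_interface(state):
    """Map the inferred state to a response at the interface."""
    return {"high_workload": "suppress notifications",
            "normal": "no change"}[state]

# One pass around the closed loop:
action = adapt_interface(classify(read_implicit_signal()))
```

The point of the sketch is the direction of flow: the user generates the input spontaneously and automatically, and the system closes the loop by interpreting it – no keyboard, mouse or speech act required.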
This recent paper published in PNAS by Thorsten Zander and colleagues provides a demonstration of a symmetrical, neuroadaptive interface in action.
The School of Natural Sciences and Psychology, in partnership with the Department of Computer Science and the General Engineering Research Institute, is working on adaptive technologies in the area of physiological computing. This studentship is co-funded by Emteq Ltd: emteq.net

Applications are invited for a three-year full studentship in this field of research. The studentship includes tuition fees (currently £4,100 per annum) plus a tax-free maintenance stipend (currently £14,296 per annum). Applicants must be UK/EU nationals.

The programme of research is concerned with automatic recognition of emotional states based on measurements of facial electromyography (fEMG) and autonomic activity. The ability of these measures to successfully differentiate positive and negative emotional states will be explored by developing mood-induction protocols in virtual reality (VR). The successful applicant will conduct research into the development of adaptive/affective VR scenarios designed to maximise the effectiveness of mood induction.
For full details, click this link
Closing Date for applications: Friday 3rd March 2017
The first Neuroadaptive Technology Conference will take place in Berlin on the 19th-21st July 2017. Authors are invited to submit abstracts by the 13th March 2017 at the conference website, where full details will appear.
The October 2015 edition of IEEE Computer magazine is devoted to the topic of Physiological Computing. Giulio Jacucci, myself and Erin Solovey acted as co-editors and the introduction for the magazine is available here.
The papers included in the special issue cover a range of topics, including measurement of stress in VR, combining pupillometry with EEG to detect changes in operator workload, and using mobile neuroimaging to create attention-aware technologies.
There is also a podcast associated with the SI featuring the guest editors in conversation with Robert Jacob from Tufts University on current topics and future directions in Physiological Computing – you can hear it here.
I am one of the co-editors of a special issue of Interacting With Computers, which is now available online here. The title of the special issue is Physiological Computing for Intelligent Adaptation; it contains five full research papers covering a range of topics, such as the use of VR for stress reduction, mental workload monitoring and a comparison of EEG headsets.