[iframe width="400" height="300" src="http://player.vimeo.com/video/32915393"]
Last month I gave a presentation at the Annual Meeting of the Human Factors and Ergonomics Society held at Leeds University in the UK. I stood on the podium and presented the work, but really the people who deserve most of the credit are Marjolein van der Zwaag (from Philips Research Laboratories) and my own PhD student at LJMU, Elena Spiridon.
You can watch a podcast of the talk above. The work was originally conducted as part of the REFLECT project at the end of 2010 and was inspired by earlier research on affective computing in which the system makes an adaptation to alleviate a negative mood state. The rationale here is that any such adaptation will have beneficial effects – reducing the duration/intensity of negative mood and, in doing so, mitigating any undesirable effects on the person's behaviour or health.
Our study was concerned with the level of anger a person might experience on the road. We know that anger causes ‘load’ on the cardiovascular system as well as undesirable behaviours associated with aggressive driving. In our study, we subjected participants to a simulated driving task that was designed to make them angry – this is a protocol that we have developed at LJMU. Marjolein was interested in the effects of different types of music on the cardiovascular system while the person is experiencing a negative mood state; for our study, she created four categories of music that varied in terms of high/low activation and positive/negative valence.
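The four music categories can be thought of as the quadrants of a two-dimensional affect space (activation × valence). The following sketch illustrates that mapping; the function name, the normalised [-1, 1] scales and the genre examples are my own illustrative assumptions, not details taken from the study.

```python
def music_category(activation: float, valence: float) -> str:
    """Map a point in affect space to one of four music categories.

    Both inputs are assumed to be normalised to [-1, 1], with 0 as the
    neutral midpoint (a hypothetical convention for this sketch).
    """
    if activation >= 0 and valence >= 0:
        return "high-activation / positive-valence"  # e.g. upbeat, joyful
    if activation >= 0 and valence < 0:
        return "high-activation / negative-valence"  # e.g. agitated, tense
    if activation < 0 and valence >= 0:
        return "low-activation / positive-valence"   # e.g. calm, serene
    return "low-activation / negative-valence"       # e.g. sombre, sad

# A calm but pleasant piece sits in the low-activation/positive quadrant.
print(music_category(-0.4, 0.7))  # low-activation / positive-valence
```

The interesting empirical question is which of these quadrants does the most to reduce cardiovascular load when the listener is already angry, which is exactly what the study probed.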
The study does not represent an investigation into a physiological computing system per se, but is rather a validation study to explore whether an adaptation, such as selecting a certain type of music when a person is angry, can have beneficial effects. We’re working on a journal paper version at the moment.
[iframe width="400" height="300" src="http://player.vimeo.com/video/25081038"]
Some months ago, I wrote this post about the REFLECT project, in which we participated for the last three years. In short, the REFLECT project was concerned with the research and development of three different kinds of biocybernetic loops: (1) detection of emotion, (2) diagnosis of mental workload, and (3) assessment of physical comfort. Psychophysiological measures were used to assess (1) and (2), whilst physical movement (fidgeting) in a seated position was used for the latter. All of this was integrated into the ‘cockpit’ of a Ferrari.
The idea behind the emotional loop was to have the music change in response to emotion (to alleviate negative mood states). The cognitive loop would block incoming calls if the driver was in a state of high mental workload, and air-filled bladders in the seat would adjust to promote physical comfort. You can read all about the project here. Above you’ll find a promotional video that I’ve only just discovered – the reason for my delay in posting it is probably vanity: the filming was over before I got to the Ferrari site in Maranello. The upside of my absence is that you can watch the much more articulate and handsome Dick de Waard explain the cognitive loop in the film, which was our main involvement in the project.
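The decision rule at the heart of the cognitive loop can be sketched in a few lines. To be clear, the threshold value, the 0–1 workload index and the function names below are hypothetical placeholders of mine; REFLECT's actual workload classifier was far richer than a single number.

```python
# Illustrative sketch of the cognitive loop's call-gating behaviour:
# defer incoming calls while the driver's estimated mental workload is high.
WORKLOAD_THRESHOLD = 0.7  # assumed cut-off on a normalised 0-1 workload index

def handle_incoming_call(workload: float) -> str:
    """Decide what to do with an incoming call given the current
    psychophysiological workload estimate (0 = low, 1 = high)."""
    if workload > WORKLOAD_THRESHOLD:
        return "deferred"   # hold the call until workload drops
    return "connected"      # safe to put the call through

print(handle_incoming_call(0.9))  # deferred
print(handle_incoming_call(0.3))  # connected
```

The point of the sketch is simply that the adaptation is implicit: the driver never asks for calls to be blocked, the system infers when to do it from physiological data.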
The deadline for submissions to this special session has been extended to May 20th.
Anton Nijholt from the University of Twente and Rob Jacob from Tufts University are organizing a special session at ICMI 2011 on “BCI and Multimodality”. All ICMI sessions, including the special sessions, are plenary. Hence, having a special session during the ICMI conference means that there is the opportunity to address a broad audience and make them aware of new developments and special topics. Clearly, if we look at BCI for non-medical applications, a multimodal approach is natural. We can make use of knowledge about the user, task, and context. Part of this information is available in advance; part becomes available online in addition to EEG- or fNIRS-measured brain activity. The intended user is not disabled: he or she can use other modalities to pass commands and preferences to the system, and the system may also have information obtained from monitoring the mental state of the user. Moreover, different BCI paradigms may be employed in parallel or sequentially in multimodal (or hybrid) BCI applications.
Workshop at ACII 2011
The second workshop on affective brain-computer interfaces will explore the advantages and limitations of using neuro-physiological signals as a modality for the automatic recognition of affective and cognitive states, and the possibilities of using this information about the user state in innovative and adaptive applications. The goal is to bring researchers from the communities of brain computer interfacing, affective computing, neuro-ergonomics, affective and cognitive neuroscience together to present state-of-the-art progress and visions on the various overlaps between those disciplines.
It has been said that every cloud has a silver lining, and the only positive of chronic jet lag (Kiel and I arrived in Vancouver yesterday for the CHI workshop) is that it gives you a chance to catch up on overdue tasks. This is a post I’d been meaning to write for several weeks about my involvement in the REFLECT project.
For the last three years, our group at LJMU has been working on a collaborative project called REFLECT, funded by the EU Commission under the Future and Emerging Technology Initiative. The project was centred on the concept of “reflective software” that responds implicitly and in real time to changes in user needs. A variety of physiological sensors are applied to the user in order to inform this kind of reflective adaptation. So far, this is regular fare for anyone who’s read this blog before, being a standard set-up for a biocybernetic adaptation system.
First of all, an apology – Kiel and I try to keep this blog ticking over, but for most of 2011, we’ve been preoccupied with a couple of large projects and getting things organised for the CHI workshop in May. One of the “things” that led to this hiatus on the blog is a new research project funded by the EU called ARtSENSE, which is the topic of this post.
From the point of view of an outsider, the utility and value of computer technology that provides emotional feedback to the human operator is questionable. The basic argument normally goes like this: even if the technology works, do I really need a machine to tell me that I’m happy or angry or calm or anxious or excited? First of all, the feedback provided by this machine would be redundant; I already have a mind/body that keeps me fully appraised of my emotional status – thank you. Secondly, if I’m angry or frustrated, do you really think I would be helped in any way by a machine that drew my attention to these negative emotions? In fact, that would be particularly annoying. Finally, sometimes I’m not quite sure how I’m feeling or how I feel about something; feedback from a machine that says you’re happy or angry would just muddy the waters and add further confusion.
The UK version of Wired magazine ran an article in last month’s edition (no online version available) about Emotiv and the development of the EPOC headset. Much of the article focused on the human side of the story, the writer mixed biographical details of company founders with how the ideas driving the development of the headset came together. I’ve written about Emotiv before here on a specific technical issue. I still haven’t had any direct experience of the system, but I’d like to write about the EPOC again because it’s emerging as the headset of choice for early adopters.
In this article, I’d like to discuss a number of dilemmas faced by both the company and their customers. These issues aren’t specific to Emotiv; they hold for other companies in the process of selling/developing hardware for physiological computing systems.
This is a short post to inform regular readers that I’ve made some changes to the FAQ document for the site (link to the left). Normally people alter the FAQ because the types of popular questions have changed. In our case, it is my answers to those questions that have changed in the time since I wrote my original responses – hence the need to revise the FAQ.
The original document firmly identified physiological computing with affective computing/biocybernetic adaptation. There was even a question making a firm division between BCI technology and physiological computing. In the revised FAQ, I’ve dumped this distinction and attempted to view BCI as part of a broad continuum of computing devices that rely on real-time physiological data for input. This change has not been made to arrogantly subsume BCI within the physiological computing spectrum, but to reconcile perspectives from different research communities working on common measures and technologies across different application domains. In my opinion, the distinctions between research topics and application domains (including my own) are largely artificial, and the advancement of this technology is best served by keeping an open mind about mash-ups and hybrid systems.
I’ve also expanded the list of indicative references to include contributions from BCI, telemedicine and adaptive automation in order to highlight the breadth of applications that are united by physiological data input.
The FAQ is written to support the naive reader, who may have stumbled across our site, but as ever, I welcome any comments or additional questions from domain experts.