The act of paying attention or sustaining concentration is a good example of everyday cognition. We all know the difference between an attentive state of being, when we are utterly focused and seem to absorb every ‘bit’ of information, and the diffuse experience of mind-wandering, where consciousness flits from one random topic to the next. Understanding this distinction is easy, but regulating the focus of attention can be a real challenge, especially if you didn’t get enough sleep or you’re not particularly interested in the task at hand. Ironically, if you are totally immersed in a task, attention is so fully absorbed that you don’t notice your clarity of focus. At the other extreme, if you begin to day-dream, you are very unlikely to register any awareness of your inattentive state.
The capacity to self-regulate attentional focus is an important skill for many people, from the executives who sit in long meetings where important decisions are made to air traffic controllers, pilots, truck drivers and other professionals for whom the ability to concentrate has real consequences for the safety of themselves and others.
Technology can play a role in developing the capacity to regulate attentional focus. The original biocybernetic loop developed at NASA was an example of how to incorporate a neurofeedback mechanism into the cockpit in order to ensure a level of awareness that was conducive to safe performance. There are two components within this type of system: real-time analysis of brain activity as a proxy for attention, and translation of these data into ‘live’ feedback to the user. The availability of explicit, real-time feedback on attentional state acts as an error signal to indicate a loss of concentration.
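Those two components can be sketched as a very small loop in code. This is a minimal illustration only: the engagement index below is the familiar beta/(alpha + theta) band-power ratio from the NASA neurofeedback work, but the function names, the fixed threshold and the simulated band-power samples are my own assumptions, not any system's actual implementation.

```python
def engagement_index(theta, alpha, beta):
    """Proxy for attention: the beta / (alpha + theta) band-power ratio."""
    return beta / (alpha + theta)

def feedback(index, threshold=0.5):
    """Translate the index into a simple 'live' error signal for the user."""
    return "on-task" if index >= threshold else "attention drifting"

def run_loop(samples, threshold=0.5):
    """One pass of the loop per (theta, alpha, beta) sample from the EEG stream."""
    return [feedback(engagement_index(t, a, b), threshold) for t, a, b in samples]

# Simulated band-power samples standing in for real-time EEG analysis
samples = [(4.0, 6.0, 7.0), (5.0, 7.0, 2.0)]
print(run_loop(samples))  # ['on-task', 'attention drifting']
```

In a real system the samples would arrive continuously from spectral analysis of the EEG, and the feedback would drive a display or alarm rather than a string.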
This article will tell a tale of two cultures: an academic paper that updates biocybernetic control of attention via real-time fMRI, and a Kickstarter project where the loop is encapsulated within a wearable device.
The phrase “smart technology” has been around for a long time. We have smart phones and smart televisions with functional capability that is massively enhanced by internet connectivity. We also talk about smart homes that scale up into smart cities. This hybrid between technology and the built environment promotes connectivity but with an additional twist – smart spaces monitor activity within their confines for the purposes of intelligent adaptation: to switch off lighting and heating if a space is uninhabited, to direct music from room to room as the inhabitant wanders through the house.
If smart technology is equated with enhanced connectivity and functionality, do those things translate into an increase in machine intelligence? In his 2007 book ‘The Design of Future Things’, Donald Norman defined the ‘smartness’ of technology with respect to the way in which it interacted with the human user. Inspired by J.C.R. Licklider’s (1960) definition of man-computer symbiosis, he claimed that smart technology was characterised by a harmonious partnership between person and machine. Hence, the ‘smartness’ of technology is defined by the way in which it responds to the user and vice versa.
One prerequisite for a relationship between person and machine that is cooperative and compatible is to enhance the capacity of technology to monitor user behaviour. Like any good butler, the machine needs to increase its awareness and understanding of user behaviour and user needs. The knowledge gained via this process can subsequently be deployed to create intelligent forms of software adaptation, i.e. machine-initiated responses that are both timely and intuitive from a human perspective. This upgraded form of human-computer interaction is attractive to technology providers and their customers, but is it realistic and achievable, and what practical obstacles must be overcome?
When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was – don’t we already have those? Didn’t we use to call them people? But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym ‘B2B.’
For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies. Input is represented by volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI. This signal is converted into a control signal for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver.’
TMS works by discharging an electrical current in brief pulses via a stimulating coil. These pulses create a magnetic field that induces an electrical current in the surface of the cortex that is sufficiently strong to induce neuronal depolarisation. Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour. You can find an introduction to TMS here (it’s an old pdf but freely available).
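The sender-to-receiver pipeline is simple in outline, and can be sketched as follows. Everything here is a hypothetical illustration: the threshold decoder, the function names and the simulated power values are assumptions of mine, not the actual signal processing used in any published B2B study.

```python
def classify_intent(eeg_power, threshold=1.0):
    """Crude 'active BCI' decoder: treat band power above a fixed
    threshold as a volitional command from the sender."""
    return eeg_power > threshold

def b2b_link(sender_samples, fire_tms):
    """Forward each decoded command to the receiver's TMS trigger."""
    for power in sender_samples:
        if classify_intent(power):
            fire_tms()

# Simulated sender EEG power values; the TMS 'coil' just logs pulses
pulses = []
b2b_link([0.4, 1.6, 2.1], fire_tms=lambda: pulses.append("pulse"))
print(pulses)  # ['pulse', 'pulse']
```

In practice the decoder would be a trained classifier rather than a fixed threshold, and the trigger would drive the robotised TMS coil positioned over the receiver's cortex.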
A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.
I’ve written a couple of posts about the Emotiv EPOC over the years of writing this blog, from user interface issues in this post to the uncertainties surrounding the device for customers and researchers here.
The good news is that research is starting to emerge in which the EPOC has been systematically compared to other devices, so perhaps some of those uncertainties can be resolved. The first study, by Ekandem et al., was published in the journal Ergonomics in 2012. You can read an abstract here (apologies to those without a university account who can’t get behind the paywall). These authors performed an ergonomic evaluation of both the EPOC and the NeuroSky MindWave. Data were obtained from 11 participants, each of whom wore either a NeuroSky or an EPOC for 15 minutes on different days. They concluded that there was no clear ‘winner’ from the comparison. The EPOC has 14 sites compared to the single site used by the MindWave, hence it took longer to set up and required more cleaning afterwards (and more consumables). No big surprises there. It follows that signal acquisition was easier with the MindWave, but the authors report that once the EPOC was connected and calibrated, its signal quality was more consistent than that of the MindWave, despite sensor placement for the former being obstructed by hair.
First of all, apologies for our blog “sabbatical” – the important thing is that we are now back with news of our latest research collaboration involving FACT (Foundation for Art and Creative Technology) and international artists’ collective Manifest.AR.
To quickly recap, our colleagues at FACT were keen to create a new commission tapping into the use of augmented reality technology and incorporating elements of our own work on physiological computing. Our last post (almost a year ago now, to our shame) described the time we spent with Manifest.AR last summer and our show-and-tell event at FACT. Fast-forward to the present and the Manifest.AR piece, called Invisible ARtaffects, opened last Thursday as part of the Turning FACT Inside Out show.
With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct types of research community at work. The first (and older) community comprises university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks to embellish interactive experience, with a strong emphasis on commercialisation.
There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.
Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed the trip and the workshop. As I’d prepared the presentation, I made a podcast for sharing with the workshop attendees. I dug it out of the vaults for this post because gaming and physiological computing is such an interesting topic.
The work is dated now, but basically I’m drawing a distinction between my understanding of BCI and biocybernetic adaptation. The former is an alternative means of input control within the HCI; the latter can be used to adapt the nature of the HCI. I also argue that BCI is ideally suited to certain types of game mechanics precisely because it will not work 100% of the time. I used the TV series “Heroes” to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost all enthusiasm for that show after series 1.
The original CHI paper for this presentation is available here.
[iframe width="400" height="300" src="http://player.vimeo.com/video/32983880"]
[iframe width="400" height="300" src="http://player.vimeo.com/video/32915393"]
Last month I gave a presentation at the Annual Meeting of the Human Factors and Ergonomics Society held at Leeds University in the UK. I stood on the podium and presented the work, but really the people who deserve most of the credit are Marjolein van der Zwaag (from Philips Research Laboratories) and my own PhD student at LJMU Elena Spiridon.
You can watch a podcast of the talk above. This work was originally conducted as part of the REFLECT project at the end of 2010, and was inspired by earlier research on affective computing where the system makes an adaptation to alleviate a negative mood state. The rationale here is that any such adaptation will have beneficial effects in terms of reducing the duration and intensity of negative mood and, in doing so, will mitigate any undesirable effects on behaviour or the health of the person.
Our study was concerned with the level of anger a person might experience on the road. We know that anger places ‘load’ on the cardiovascular system and is associated with the undesirable behaviours of aggressive driving. In our study, we subjected participants to a simulated driving task that was designed to make them angry – this is a protocol that we have developed at LJMU. Marjolein was interested in the effects of different types of music on the cardiovascular system while the person is experiencing a negative mood state; for our study, she created four categories of music that varied in terms of high/low activation and positive/negative valence.
The study does not represent an investigation into a physiological computing system per se, but is rather a validation study to explore whether an adaptation, such as selecting a certain type of music when a person is angry, can have beneficial effects. We’re working on a journal paper version at the moment.
[iframe width="400" height="300" src="http://player.vimeo.com/video/25081038"]
Some months ago, I wrote this post about the REFLECT project that we participated in for the last three years. In short, the REFLECT project was concerned with the research and development of three different kinds of biocybernetic loop: (1) detection of emotion, (2) diagnosis of mental workload, and (3) assessment of physical comfort. Psychophysiological measures were used to assess (1) and (2), whilst physical movement (fidgeting) in a seated position was used for (3). All of this was integrated into the ‘cockpit’ of a Ferrari.
The idea behind the emotional loop was to have the music change in response to emotion (to alleviate negative mood states). The cognitive loop would block incoming calls if the driver was in a state of high mental workload, and air-filled bladders in the seat would adjust to promote physical comfort. You can read all about the project here. Above you’ll find a promotional video that I’ve only just discovered – the reason for my delay in posting it is probably vanity, as the filming was over before I got to the Ferrari site in Maranello. The upside of my absence is that you can watch the much more articulate and handsome Dick de Waard explain the cognitive loop in the film, which was our main involvement in the project.
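The adaptation rules for the emotional and cognitive loops boil down to simple if-then logic of the kind sketched below. To be clear, this is my own minimal illustration: the function names, the idea of normalised 0–1 estimates and the thresholds are assumptions for the sketch, not the project's actual implementation.

```python
def cognitive_loop(workload, threshold=0.7):
    """Defer incoming calls while estimated mental workload is high.
    `workload` is a hypothetical normalised (0-1) estimate derived
    from psychophysiological measures; the threshold is illustrative."""
    return "block calls" if workload >= threshold else "allow calls"

def emotional_loop(valence):
    """Choose a music adaptation intended to alleviate negative mood.
    `valence` is a hypothetical signed estimate of emotional state."""
    return "play calming music" if valence < 0 else "leave music unchanged"

print(cognitive_loop(0.85))  # block calls
print(emotional_loop(-0.4))  # play calming music
```

The interesting engineering lies not in rules like these but in producing workload and emotion estimates reliable enough to drive them in real time.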
This post represents some thoughts on the use of psychophysiology to evaluate the player experience during a computer game. As such, it’s tangential to the main business of this blog, but it’s a topic that I think is worth some discussion and debate, as it raises a whole bunch of pertinent issues for the design of physiological computer games.
Psychophysiological methods are combined with computer games in two types of context: applied psychology research and game evaluation in a commercial setting. With respect to the former, a researcher may use a computer game as a platform to study a psychological concept, such as the effects of game play on aggression or how playing against a friend or a stranger influences the experience of the player (see this recent issue of Entertainment Computing for examples). In both cases, the game merely represents a task or virtual environment within which to study human behaviour using an experimental psychology methodology. This approach is characterised by several features: (1) comparisons are made between carefully controlled conditions; (2) statistical power is important (if you want to see your work published), so large numbers of participants are run through the design; (3) selection of participants is carefully controlled (equal numbers of males and females, comparable age ranges if groups are compared); and (4) designs are counterbalanced, i.e. if participants play 2 different games, half of them play game 1 then game 2 whilst the other half play game 2 and then game 1; this is important because the order in which games are presented often influences the response of the participants.
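The counterbalancing in point (4) is easy to generate mechanically. The sketch below assigns every possible presentation order equally often across participants; the function name and the equal-allocation scheme are my own illustrative choices rather than a standard library routine.

```python
from itertools import permutations

def counterbalanced_orders(games, n_participants):
    """Cycle through all presentation orders so that, when the number of
    participants divides evenly, each order is used equally often."""
    orders = list(permutations(games))
    return [orders[i % len(orders)] for i in range(n_participants)]

schedule = counterbalanced_orders(["game 1", "game 2"], 4)
print(schedule)
# [('game 1', 'game 2'), ('game 2', 'game 1'),
#  ('game 1', 'game 2'), ('game 2', 'game 1')]
```

With two games there are only two orders, so participants simply alternate; with three or more games the full set of permutations grows quickly, which is why larger designs often fall back on Latin squares rather than exhaustive counterbalancing.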