There have been a lot of tweets and blog posts devoted to an article written recently by Don Norman for the MIT Technology Review on wearable computing. The original article is here, but in summary, Norman points to an underlying paradox surrounding Google Glass and similar devices. These technological artifacts are designed to enhance human abilities (allowing us to email on the move, navigate and so on); however, because of inherent limitations of the human information-processing system, they also have significant potential to degrade human performance. Think about browsing Amazon on your glasses whilst crossing a busy street and you get the idea.
The paragraph in Norman’s article that caught my attention, and is most relevant to this blog, is this one:
“Eventually we will be able to eavesdrop on both our own internal states and those of others. Tiny sensors and clever software will infer their emotional and mental states and our own. Worse, the inferences will often be wrong: a person’s pulse rate just went up, or their skin conductance just changed; there are many factors that could cause such things to happen, but technologists are apt to focus upon a simple, single interpretation.”
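Norman’s worry about “a simple, single interpretation” is easy to make concrete. The toy sketch below (all function names and thresholds are hypothetical, invented purely for illustration; this is not any real system’s logic) contrasts a naive one-signal, one-label mapping with a version that admits how ambiguous a raised pulse actually is:

```python
# A toy illustration of the single-interpretation trap Norman describes.
# All names and thresholds are hypothetical, chosen only for illustration.

def naive_inference(pulse_bpm, baseline_bpm=70):
    """The kind of one-signal -> one-label mapping Norman warns about."""
    return "stressed" if pulse_bpm > baseline_bpm + 15 else "calm"

def honest_inference(pulse_bpm, baseline_bpm=70):
    """Same signal, but acknowledging the many plausible causes."""
    if pulse_bpm <= baseline_bpm + 15:
        return ["resting"]
    # An elevated pulse is consistent with many states, not just one:
    return ["stress", "physical exertion", "caffeine", "excitement"]

print(naive_inference(95))   # a single, possibly wrong, label
print(honest_inference(95))  # the same ambiguity made explicit
```

The point of the sketch is that the second function is no cleverer than the first; it simply refuses to collapse an ambiguous signal into one confident label, which is exactly the discipline Norman suggests technologists tend to skip.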
I’ve written a couple of posts about the Emotiv EPOC over the years of doing the blog, covering user interface issues in this post and the uncertainties the device presents for customers and researchers here.
The good news is that research is starting to emerge in which the EPOC has been systematically compared to other devices, so perhaps some of those uncertainties can be resolved. The first study, by Ekandem et al., was published in the journal Ergonomics in 2012. You can read an abstract here (apologies to those without a university account who can’t get behind the paywall). The authors performed an ergonomic evaluation of both the EPOC and the NeuroSky MindWave. Data were obtained from 11 participants, each of whom wore either a NeuroSky or an EPOC for 15 minutes on different days. They concluded that there was no clear ‘winner’ from the comparison. The EPOC has 14 electrode sites compared to the single site used by the MindWave; hence it took longer to set up and required more cleaning afterwards (and more consumables). No big surprises there. It follows that signal acquisition was easier with the MindWave, but the authors report that once the EPOC was connected and calibrated, its signal quality was more consistent than the MindWave’s, despite sensor placement for the former being obstructed by hair.
If there are two truisms in the area of physiological computing, they are: (1) people will always produce physiological data and (2) these data are continuously available. The passive nature of physiological monitoring and the relatively high fidelity of the data that can be obtained are among the reasons why we’re seeing physiology and psychophysiology emerge as candidates for Big Data collection and analysis (see my last post on the same theme). It is easy to see the appeal of physiological data in this context. To borrow a quote from Jaron Lanier’s new book, “information is people in disguise” – and we all have the possibility of gaining insight from the data we generate as we move through the world.
If I collect physiological data about myself, as Kiel did during the bodyblogger project, it is clear that I own that data. After all, the original ECG was generated by me and I went to the trouble of populating a database for personal use, so I don’t just own the data, I own a particular representation of the data. But if I granted a large company or government access to my data stream, who would own the data?
I attended a short conference event organised by the CEEDs project earlier this month entitled “Making Sense of Big Data.” CEEDS is an EU-funded project under the Future and Emerging Technology (FET) Initiative. The project is concerned with the development of novel technologies to support human experience. The event took place at the Google Campus in London and included a range of speakers talking about the use of data to capture human experience and behaviour. You can find a link about the event here that contains full details and films of all the talks including a panel discussion. My own talk was a general introduction to physiological computing and a statement of our latest project work.
It was a thought-provoking day because it was an opportunity to view the area of physiological computing from a different perspective. The main theme was that we are entering the age of ‘big data’, in the sense that passive monitoring of people via mobile technology grants access to a wide array of data concerning human behaviour. This is hugely relevant to physiological monitoring systems, which tend towards high-resolution data capture and may represent the richest vein of big data for indexing the human experience.
If there is one problem for academics working in the area of physiological computing, it is finding the right place to publish. By the right place, I mean a forum that is receptive to multidisciplinary research and where you feel confident of reaching the right audience. Having done a lot of reviewing of physiological computing papers, I see work that is often strong on measures and methodology but weak on applications; alternatively, papers focus on interaction mechanics but are poor on the measurement side. The main problem lies with the expertise of the reviewers, who tend to be either psychologists or computer scientists, and it can be difficult for authors to strike the right balance between the two.
For this reason, I’m writing to make people aware of the First International Conference on Physiological Computing, to be held in Lisbon next January. The deadline for papers is 30th July 2013. A selection of papers will be published by Springer-Verlag as part of their Lecture Notes in Computer Science series. The journal Multimedia Tools & Applications (also published by Springer) will also select papers presented at the conference to form a special issue. In addition, a special issue of the journal Transactions on Computer-Human Interaction (TOCHI) on physiological computing is currently open for submissions; the CFP is here and the deadline is 20th December 2013.
I should also plug a new journal from Inderscience, the International Journal of Cognitive Performance Support, which has just published its first issue and would welcome contributions on brain-computer interfaces and biofeedback mechanics.
First of all, apologies for our blog “sabbatical” – the important thing is that we are now back with news of our latest research collaboration involving FACT (Foundation for Art and Creative Technology) and international artists’ collective Manifest.AR.
To quickly recap: our colleagues at FACT were keen to create a new commission tapping into augmented reality technology and incorporating elements of our own work on physiological computing. Our last post (almost a year ago now, to our shame) described the time we spent with Manifest.AR last summer and our show-and-tell event at FACT. Fast-forward to the present, and the Manifest.AR piece, called Invisible ARtaffects, opened last Thursday as part of the Turning FACT Inside Out show.
I am one of the organisers of a workshop at ICMI 2012 entitled “BCI Grand Challenges.” The deadline for submissions was originally this Friday (15th) but has now been extended to 30th June. Full details are below.
With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct research communities at work. The first (and older) community consists of university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky. Some members of this community are academics, others are developers, and I suspect many are dedicated gamers. They are looking to build applications and hacks to embellish the interactive experience, with a strong emphasis on commercialisation.
There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.
Way back in February, Kiel and I did an event called Body Lab in conjunction with our LJMU colleagues at OpenLabs. The idea for this event originated in a series of conversations between ourselves and OpenLabs about our mutual interest in digital health. The brief of OpenLabs is to “support local creative technology companies to develop new products and services that capitalise upon global opportunities.” Their interest in our work on physiological computing was to put this idea out among their community of local creatives and digital types.
I was initially apprehensive about the wisdom of this event. I’m quite used to talking about our work with others from the research community, on both the commercial and academic sides; what makes me slightly uncomfortable is talking about possible implementations, because I feel the available sensor apparatus and other tools are not yet advanced enough. I was also concerned about whether a day-long event on this topic would pull in a sufficient number of participants – what we do has always felt very “niche” to me. Anyhow, some smooth talking from Jason Taylor (our OpenLabs contact) and a little publicity in the form of this short podcast convinced us that we should give it our best shot.
Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed both the trip and the workshop. As I’d already prepared the presentation, I made a podcast to share with the workshop attendees. I dug it out of the vaults for this post because the combination of gaming and physiological computing is such an interesting topic.
The work is dated now, but basically I’m drawing a distinction between my understanding of BCI and biocybernetic adaptation: the former is an alternative means of input control within the HCI, while the latter can be used to adapt the nature of the HCI itself. I also argue that BCI is ideally suited to certain types of game mechanics precisely because it will not work 100% of the time. I used the TV series “Heroes” to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost enthusiasm for that show after series 1.
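The distinction can be sketched in a few lines of code. This is a minimal illustration of the two ideas as I describe them above, not an implementation of any real system; every function, label and threshold here is hypothetical and invented for the example:

```python
# Two toy loops contrasting BCI-as-input with biocybernetic adaptation.
# All names, labels and thresholds are hypothetical illustrations.

def bci_as_input(classified_intent):
    """BCI as an alternative input channel: the brain state *is* the command,
    just like a keypress or a mouse click."""
    commands = {"imagined_left": "move_left", "imagined_right": "move_right"}
    return commands.get(classified_intent, "no_op")

def biocybernetic_adaptation(difficulty, engagement_index):
    """Biocybernetic loop: the player never issues a command; instead the
    system quietly adapts the interaction to the inferred state."""
    if engagement_index < 0.3:        # player disengaged -> raise the challenge
        return min(difficulty + 1, 10)
    if engagement_index > 0.8:        # player overloaded -> ease off
        return max(difficulty - 1, 1)
    return difficulty                 # in the zone: leave well alone

print(bci_as_input("imagined_left"))     # explicit control
print(biocybernetic_adaptation(5, 0.2))  # implicit adaptation
```

The unreliability point falls out of the first function: when classification fails, `bci_as_input` produces a no-op, which is only tolerable if the game mechanic (a flaky superpower, say) makes occasional failure part of the fiction. The second loop, by contrast, fails gracefully because no single decision is visible to the player.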
The original CHI paper for this presentation is available here.
[iframe width="400" height="300" src="http://player.vimeo.com/video/32983880"]