If there are two truisms in the area of physiological computing, they are: (1) people always produce physiological data and (2) these data are continuously available. The passive nature of physiological monitoring and the relatively high fidelity of the data that can be obtained are among the reasons why we're seeing physiology and psychophysiology as candidates for Big Data collection and analysis (see my last post on the same theme). It is easy to see the appeal of physiological data in this context. To borrow a quote from Jaron Lanier's new book, "information is people in disguise", and we all have the possibility of gaining insight from the data we generate as we move through the world.
If I collect physiological data about myself, as Kiel did during the bodyblogger project, it is clear that I own that data. After all, the original ECG was generated by me and I went to the trouble of populating a database for personal use, so I don’t just own the data, I own a particular representation of the data. But if I granted a large company or government access to my data stream, who would own the data?
I attended a short conference event organised by the CEEDs project earlier this month entitled "Making Sense of Big Data." CEEDs is an EU-funded project under the Future and Emerging Technology (FET) Initiative, concerned with the development of novel technologies to support human experience. The event took place at the Google Campus in London and included a range of speakers talking about the use of data to capture human experience and behaviour. You can find a link about the event here that contains full details and films of all the talks, including a panel discussion. My own talk was a general introduction to physiological computing and a statement of our latest project work.
It was a thought-provoking day because it was an opportunity to view the area of physiological computing from a different perspective. The main theme was that we are entering the age of 'big data', in the sense that passive monitoring of people using mobile technology grants access to a wide array of data concerning human behaviour. Of course this is hugely relevant to physiological monitoring systems, which tend towards high-resolution data capture and may represent the richest vein of big data to index the human experience.
If there is one recurring problem for academics working in the area of physiological computing, it is finding the right place to publish. By the right place, I mean a forum that is receptive to multidisciplinary research and where you feel confident that you can reach the right audience. Having done a lot of reviewing of physiological computing papers, I see work that is often strong on measures/methodology but weak on applications; alternatively, papers tend to focus on interaction mechanics but are sometimes poor on the measurement side. The main problem lies with the expertise of the reviewers, who tend to be either psychologists or computer scientists, and it can be difficult for authors to strike the right balance.
For this reason, I'm writing to make people aware of The First International Conference on Physiological Computing to be held in Lisbon next January. The deadline for papers is 30th July 2013. A selection of papers will be published by Springer-Verlag as part of their Lecture Notes in Computer Science series. The journal Multimedia Tools & Applications (also published by Springer) will also select papers presented at the conference to form a special issue. In addition, a special issue of the journal Transactions on Computer-Human Interaction (TOCHI) on physiological computing is currently open for submissions; the cfp is here and the deadline is 20th December 2013.
I should also plug a new journal from Inderscience called the International Journal of Cognitive Performance Support which has just published its first edition and would welcome contributions on brain-computer interfaces and biofeedback mechanics.
First of all, apologies for our blog “sabbatical” – the important thing is that we are now back with news of our latest research collaboration involving FACT (Foundation for Art and Creative Technology) and international artists’ collective Manifest.AR.
To quickly recap, our colleagues at FACT were keen to create a new commission tapping into the use of augmented reality technology and incorporating elements of our own work on physiological computing. Our last post (almost a year ago now, to our shame) described the time we spent with Manifest.AR last summer and our show-and-tell event at FACT. Fast-forward to the present, and the Manifest.AR piece called Invisible ARtaffects opened last Thursday as part of the Turning FACT Inside Out show.
I am one of the organisers for a workshop event at ICMI 2012 entitled "BCI Grand Challenges." The deadline for submissions was originally this Friday (15th) but has now been extended until the 30th June. Full details are below.
With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct types of research community at work. The first (and oldest) community are university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks to embellish interactive experience, with a strong emphasis on commercialisation.
There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.
Way back in February, Kiel and I did an event called Body Lab in conjunction with our LJMU colleagues at OpenLabs. The idea for this event originated in a series of conversations between ourselves and OpenLabs about our mutual interest in digital health. The brief of OpenLabs is to “support local creative technology companies to develop new products and services that capitalise upon global opportunities.” Their interest in our work on physiological computing was to put this idea out among their community of local creatives and digital types.
I was initially apprehensive about the wisdom of this event. I'm quite used to talking about our work with others from the research community, from both the commercial and academic side – what makes me slightly uncomfortable is talking about possible implementations, because I feel the available sensor apparatus and other tools are not so advanced. I was also concerned about whether doing a day-long event on this topic would pull in a sufficient number of participants – what we do has always felt very "niche" in my view. Anyhow, some smooth-talking from Jason Taylor (our OpenLabs contact) and a little publicity in the form of this short podcast convinced us that we should give it our best shot.
Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed the trip and the workshop. As I’d prepared the presentation, I made a podcast for sharing with the workshop attendees. I dug it out of the vaults for this post because gaming and physiological computing is such an interesting topic.
The work is dated now, but basically I'm drawing a distinction between my understanding of BCI and biocybernetic adaptation. The former is an alternative means of input control within the HCI; the latter can be used to adapt the nature of the HCI. I also argue that BCI is ideally suited to certain types of game mechanics because it will not work 100% of the time. I used the TV series "Heroes" to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost all enthusiasm for that show after series 1.
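The contrast between the two modes can be sketched in a few lines of code. This is a toy illustration under my own assumptions (the function names, commands and threshold are all hypothetical, not anything from the presentation): a BCI maps a decoded signal onto an explicit command, whereas biocybernetic adaptation uses the signal to tune the interaction itself.

```python
# Toy contrast between BCI-as-input and biocybernetic adaptation.
# All names, commands and thresholds here are hypothetical illustrations.

def bci_input(eeg_command: str) -> str:
    """BCI as explicit input control: the decoded signal *is* the command."""
    mapping = {"imagine_left": "move_left", "imagine_right": "move_right"}
    # Classification is imperfect, so unrecognised signals fall through.
    return mapping.get(eeg_command, "no_op")

def biocybernetic_adaptation(engagement: float, difficulty: int) -> int:
    """Biocybernetic adaptation: the signal tunes the interaction itself,
    e.g. easing game difficulty when estimated engagement drops too low."""
    return max(1, difficulty - 1) if engagement < 0.4 else difficulty
```

The point of the first function is that misclassification maps naturally onto game mechanics where a power "fails" sometimes; the second never issues a command at all, it quietly reshapes the experience.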
The original CHI paper for this presentation is available here.
[iframe width="400" height="300" src="http://player.vimeo.com/video/32983880"]
[iframe width="400" height="300" src="http://player.vimeo.com/video/32915393"]
Last month I gave a presentation at the Annual Meeting of the Human Factors and Ergonomics Society held at Leeds University in the UK. I stood on the podium and presented the work, but really the people who deserve most of the credit are Marjolein van der Zwaag (from Philips Research Laboratories) and my own PhD student at LJMU Elena Spiridon.
You can watch a podcast of the talk above. The work was originally conducted as part of the REFLECT project at the end of 2010, and was inspired by earlier research on affective computing where the system makes an adaptation to alleviate a negative mood state. The rationale here is that any such adaptation will have beneficial effects – in terms of reducing the duration/intensity of negative mood – and, in doing so, will mitigate any undesirable effects on behaviour or the health of the person.
Our study was concerned with the level of anger a person might experience on the road. We know that anger causes 'load' on the cardiovascular system as well as undesirable behaviours associated with aggressive driving. In our study, we subjected participants to a simulated driving task that was designed to make them angry – this is a protocol that we have developed at LJMU. Marjolein was interested in the effects of different types of music on the cardiovascular system while the person is experiencing a negative mood state; for our study, she created four categories of music that varied in terms of high/low activation and positive/negative valence.
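The four music categories form a 2×2 design crossing activation with valence. The descriptive labels in the sketch below are my own shorthand for what such cells might contain, not Marjolein's actual stimulus set:

```python
# Hypothetical encoding of the 2x2 music design described above.
# The labels are illustrative shorthand, not the actual stimuli used.
MUSIC_CATEGORIES = {
    ("high", "positive"): "energetic/joyful",
    ("high", "negative"): "agitated/tense",
    ("low", "positive"): "calm/serene",
    ("low", "negative"): "subdued/melancholy",
}

def pick_music(activation: str, valence: str) -> str:
    """Look up the music category for a given activation/valence cell."""
    return MUSIC_CATEGORIES[(activation, valence)]
```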
The study does not represent an investigation into a physiological computing system per se, but is rather a validation study to explore whether an adaptation, such as selecting a certain type of music when a person is angry, can have beneficial effects. We’re working on a journal paper version at the moment.
[iframe width=”400″ height=”300″ src=”http://player.vimeo.com/video/25081038″]
Some months ago, I wrote this post about the REFLECT project that we participated in for the last three years. In short, the REFLECT project was concerned with research and development of three different kinds of biocybernetic loops: (1) detection of emotion, (2) diagnosis of mental workload, and (3) assessment of physical comfort. Psychophysiological measures were used to assess (1) and (2) whilst physical movement (fidgeting) in a seated position was used for the latter. And this was integrated into the ‘cockpit’ of a Ferrari.
The idea behind the emotional loop was to have the music change in response to emotion (to alleviate negative mood states). The cognitive loop would block incoming calls if the driver was in a state of high mental workload and air-filled bladders in the seat would adjust to promote physical comfort. You can read all about the project here. Above you’ll find a promotional video that I’ve only just discovered – the reason for my delayed response in posting this is probably vanity, the filming was over before I got to the Ferrari site in Maranello. The upside of my absence is that you can watch the much more articulate and handsome Dick de Waard explain about the cognitive loop in the film, which was our main involvement in the project.