Tag Archives: affective computing

We Need To Talk About Clippy

Everyone who used MS Office between 1997 and 2003 remembers Clippy.  He was a help avatar designed to interact with the user in a way that was both personable and predictive: a friendly sales assistant combined with a butler who anticipated your every need.  At least, that was the idea.  In reality, Clippy fell well short of those expectations; he was probably the most loathed feature of those versions of Office, and he even featured in this Time Magazine list of the world’s worst inventions, a list that also includes Agent Orange and the Segway.

In an ideal world, Clippy would have responded to user behaviour in ways that were intuitive, timely and helpful.  In reality, his functionality was limited, his appearance often intrusive and his intuition way off.  Clippy irritated so completely that his legacy lives on more than ten years later.  Describe the concept of an intelligent adaptive interface to most people and half of them will recall the dreadful experience of Clippy, while the rest will probably be thinking of HAL from 2001: A Space Odyssey.  With role models like those, it’s not difficult to understand why users are in no great hurry to embrace intelligent adaptation at the interface.

In the years since Clippy passed, the debate around machine intelligence has placed greater emphasis on the improvisational spark that is fundamental to displays of human intellect.  This recent article in MIT Technology Review makes the point that a “conversation” with Eugene Goostman (the chatterbot who won a Turing Test competition at Bletchley Park in 2012) lacks the natural “back and forth” of human-human communication.  Modern expectations of machine intelligence go beyond a simple imitation game played within highly structured rules; users are looking for a level of spontaneity and nuance that resonates with their human sense of what other people are.

But one of the biggest problems with Clippy was not simply his intrusiveness but the fact that his repertoire of responses was so constrained: he could ask if you were writing a letter (remember those?) and precious little else.


Reflections on first International Conference on Physiological Computing Systems


Last week I attended the first international conference on physiological computing held in Lisbon.  Before commenting on the conference, it should be noted that I was one of the program co-chairs, so I am not completely objective – but as this was something of a watershed event for research in this area, I didn’t want to let the conference pass without comment on the blog.

The conference lasted for two-and-a-half days and included four keynote speakers.  It was a relatively small meeting with respect to the number of delegates – but that is to be expected from a fledgling conference in an area that is somewhat niche with respect to methodology but very broad in terms of potential applications.


The Epoc and Your Next Job Interview


Imagine you are waiting to be interviewed for a job that you really want.  You’d probably be nervous, fingers drumming the table, eyes darting restlessly around the room.  The door opens and a man appears; he is wearing a lab coat and holding an EEG headset in both hands.  He places the headset on your head and says “Your interview starts now.”

This Philip K. Dick scenario became reality for intern applicants at the offices of TBWA, an advertising firm based in Istanbul.  Thankfully, a camera was present to capture each candidate’s WTF moment, so the video could be uploaded to Vimeo.

The rationale for the exercise is quite clear.  The company want to appoint people who are passionate about advertising, so, working with a consultancy, they devised a test in which candidates watch a series of acclaimed ads while the Epoc is used to measure their levels of ‘passion’, ‘love’ and ‘excitement’ in a scientific and numeric way.  Those who exhibit the greatest passion for adverts get the job (that is the narrative of the movie, at least; in reality one suspects/hopes they were interviewed as well).

I’ve seen at least one other blog post that expressed some reservations about the process.

Let’s take a deep breath because I have a whole shopping list of issues with this exercise.


Redundancy, Enhancement and the Purpose of Physiological Computing


There have been a lot of tweets and blog posts devoted to an article written recently by Don Norman for the MIT Technology Review on wearable computing.  The original article is here, but in summary, Norman points to an underlying paradox surrounding Google Glass and similar devices.  These technological artifacts are designed to enhance human abilities (allowing us to email on the move, navigate, etc.); however, because of inherent limitations of the human information-processing system, they also have significant potential to degrade human performance.  Think about browsing Amazon on your glasses whilst crossing a busy street and you get the idea.

The paragraph in Norman’s article that caught my attention, and is most relevant to this blog, is this one:

“Eventually we will be able to eavesdrop on both our own internal states and those of others. Tiny sensors and clever software will infer their emotional and mental states and our own. Worse, the inferences will often be wrong: a person’s pulse rate just went up, or their skin conductance just changed; there are many factors that could cause such things to happen, but technologists are apt to focus upon a simple, single interpretation.”
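Norman’s point is essentially the classic one-to-many mapping problem in psychophysiology: a single physiological change is consistent with many underlying psychological (and non-psychological) states.  A toy sketch can make the contrast concrete; the thresholds and cause lists below are purely illustrative, not taken from any real system.

```python
def naive_inference(delta_hr, delta_scl):
    """The kind of simplistic rule Norman warns about:
    pulse up + skin conductance up -> one confident label."""
    if delta_hr > 5 and delta_scl > 0.05:
        return "stressed"
    return "calm"


def plausible_causes(delta_hr, delta_scl):
    """The same data admit many explanations; the honest
    answer is a set of candidates, not a single point estimate."""
    causes = []
    if delta_hr > 5:  # heart rate rose by more than 5 bpm
        causes += ["anxiety", "excitement", "physical exertion", "caffeine"]
    if delta_scl > 0.05:  # skin conductance level rose
        causes += ["emotional arousal (positive or negative)", "ambient heat"]
    return causes


# The naive system commits to one interpretation...
print(naive_inference(12, 0.1))
# ...while the underlying signals underdetermine the state.
print(plausible_causes(12, 0.1))
```

The point of the sketch is not the particular thresholds but the shape of the problem: any system that collapses `plausible_causes` into a single label will, as Norman says, often be wrong.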


Data Trading, Body Snooping & Insight from Physiological Data


If there are two truisms in the area of physiological computing, they are: (1) people will always produce physiological data, and (2) these data are continuously available.  The passive nature of physiological monitoring and the relatively high fidelity of the data that can be obtained are one reason why we’re seeing physiology and psychophysiology as candidates for Big Data collection and analysis (see my last post on the same theme).  It is easy to see the appeal of physiological data in this context; to borrow a quote from Jaron Lanier’s new book, “information is people in disguise”, and we all have the possibility of gaining insight from the data we generate as we move through the world.

If I collect physiological data about myself, as Kiel did during the bodyblogger project, it is clear that I own that data.  After all, the original ECG was generated by me and I went to the trouble of populating a database for personal use, so I don’t just own the data, I own a particular representation of the data.  But if I granted a large company or government access to my data stream, who would own the data?


Troubleshooting and Mind-Reading: Developing EEG-based interaction with commercial systems

With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct research communities at work. The first (and older) community consists of university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks that embellish the interactive experience, with a strong emphasis on commercialisation.

There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.


Mood and Music: effects of music on driver anger

[iframe width="400" height="300" src="http://player.vimeo.com/video/32915393"]

Last month I gave a presentation at the Annual Meeting of the Human Factors and Ergonomics Society held at Leeds University in the UK.  I stood on the podium and presented the work, but really the people who deserve most of the credit are Marjolein van der Zwaag (from Philips Research Laboratories) and my own PhD student at LJMU Elena Spiridon.

You can watch a podcast of the talk above.  The work was originally conducted as part of the REFLECT project at the end of 2010 and was inspired by earlier research on affective computing in which the system makes an adaptation to alleviate a negative mood state.  The rationale is that any such adaptation will have beneficial effects in terms of reducing the duration/intensity of negative mood and, in doing so, will mitigate any undesirable effects on the behaviour or health of the person.

Our study was concerned with the level of anger a person might experience on the road.  We know that anger places ‘load’ on the cardiovascular system and is associated with undesirable, aggressive driving behaviours.  In our study, we subjected participants to a simulated driving task designed to make them angry – a protocol that we have developed at LJMU.  Marjolein was interested in the effects of different types of music on the cardiovascular system while the person is experiencing a negative mood state; for our study, she created four categories of music that varied in terms of high/low activation and positive/negative valence.

The study does not represent an investigation into a physiological computing system per se, but is rather a validation study to explore whether an adaptation, such as selecting a certain type of music when a person is angry, can have beneficial effects.  We’re working on a journal paper version at the moment.


Physiological Computing, Challenges for Developers and Users.

I recently received a questionnaire from the European Parliament, or rather its STOA panel, with respect to developments in physiological computing and their implications for social policy.  The European Technology Assessment Group (ETAG) is working on a study entitled “Making Perfect Life”, which includes a section on biocybernetic adaptation as well as BCI and other kinds of “assistive” technology.  The accompanying email told me the questionnaire would take half an hour to complete (it didn’t), but it asked some interesting questions, particularly surrounding the views of the general public about this technology and issues surrounding data protection.

I’ve included a slightly-edited version of the questionnaire with my responses. Questions are in italics.

CFP – 2nd Workshop on Affective Brain-Computer Interfaces (aBCI)

Workshop at ACII 2011

http://hmi.ewi.utwente.nl/abci2011

http://www.acii2011.org

The second workshop on affective brain-computer interfaces will explore the advantages and limitations of using neuro-physiological signals as a modality for the automatic recognition of affective and cognitive states, and the possibilities of using this information about the user state in innovative and adaptive applications. The goal is to bring researchers from the communities of brain computer interfacing, affective computing, neuro-ergonomics, affective and cognitive neuroscience together to present state-of-the-art progress and visions on the various overlaps between those disciplines.


Road rage, unhealthy emotions and affective computing

From the point of view of an outsider, the utility and value of computer technology that provides emotional feedback to the human operator is questionable.  The basic argument normally goes like this: even if the technology works, do I really need a machine to tell me that I’m happy or angry or calm or anxious or excited?  First of all, the feedback provided by this machine would be redundant; I already have a mind/body that keeps me fully appraised of my emotional status – thank you.  Secondly, if I’m angry or frustrated, do you really think I would be helped in any way by a machine that drew my attention to these negative emotions?  Actually, that would be particularly annoying.  Finally, sometimes I’m not quite sure how I’m feeling or how I feel about something; feedback from a machine that says I’m happy or angry would just muddy the waters and add further confusion.
