We Need To Talk About Clippy

Everyone who used MS Office between 1997 and 2003 remembers Clippy.  He was a help avatar designed to interact with the user in a way that was both personable and predictive: a friendly sales assistant combined with a butler who anticipated all your needs.  At least, that was the idea.  In reality, Clippy fell well short of those expectations; he was probably the most loathed feature of those versions of Office.  He even featured in this Time Magazine list of the world’s worst inventions, a list that also includes Agent Orange and the Segway.

In an ideal world, Clippy would have responded to user behaviour in ways that were intuitive, timely and helpful.  In reality, his functionality was limited, his appearance often intrusive and his intuition way off.  Clippy irritated users so thoroughly that his legacy lives on more than ten years later.  If you describe the concept of an intelligent adaptive interface to most people, half will recall the dreadful experience of Clippy and the rest will probably think of HAL from 2001: A Space Odyssey.  With those kinds of role models, it’s not difficult to understand why users are in no great hurry to embrace intelligent adaptation at the interface.

In the years since Clippy passed, the debate around machine intelligence has placed greater emphasis on the improvisational spark that is fundamental to displays of human intellect.  This recent article in MIT Technology Review makes the point that a “conversation” with Eugene Goostman (the chatterbot that won a Turing Test competition at Bletchley Park in 2012) lacks the natural “back and forth” of human-human communication.  Modern expectations of machine intelligence go beyond a simple imitation game played within highly structured rules; users are looking for a level of spontaneity and nuance that resonates with their human sense of what other people are.

But one of the biggest problems with Clippy was not simply intrusiveness but the fact that his repertoire of responses was very constrained: he could ask if you were writing a letter (remember those?) and precious little else.
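To see just how thin that repertoire was, here is a minimal sketch in Python (my own illustration, not Microsoft’s actual logic): a single hard-coded trigger with a single canned response, and a blank stare for everything else.

```python
from typing import Optional

def clippy_respond(document_text: str) -> Optional[str]:
    """Return a canned offer of help if the text matches the one known trigger."""
    if document_text.lstrip().lower().startswith("dear"):
        return "It looks like you're writing a letter. Would you like help?"
    return None  # everything else the user does draws a blank

print(clippy_respond("Dear Sir or Madam, ..."))
```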

Continue reading


Can Physiological Computing Create Smart Technology?


The phrase “smart technology” has been around for a long time.  We have smart phones and smart televisions with functional capability that is massively enhanced by internet connectivity.  We also talk about smart homes that scale up into smart cities.  This hybrid between technology and the built environment promotes connectivity but with an additional twist – smart spaces monitor activity within their confines for the purposes of intelligent adaptation: to switch off lighting and heating if a space is uninhabited, to direct music from room to room as the inhabitant wanders through the house.
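As a rough sketch of the kind of rule such a space might run (the room names, sensor feed and responses are all hypothetical illustrations, not any particular product):

```python
# Toy occupancy-driven adaptation: services follow the inhabitant around.
rooms = {"kitchen": {"occupied": False}, "lounge": {"occupied": True}}

def adapt_environment(rooms: dict) -> None:
    """Switch services on or off based on monitored occupancy."""
    for name, state in rooms.items():
        if state["occupied"]:
            print(f"{name}: lights on, heating on, audio follows inhabitant")
        else:
            print(f"{name}: lights off, heating to standby, audio muted")

adapt_environment(rooms)
```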

If smart technology is equated with enhanced connectivity and functionality, do those things translate into an increase of machine intelligence?  In his 2007 book ‘The Design of Future Things’, Donald Norman defined the ‘smartness’ of technology with respect to the way in which it interacted with the human user.  Inspired by J.C.R. Licklider’s (1960) definition of man-computer symbiosis, he claimed that smart technology was characterised by a harmonious partnership between person and machine.  Hence, the ‘smartness’ of technology is defined by the way in which it responds to the user and vice versa.

One prerequisite for a relationship between person and machine that is cooperative and compatible is to enhance the capacity of technology to monitor user behaviour.  Like any good butler, the machine needs to increase its awareness and understanding of user behaviour and user needs.  The knowledge gained via this process can subsequently be deployed to create intelligent forms of software adaptation, i.e. machine-initiated responses that are both timely and intuitive from a human perspective.  This upgraded form of human-computer interaction is attractive to technology providers and their customers, but is it realistic and achievable, and what practical obstacles must be overcome?
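The basic shape of that upgraded interaction is a closed loop: monitor, infer, adapt.  A minimal sketch, assuming a single invented measure and threshold (the hard part in practice is getting the inference and the timing right):

```python
import random  # stands in for a real sensor feed

def sense() -> dict:
    """Hypothetical stand-in for continuous physiological/behavioural monitoring."""
    return {"heart_rate": random.randint(60, 110)}

def infer(sample: dict) -> str:
    """Map raw measures to a coarse user state (illustrative threshold)."""
    return "high_arousal" if sample["heart_rate"] > 95 else "normal"

def adapt(state: str) -> None:
    """Machine-initiated response; timeliness and intuitiveness are the real test."""
    if state == "high_arousal":
        print("Simplify the interface and defer notifications")

for _ in range(3):  # in a real system this loop runs continuously
    adapt(infer(sense()))
```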

Continue reading


What’s The Deal With Brain-to-Brain Interfaces?


When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was – don’t we already have those?  Didn’t we use to call them people?  But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym ‘B2B.’

For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies.  Input is represented by volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI.  This signal is converted into an input signal for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver.’
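Schematically, the pipeline looks something like the sketch below.  Every function here is a hypothetical placeholder (the real systems involve far more signal processing), but it captures the sender-classifier-stimulator chain:

```python
def classify_sender_intent(eeg_features: list) -> bool:
    """Return True if the sender's EEG shows the trained volitional pattern."""
    return sum(eeg_features) / len(eeg_features) > 0.5  # toy threshold

def fire_tms_pulse() -> None:
    """Placeholder for triggering the robotised TMS coil on the receiver's head."""
    print("TMS pulse delivered at the receiver's target cortical site")

eeg_features = [0.2, 0.7, 0.9, 0.6]  # fabricated feature values
if classify_sender_intent(eeg_features):
    fire_tms_pulse()
```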

TMS works by discharging an electrical current in brief pulses via a stimulating coil.  These pulses create a magnetic field that induces an electrical current in the surface of the cortex that is sufficiently strong to trigger neuronal depolarisation.  Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour.  You can find an introduction to TMS here (it’s an old pdf but freely available).
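For the physically minded, the induction step is standard Faraday induction (my summary, not taken from the linked pdf): the brief current pulse in the coil produces a rapidly changing magnetic field, which induces an electric field in the underlying tissue,

\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\]

so the faster the pulse (the larger \(\partial \mathbf{B} / \partial t\)), the stronger the induced field, and a sufficiently strong field depolarises the neurons beneath the coil.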

A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.

Continue reading


Forum for the Community for Passive BCI

A quick post to alert people to the first forum for the Community for Passive BCI Research, which takes place from the 16th to the 18th of July at the Hanse Institute for Advanced Study in Delmenhorst, near Bremen, Germany.  This event is being organised by Thorsten Zander from the Berlin Institute of Technology.

The main aim of the forum, in his own words, “is to connect researchers in this young field and to give them a platform to share their motivations and intentions. Therefore, the focus will not be primarily set on the presentation of new scientific results, but on the discussion of current and future directions and the possibilities to shape the community.”

Continue reading


Book Announcement – Advances in Physiological Computing

It was way back in 2011, during our CHI workshop, that we first discussed the possibility of putting together an edited collection for Springer on the topic of physiological computing.  It was clear to me at that time that many people associated physiological computing with implicit monitoring as opposed to the active control that characterises BCI.  When we had the opportunity to put together a collection, one idea was to extend the scope of physiological computing to include all technologies where signals from the brain and the body are used as a form of input.  Some may interpret this all-inclusive relabelling of physiological computing as a provocative move.  But we did not intend it as a conceptual ‘land-grab’; rather, it was an attempt to be as inclusive as possible and to bring together what I still perceive to be a rather disparate and fractured research community.  After all, we are all using psychophysiology in one form or another and share a common interest in sensor design, interaction mechanics and real-time measurement.

The resulting book is finally close to publication (tentative date: 4th April 2014) and you can follow this link to get the full details.  We’re pleased to have a wide range of contributions on an array of technologies, from eye input to digital memories via mental workload monitoring, implicit interaction, robotics, biofeedback and cultural heritage.  Thanks to all our contributors and the staff at Springer who helped us along the way.

 


Reflections on the First International Conference on Physiological Computing Systems


Last week I attended the first international conference on physiological computing, held in Lisbon.  Before commenting on the conference, I should note that I was one of the program co-chairs, so I am not completely objective – but as this was something of a watershed event for research in this area, I didn’t want to let it pass without comment on the blog.

The conference lasted for two-and-a-half days and included four keynote speakers.  It was a relatively small meeting with respect to the number of delegates – but that is to be expected from a fledgling conference in an area that is somewhat niche with respect to methodology but very broad in terms of potential applications.

Continue reading


What kind of Meaningful Interaction would you like to have? Pt 1

A couple of years ago we organised this CHI workshop on meaningful interaction in physiological computing.  As much as I felt this was an important area for investigation, I also found the topic very hard to get a handle on.  I recently revisited this problem while working on a co-authored book chapter with Kiel for our forthcoming Springer collection entitled ‘Advances in Physiological Computing’, due out next May.

On reflection, much of my difficulty revolved around the complexity of defining meaningful interaction in context.  For systems like BCI or ocular control, where input control is the key function, the meaningfulness of the HCI is self-evident.  If I want an avatar to move forward, I expect my BCI to translate that intention into analogous action at the interface.   But biocybernetic systems, where spontaneous psychophysiology is monitored, analysed and classified, are a different story.  The goal of such a system is to adapt in a timely and appropriate fashion, and evaluating the literal meaning of that kind of interaction is complex for a host of reasons.
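To make the contrast concrete, here is a sketch of the biocybernetic chain, with invented feature names and thresholds: spontaneous signals are reduced to features, classified into a state, and the state drives adaptation.  Note that, unlike the BCI case, nothing here is a direct translation of user intent.

```python
def extract_features(eda_samples: list, hr_samples: list) -> dict:
    """Reduce raw physiological traces to summary features (illustrative only)."""
    return {"eda_level": sum(eda_samples) / len(eda_samples),
            "hr_mean": sum(hr_samples) / len(hr_samples)}

def classify(features: dict) -> str:
    """Crude two-way split; a real classifier would be trained per user."""
    if features["eda_level"] > 4.0 and features["hr_mean"] > 90:
        return "overloaded"
    return "comfortable"

def adapt(state: str) -> None:
    if state == "overloaded":
        print("Reduce task demand / offer assistance")

adapt(classify(extract_features([4.2, 4.5], [92, 95])))
```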

Continue reading


The Epoc and Your Next Job Interview


Imagine you are waiting to be interviewed for a job that you really want.  You’d probably be nervous, fingers drumming the table, eyes restlessly scanning the room.  The door opens and a man appears; he is wearing a lab coat and holding an EEG headset in both hands.  He places the headset on your head and says “Your interview starts now.”

This Philip K Dick scenario became reality for intern applicants at the offices of TBWA, an advertising firm based in Istanbul.  And thankfully a camera was present to capture this WTF moment for each candidate, so the video could be uploaded to Vimeo.

The rationale for the exercise is quite clear.  The company want to appoint people who are passionate about advertising, so, working with a consultancy, they devised a test where candidates watch a series of acclaimed ads while the EPOC is used to measure their levels of ‘passion’, ‘love’ and ‘excitement’ in a scientific and numeric way.  Those who exhibit the greatest passion for adverts get the job (this is the narrative of the movie; in reality, one suspects/hopes they were interviewed as well).
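To be clear about what such a number could even be, here is a purely illustrative sketch of how a vendor-style ‘passion’ index might be composed from affect metrics; the weights, names and scales are my invention, not Emotiv’s actual algorithm.

```python
def passion_index(excitement: float, engagement: float, frustration: float) -> float:
    """Combine 0-1 affect scores into a single 0-100 'passion' number (invented weights)."""
    raw = 0.5 * excitement + 0.5 * engagement - 0.3 * frustration
    return max(0.0, min(100.0, 100 * raw))

print(passion_index(excitement=0.8, engagement=0.7, frustration=0.2))  # ~69
```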

I’ve seen at least one other blog post that expressed some reservations about the process.

Let’s take a deep breath because I have a whole shopping list of issues with this exercise.

Continue reading


Redundancy, Enhancement and the Purpose of Physiological Computing


There have been a lot of tweets and blog posts devoted to an article written recently by Don Norman for the MIT Technology Review on wearable computing.  The original article is here, but in summary, Norman points to an underlying paradox surrounding Google Glass and similar devices.  These technological artifacts are designed to enhance human abilities (allowing us to email on the move, navigate etc.); however, because of inherent limitations on the human information-processing system, they have significant potential to degrade aspects of human performance.  Think about browsing Amazon on your glasses whilst crossing a busy street and you get the idea.

The paragraph in Norman’s article that caught my attention, and which is most relevant to this blog, is this one:

“Eventually we will be able to eavesdrop on both our own internal states and those of others. Tiny sensors and clever software will infer their emotional and mental states and our own. Worse, the inferences will often be wrong: a person’s pulse rate just went up, or their skin conductance just changed; there are many factors that could cause such things to happen, but technologists are apt to focus upon a simple, single interpretation.”
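Norman’s point can be put in code form: a single physiological event supports many candidate causes, yet a naive system commits to just one.  A sketch with invented labels:

```python
CANDIDATE_CAUSES = ["fear", "excitement", "caffeine", "physical movement", "room temperature"]

def naive_inference(pulse_delta: float) -> str:
    """The 'simple, single interpretation' a technologist might hard-code."""
    return "stress" if pulse_delta > 10 else "calm"

def honest_inference(pulse_delta: float) -> list:
    """What the data actually supports: an undifferentiated set of causes."""
    return CANDIDATE_CAUSES if pulse_delta > 10 else ["baseline"]

print(naive_inference(15))   # -> 'stress' (overconfident)
print(honest_inference(15))  # -> the full list of plausible causes
```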

Continue reading


Comfort and Comparative Performance of the Emotiv EPOC


I’ve written a couple of posts about the Emotiv EPOC over the years of doing the blog, from user interface issues in this post to the uncertainties surrounding the device for customers and researchers here.

The good news is that research is starting to emerge where the EPOC has been systematically compared to other devices, and perhaps some uncertainties can be resolved. The first study, from Ekandem et al., was published in the journal Ergonomics in 2012. You can read an abstract here (apologies to those without a university account who can’t get past the paywall). These authors performed an ergonomic evaluation of both the EPOC and the NeuroSky MindWave. Data were obtained from 11 participants, each of whom wore either a MindWave or an EPOC for 15 minutes on different days. They concluded that there was no clear ‘winner’ from the comparison. The EPOC has 14 sites compared to the single site used by the MindWave, hence it took longer to set up and required more cleaning afterwards (and more consumables). No big surprises there. It follows that signal acquisition was easier with the MindWave, but the authors report that once the EPOC was connected and calibrated, its signal quality was more consistent than the MindWave’s, despite sensor placement for the former being obstructed by hair.

Continue reading
