Designing for the gullible

There’s a nice article in today’s Guardian by Charles Arthur regarding user gullibility in the face of technological systems.  In this case, he’s talking about the voice risk analysis (VRA) software used by local councils and insurance companies to detect fraud (see related article by the same author), which performs fairly poorly when evaluated, but is reckoned by the bureaucrats who purchased the system to be a huge money-saver.  The way it works is this – the operator receives a probability that the claimant is lying (based on “brain traces in the voice” – in reality, probably changes in the fundamental frequency and pitch of the voice), and on this basis may elect to ask more detailed questions.
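None of this should be read as the real VRA algorithm, whose internals are proprietary.  Purely as an illustration, here is a minimal Python sketch of the kind of decision logic described above: reduce the voice signal to a crude score based on pitch variation, and prompt the operator to probe further once a threshold is crossed.  The function names, features and threshold are all invented for the example.

```python
# Hypothetical sketch only - not the actual VRA algorithm.
# Assumption: the system boils down to "flag the call if pitch
# variability crosses a threshold".

import statistics

def risk_score(f0_samples):
    """Return a crude 0-1 'risk' score from per-frame fundamental frequency (Hz).

    Assumption: more variation in f0 relative to its mean is treated as
    more 'suspicious'. Real systems would use far richer acoustic features.
    """
    mean_f0 = statistics.mean(f0_samples)
    spread = statistics.pstdev(f0_samples)
    return min(spread / mean_f0, 1.0)

def operator_prompt(f0_samples, threshold=0.15):
    """Advise the operator whether to ask more detailed questions."""
    score = risk_score(f0_samples)
    if score >= threshold:
        return f"score={score:.2f}: ask more detailed questions"
    return f"score={score:.2f}: proceed as normal"

# Example: a calm, steady voice vs. one whose pitch wanders under stress.
print(operator_prompt([118, 120, 119, 121, 120]))   # low variability
print(operator_prompt([110, 160, 100, 170, 120]))   # high variability
```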

Charles Arthur makes the point that we’re naive and gullible when faced with a technological diagnosis.  And this is a fair point, whether it’s the voice analysis system or a physiological computing system providing feedback that you’re happy or tired or anxious.  Why do we tend to yield to computerised diagnosis?  In my view, you can blame science for that – in our positivist culture, cold objective numbers will always trump warm subjective introspection.  The first experimental psychologist, Wilhelm Wundt (1832-1920), pointed to this dichotomy when he distinguished between mediated and unmediated consciousness.  The latter is linked to introspection whereas the former demands the intervention of an instrument or technology.  If you go outside on an icy day and say to yourself “it’s cold today” – your consciousness is unmediated.  If you supplement this insight by reading a thermometer – “wow, two degrees below zero” – that’s mediated consciousness.  One is broadly true from that person’s perspective whereas the other is precise from the point of view of almost anyone.

The main point of today’s article is that we tend to trust a technological diagnosis even when the scientific evidence supporting system performance is flawed (as is claimed in the case of the VRA system).  Again, true enough – but in fairness, most users of the VRA never got the chance to review the system evaluation data.  The staff are trained to believe in the system by the company rep who sold it and showed them how to use it.  From the perspective of the customers, insurance staff may have suddenly started to ask a lot of detailed questions, which signalled that their stories were not believed, which probably made the customers agitated and anxious, thereby raising the pitch of their voices and turning them from possibles into definites.  The VRA system works very well in this context because nobody really knows how it works or even whether it works.

What does all this mean for physiological computing?  First of all, system designers and users must accept that psychophysiological measurement will never give a perfect, isomorphic, one-to-one model of human experience.  The system builds a model of the user state, not a perfect representation.  Given this restriction, system designers must be clever about how they provide feedback to the user.  Explicit and continuous feedback is likely to undermine the credibility of the system in the eyes of the user, because the inevitable errors of classification will be plainly exposed.  Users of physiological computing systems must be sufficiently well-informed to understand that feedback from the system is an educated assessment.
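To make the point concrete, here is a minimal, hypothetical sketch (in Python, with invented state labels and thresholds) of what hedged feedback could look like: the system reports its inference as an estimate with an attached level of confidence rather than as a flat statement of fact.

```python
# A sketch, not a prescription: phrase the system's inference as an
# educated assessment. The wording, labels and thresholds are invented
# for illustration.

def hedged_feedback(state: str, probability: float) -> str:
    """Turn a classified user state and its probability into hedged feedback."""
    if probability >= 0.8:
        return f"You seem to be {state} (the system is fairly confident)."
    if probability >= 0.6:
        return f"You may be {state}, but this is only an estimate."
    return f"The system cannot reliably judge whether you are {state} right now."

print(hedged_feedback("anxious", 0.85))
print(hedged_feedback("tired", 0.65))
print(hedged_feedback("happy", 0.45))
```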

The construction of physiological computing systems is a bridge-building exercise in some ways – a link between the nervous system and the computer chip.  Unlike similar constructions, this bridge is unlikely ever to meet in the middle.  For that to happen, the user must rely on his or her gullibility to make the necessary leap of faith and close the circuit.  Unrealistic expectations will lead to eventual disappointment and disillusionment, while conservative cynicism and suspicion will leave the whole physiological computing concept stranded at the starting gate – it’s up to designers to build interfaces that lead the user down the middle path.
