I recently received a questionnaire from the European Parliament, or rather its STOA panel, concerning developments in physiological computing and their implications for social policy. The European Technology Assessment Group (ETAG) is working on a study titled “Making Perfect Life”, which includes a section on biocybernetic adaptation as well as BCI and other kinds of “assistive” technology. The accompanying email told me the questionnaire would take half an hour to complete (it didn’t), but they asked some interesting questions, particularly surrounding the views of the general public about this technology and issues surrounding data protection.
I’ve included a slightly-edited version of the questionnaire with my responses. Questions are in italics.
Section 1: State of the art and future directions
“Biocybernetically adaptive artefacts” use data about the changing affective, physiological or neurophysiological state of the user in order to change their own functionality and/or appearance accordingly. Would you say that your research is concerned with “biocybernetic adaptation”?
Mainly, among other research topics.
“There is a noticeable excitement among computer scientists, especially in the HCI community with respect to physiological computing (e.g. sections on “Brain-Computer Interfaces for HCI” at CHI-2008 and “Brain and Body Interfaces” at CHI-2011). In this vein Fairclough (2009) states “Physiological computing has the potential to provide a new paradigm for HCI by allowing a computer system to develop and access a dynamic representation of the cognitions, emotions and motivation of the user”. Are you inclined to share these high hopes and to acknowledge the great potential of these technologies, or would you consider them first of all as hype phenomena?
Please explain your point of view:”
As the author of the quote above, let me defend my position against any accusations of hype. If we want our computer systems to be smarter, more autonomous or to demonstrate higher levels of “intelligence”, we need to enhance the sensitivity of technology to the context of the user; in order to do that, computers require a representation of the state or behavioural context of the user. Physiological computing represents a means of implicitly monitoring the psychological state of the user in order to inform computer adaptation and automation. This is a new paradigm for human-computer interaction because there is a real dialogue or two-way exchange of information between user and system. Thus, the system responds to changes in user state, which is subsequently altered by events at the interface, and so on. The net effect of this reflexive interaction is that computers will no longer respond in a deterministic and totally predictable fashion; instead, the behaviour of technology at the interface will be more probabilistic in nature – this would be a huge paradigm change. In addition, as events at the interface respond to a machine classification of user state, the system functions as a ‘mirror’ for the user, with implications for psychological self-awareness and self-regulation.
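To make the closed-loop idea concrete, here is a minimal sketch of a biocybernetic loop in Python. It is not drawn from any particular system: the sensor reading is simulated, and the state labels, thresholds and adaptation rules are invented purely for illustration.

```python
import random
import time

def read_physiology():
    """Stand-in for a real sensor API: returns a simulated heart rate (bpm)."""
    return {"heart_rate": random.gauss(75, 10)}

def classify_state(signals):
    """Map raw signals to a coarse psychological label.
    A real system would use a validated, individually calibrated classifier."""
    hr = signals["heart_rate"]
    if hr > 90:
        return "overloaded"
    if hr < 65:
        return "bored"
    return "engaged"

def adapt_interface(state):
    """The system's half of the 'dialogue': adjust the interface
    to the inferred user state."""
    actions = {
        "overloaded": "simplify display / suspend notifications",
        "bored": "increase task difficulty",
        "engaged": "no change",
    }
    print(f"user state: {state} -> {actions[state]}")

# The biocybernetic loop: user state shapes the interface, and the
# adapted interface in turn shapes subsequent user state.
for _ in range(5):
    adapt_interface(classify_state(read_physiology()))
    time.sleep(1)
```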
“Biocybernetic adaptation is sometimes described by its proponents as a means of achieving greater symmetry between humans and machines, because both human and computer are then able to enter a kind of dialogue and to get information about the status of the other. Is symmetrical HCI a robust and useful guiding vision for further development of HCI research?
Please explain your point of view:”
My answer to this question is ‘yes’. As technology becomes more pervasive and powerful (both in terms of autonomy and the capability to deliver information), we need a new model for HCI where IT works less as a deaf and blind slave system and more like a team player equipped with a tacit awareness of how best to serve the dynamic needs of the user. The benefits of this development are systems that are ‘smarter’ with respect to how and when they deliver information, or systems that actively attempt to enhance safety and wellbeing. However, these potential enhancements will come at a price – symmetrical HCI means that users must engage in a more intimate relationship with technology (in terms of sharing information about private or personal experiences) and that computer systems will exercise a greater level of autonomy.
“What will be the single major application area where biocybernetic adaptation will be first applied on a large scale?
Please explain your choice and the underlying consideration:”
At the time of writing, the vast majority of sensors for physiological measurement still tend to be intrusive and generally not very comfortable for long-term use. Therefore, users who can perceive specific benefits will be the early adopters of this kind of technology. The two application areas where users can see definite benefits will be computer games, where physiological computing can add new modes of control and enhanced gaming dynamics, and telemedicine, where the availability of sensor apparatus allows both patients and medical professionals to monitor health status.
“What are the most relevant application fields where biocybernetically adaptive artefacts are already in use or will be in use in the (not too far) future? (healthcare, education and learning, entertainment, safety critical systems, driver assistance systems etc.?).
I see a great future for biocybernetically adaptive systems in (please check) :
healthcare, education and learning, entertainment & games, safety critical systems, assistance systems (e.g. driver assistance), others
Biocybernetically adaptive systems will remain a marginal phenomenon in (please check) :
healthcare, education and learning, entertainment & games, safety critical systems, assistance systems (e.g. driver assistance), others
Please comment your choices:”
The great future for physiological computing is theoretical at present based upon an assumption that the costs of the technology will be justified by the enhancements delivered by the systems. This “great future” will remain theoretical unless we develop a multidisciplinary research base where psychologists, engineers and computer scientists work together to develop this kind of technology; at the moment, the area seems somewhat fractured and that is a cause for concern with respect to whether the potential of this technology will be realised.
“Do you know of any study where experience and/or expectations of (potential) users have already been addressed? If so, what has precisely been investigated?”
There have been studies in which physiological computing systems have been developed and evaluated as prototypes – particularly with respect to adaptive automation and gaming technology. Most report that performance and engagement with the task (e.g. a game or simulated aviation task) were enhanced as a result. Where researchers have studied the use of biocybernetic adaptation to drive automation, they have reported a reduction of mental workload, which is one positive effect of system automation.
“Outside human-computer-interaction physiological computing is discussed as a means for biometric surveillance, e.g. to recognise criminal intent automatically. Do you think that surveillance is a likely, suitable and desirable application field for physiological and affective computing?
Please explain your point of view:”
Where there is the opportunity to monitor physiology, there is the potential to attach psychological concepts to the resulting pattern of data. I have written in the past that this whole area of technology was developed in the shadow of the polygraph. My own feeling is that detection of criminal activity is an unsuitable and undesirable application for physiological computing. It is undesirable because the monitoring of physiology that is essential for this technology should only be performed with the full consent of the individual; I would also argue that users should retain full control over their own data streams with respect to sharing that information with others. It is unsuitable because this kind of technology can only function with any degree of precision where the link between physiological activity and psychological concepts is fully validated in the field as well as the laboratory, and I know of no research where the intention to engage in criminal activity has been identified in either setting. This may not mean that the development of such a surveillance system is unlikely – but I have strong reservations about whether it could work with any accuracy and without violating the data protection rights of individuals.
“Outside human-computer-interaction physiological computing is also thought of as a means to evaluate the user or consumer behaviour of software, media products, and even physical products. “Neuro-marketing” is one of the recent buzzwords in this context. How realistic is the use of physiological computing for consumer research and related purposes?
Please explain your point of view:”
Much depends on how we define consumer behaviour in this context. It should certainly be possible to capture psychophysiological responses to consumer items in terms of the level of physiological activation provoked by a specific item. It may also be possible to capture variables such as emotional valence and approach/avoidance motivation in response to items. These implicit responses may inform consumer research, but testing would have to be highly systematic in order to yield clear and unambiguous results. For example, the type of consumer item under consideration must be carefully controlled – if we were to present a user with four items (a laptop, a mobile telephone, a handgun and a sex toy), the latter two would provoke the strongest response with respect to physiological activation regardless of whether users wished to purchase them or not.
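To put a number on that confound, here is a toy sketch (every value is invented): raw activation is dominated by the category of item, so a consumer-research analysis would need to compare each item against a baseline for its own category rather than in absolute terms.

```python
from statistics import mean

# Hypothetical skin-conductance responses (arbitrary units) to items;
# all numbers invented for illustration.
responses = {
    "laptop": 2.1, "mobile phone": 1.9,   # low-arousal category
    "handgun": 6.8, "sex toy": 7.2,       # high-arousal category
}
category = {"laptop": "neutral", "mobile phone": "neutral",
            "handgun": "provocative", "sex toy": "provocative"}

# Raw activation is dominated by item type, not purchase intent...
print(max(responses, key=responses.get))  # -> 'sex toy'

# ...so compare each item against its own category baseline instead.
baselines = {c: mean(v for k, v in responses.items() if category[k] == c)
             for c in set(category.values())}
corrected = {k: v - baselines[category[k]] for k, v in responses.items()}
print(corrected)
```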
“What are the major economic and social barriers to the diffusion of technologies incorporating biocybernetic adaptation (lack of demand, costs, data protection/privacy, problems related to the autonomy of human users, impact on the self-perception of humans)?
Please describe in detail:”
In my view, the greatest economic barrier is the difficulty of developing a system that unambiguously delivers enhanced value via physiological computing. At the moment, the required sensors would constitute an additional peripheral device, and that additional cost would have to be justified. The largest social barrier, I think, is the absence of a totally unobtrusive, discreet and comfortable sensor technology. The other factors listed in the question, such as data protection, autonomy and self-perception, are not major issues until we have sensor technology that is acceptable to the general public and software that provides users with an incentive to wear these sensors.
“Where are the main scientific challenges in physiological computing research? In other words: What are the essential scientific breakthroughs required to advance the whole research field?
Please explain:”
The essential scientific developments fall into several categories: (1) as stated earlier, we need sensor technology that is comfortable, unobtrusive, capable of delivering robust data in the field and aesthetically appealing; (2) we need more work on real-time algorithms that capture psychophysiological measures whilst dealing with artifacts and confounds; (3) we need more work on evolutionary algorithms where signal classification is tailored to the individual user (so the algorithm grows with the user); and (4) we need innovations in software design where a repertoire of adaptive responses is available in order to tailor the system response to the context of the user.
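As a toy illustration of point (3), the sketch below maintains a running personal baseline so that classification is relative to the individual rather than to a population norm. To be clear, this uses a simple moving average rather than a true evolutionary algorithm, the measure and thresholds are invented, and a real system would also reject artifacts (point 2) before updating.

```python
class AdaptiveBaseline:
    """Toy illustration of per-user tailoring: classification is made
    relative to a running personal baseline rather than a fixed
    population norm, so the model 'grows' with the user."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha       # adaptation rate
        self.baseline = None     # learned per-user resting level

    def update(self, sample):
        """Fold each new sample into the user's baseline (an
        exponentially weighted moving average)."""
        if self.baseline is None:
            self.baseline = sample
        else:
            self.baseline += self.alpha * (sample - self.baseline)

    def classify(self, sample, threshold=10.0):
        """Label a sample as elevated relative to *this* user's baseline."""
        return "elevated" if sample - self.baseline > threshold else "normal"

# Example: the same absolute heart rate can mean different things for
# different users once each has a personal baseline.
model = AdaptiveBaseline()
for hr in [62, 64, 63, 61, 65]:   # calibration samples for one user
    model.update(hr)
print(model.classify(80))          # -> 'elevated' for this user
```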
“Without doubt, thinking of future biocybernetically adaptive artefacts based on affective, physiological and neurophysiological computing, ethical concerns arise such as most subtle Orwellian surveillance scenarios, a nightmare for privacy, a horror of subconscious manipulation of thoughts, a loss of human autonomy because machines are guiding and nurturing us. Do you think any of these risks and threats is already part of our present day reality?
Which of these risks or threats are already here or imminent? Please explain your point of view:”
There is an analogy between these kinds of concerns about physiological computing and existing systems, such as Amazon and iTunes, which monitor purchasing patterns in order to make recommendations. In both cases, surveillance is performed on the buying patterns of the person, which are fed to an algorithm in order to make recommendations about future purchases. These systems reduce autonomy in order to nurture future buying patterns. However, they are tolerated because the benefit attained (time saved searching) outweighs the reduction of privacy and autonomy. Physiological computing systems must deliver the same kind of utilitarian trade-off in order to be acceptable to the public.
“There are different approaches to privacy. In a sense privacy and data protection are just two elements of the social controllability and thus acceptability of technology (Steinmüller 1971). In this view the relation between organizations (private or public) developing and deploying technology on the one hand and users or citizens on the other hand has to be designed. To this end “privacy by design”, privacy enhancing technologies, and legal regulations and requirements (e.g. of transparency or facilities for intervention) may be helpful. What is your understanding of privacy and what type of intervention and framing would you deem necessary and promising with respect to physiological computing applications?
Please tell:”
As I implied in an earlier answer, my view is that a contract for data sharing must be clearly defined prior to any interaction with a physiological computing system. This contract must clearly define: (1) who has access to the data stream besides the user; (2) whether the data is stored and, if so, where, and who has access to this repository; (3) under what conditions data is shared with others; and (4) the anonymity of the data record, i.e. when data is shared, can it be traced to an individual or IP address etc. In my view, control over (1) and (2) must always be determined by the user, and the user should be able to make adjustments to these parameters at any time without any need for explanation. Like all information communication technologies, physiological computing would be susceptible to hacking, and appropriate software security must be developed. This is the most personal and private category of data, and my feeling is that users will require clear information about privacy and data protection that places them in full control before they will be willing to use these systems.
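One way to think about such a contract is as an explicit, machine-readable consent object that the user can inspect and revoke at any time. The sketch below is purely illustrative – the field names and the ‘revoke’ operation are my own invention rather than any existing standard:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataSharingContract:
    """Hypothetical machine-readable form of the four elements above;
    all field names and values are invented for illustration."""
    stream_access: List[str] = field(default_factory=list)       # (1) live access
    storage_enabled: bool = False                                # (2) stored at all?
    storage_location: Optional[str] = None                       # (2) where
    storage_access: List[str] = field(default_factory=list)      # (2) who can read it
    sharing_conditions: List[str] = field(default_factory=list)  # (3) when shared
    anonymised: bool = True                                      # (4) traceable or not

    def revoke_all(self):
        """The user can withdraw consent at any time, with no explanation
        required: reset every permission to its most private setting."""
        self.stream_access.clear()
        self.storage_enabled = False
        self.storage_location = None
        self.storage_access.clear()
        self.sharing_conditions.clear()
        self.anonymised = True

# Example: grant a physician read access to locally stored data only...
contract = DataSharingContract(storage_enabled=True,
                               storage_location="local device",
                               storage_access=["my_physician"])
contract.revoke_all()  # ...and later withdraw consent entirely.
```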
“There are optimistic statements, e.g. by Rosalind Picard, that control of the system should always lie with the user. But this might be impracticable and inapplicable for two reasons: first the vision of an invisible, smart interface implies a need for at least temporary intransparency. Otherwise it could not fulfil its purpose. Second, many envisaged application areas assume interaction between a computer system and a person with reduced autonomy (e.g. handicapped persons, children).
In your opinion, how can we cope with these data protection dilemmas, if at all:”
I do not agree with your first statement – simply because a technology is pervasive and implicit, it does not follow that normal controls over data protection and privacy should be withdrawn. If my body network is being asked to share information with other systems (in my car, on public transport, or from buildings) as I move around, I can either decide which systems to share information with as a pre-set or I can be asked. As for the second statement, this seems to me to be a generalisation. If handicapped persons are capable of interacting with this kind of technology for any task, then surely they are capable of exerting control over data protection and privacy in the same way that a non-handicapped person could? As for children, the control of data sharing should be authorised by a parent, carer or teacher.
“A major concern of citizens confronted with visions of intelligent interfaces and neurophysiological computing is that computer systems will be able to “read their minds“ (among others, a finding of the EU project ETICA: http://moriarty.tech.dmu.ac.uk:8080/). How realistic is this vision from your point of view? How close are we to realizing it? Or does this discussion belong to the realm of highly speculative ethics (in the sense as discussed by Nordmann/Rip in Nature Nanotechnology 4, 273 – 274 (2009) http://www.nature.com/nnano/journal/v4/n5/pdf/nnano.2009.26.pdf)?
Please explain your point of view:”
I do not think that physiological computing systems can “read minds” but on the other hand, these issues are not completely speculative. A large part of the problem here is the media and the way in which research in this area is conveyed to the public, generally in the most sensationalist language and framed within the context of science fiction movies. People need to understand that measuring psychophysiological data will deliver a quantitative characterisation of a psychological state or experience. These data and the actual experience of the person are not equivalent. These data are quantitative, crude and relatively impoverished in comparison to the richness of embodied experience. In these terms, the computer cannot access our innermost thoughts, desires and intentions. However, there must be a connection between psychological activity and physiological reactivity for biocybernetic systems to function, therefore, it is true to say that the technology will ‘read minds’ but only with respect to a crude, impoverished representation of the mind. Nevertheless, the capability of technology to represent the mind of the user, even in this crude form, is likely to be a cause for concern – particularly if these data are shared with others or represented at the interface in a public space.
“Affective computing, physiological computing and neuro-physiological computing are related concepts. It is however not clear if the difference between the three concepts requires separate ethical considerations, e.g. the ethics concerns with respect to brain-computer interfaces may be rather different from those issues of computer applications displaying emotional cues. How would you frame and approach the unity of concerns vs. the uniqueness of each concept?
Some short hints are sufficient here:”
All these systems (including BCI) involve streaming physiological data from the user – therefore, the fundamental ethical considerations regarding privacy and data protection associated with data collection are identical. For physiological computing, ethical issues surround how data on psychological experience and health are displayed at the interface and how these measures and their derivatives are shared with other users. In this respect, I do not see a differentiation between the three terms used above. All are concerned with monitoring spontaneous changes in physiology to provide a dynamic representation of the user state in order to inform software adaptation.
Nice to read, and surely interesting to have yourself quoted at yourself.
I am interested in your response to the privacy issue. You suggest that in this case there should be a “contract for data sharing” that is entered into. However, don’t you see a potential problem here that is similar to the EULA problems we have today with many software packages, in that they are so long and so common that nobody reads them anyway – yet they often contain clauses like those you suggest, outlining what will be done with personal data?
I personally would favour more of a privacy-by-design type solution, as mentioned in the question. All devices should be designed to protect personal privacy first, and then personal information of any type would only be made available to companies/corporations/governments (if at all) by explicitly opting in, in a very clear and meaningful way.
Hi Ben – I was initially tempted to disagree with myself – but good sense got the better of me. With respect to the privacy issue, I agree about the EULA problems, hence my use of the word ‘clear’ in my response; I was not thinking of the EULA-type contract that nobody reads. The privacy-by-design solution is perhaps a better starting point for data protection, but it is also far from perfect. For instance, I recently signed up for Google+, which immediately requested my location – I consented without much thought, but then I wondered what happens if I deny such a request – will certain services be denied to me? I do worry that users could be denied access by service providers if they refuse to share their data. But my main point is that data access must be completely under the control of the user – whichever mechanism best serves that need remains to be defined. Best, Steve
Privacy by design is not going to remove the need for a contract between the user and the company holding their personal data (whether as an online or offline service). It simply indicates that they take their privacy policy seriously.
As to whether the contract may leave the user’s data open to abuse, there are several instances where the public at large have forced a company’s hand to revise the more dubious elements of such policies. A recent example was the appearance of a new clause in DropBox’s Terms of Service (ToS) which suggested they could use your personal data for whatever they wanted; this was subsequently revised after their users complained. Given that, I’m not too worried about companies hiding dubious data-sharing policies inside contracts I might miss by clicking “I Agree” without a second thought – though I do actually read them :).
A more troubling question, from my point of view, is how we secure the interpretation of the user’s physiological data over the web. The online storage of physiological data has primarily come from medical and fitness-related applications, which have a very strict form of interpretation given the stakeholders who have access to this information, e.g. doctors. However, as other forms of data are collected and shared, we run the risk of losing the ability to control that interpretation as new stakeholders become involved. For example, imagine playing L4D2 where each player can see the others’ heart rates. During an attack the interpretation is relatively straightforward: activation in response to the zombie horde. However, during the quiet periods, the context of what the player characters are each looking at and the chatter between players adds a new layer of interpretation to the changes in the displayed signals.
The question then becomes: if control is required, how much can we control for, and is that which we cannot control an acceptable risk of the system’s use?
For more information on this topic, check out Issues inherent in controlling the Interpretation of the Physiological Cloud and a lie detection game for Wii Vitality.
I have to partly disagree, Kiel. Privacy by design means you take a starting point of never holding the data in the first place. One such example is some of the pay-as-you-drive insurance GPS devices. These devices are a potential privacy nightmare, as they rely on tracking when, where, and how people are driving in order to calculate the insurance they must pay.
There are several “privacy by design” solutions to this, however, which are technical but basically come down to the fact that the insurance company *never* gets any of the raw data; rather, they receive privacy-free, outcome-based information that has already been transformed (into a cost).
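Roughly, the idea looks something like this (a toy sketch with a completely made-up pricing rule): the raw trace is reduced to a cost on the user’s device, and only that figure ever leaves it.

```python
def compute_premium(gps_trace, base_rate=0.10):
    """Runs on the user's device: reduces the raw trace to a single
    cost figure. The pricing rule here is made up for illustration."""
    risky_miles = sum(seg["miles"] for seg in gps_trace
                      if seg["speed_mph"] > 70 or seg["night"])
    safe_miles = sum(seg["miles"] for seg in gps_trace) - risky_miles
    return round(base_rate * safe_miles + 3 * base_rate * risky_miles, 2)

# Raw trace (when/where/how the user drove) stays on the device...
trace = [
    {"miles": 12.0, "speed_mph": 60, "night": False},
    {"miles": 5.0, "speed_mph": 75, "night": True},
]

# ...only the privacy-free outcome is transmitted to the insurer.
send_to_insurer = {"premium": compute_premium(trace)}
print(send_to_insurer)
```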
So in your L4D2 example the data should, by default, never be stored for longer than it takes to display the heart rate in that particular moment. That could, of course, theoretically be captured by someone at that point and turned into a stream of information; but if possible, that should not, by default, be the design of the system.
But I do say only partly disagree, because for some functions you will have to store data, or even the user will want it stored. In that case I agree that a nice clear, contract/option toggle is the next best step.
btw, Steve, isn’t your example about Google+ actually highlighting a problem with the contract solution rather than with privacy by design? Google immediately offered you a contract (to track you or not), yet it was not clearly worded enough for you to know what it means. Admittedly it follows a privacy-by-design solution (of not tracking you automatically), but essentially isn’t it an example of what is wrong with offering a badly worded/presented contract?