Can Physiological Computing Create Smart Technology?

The phrase “smart technology” has been around for a long time. We have smartphones and smart televisions whose functional capability is massively enhanced by internet connectivity. We also talk about smart homes that scale up into smart cities. This hybrid of technology and the built environment promotes connectivity, but with an additional twist: smart spaces monitor activity within their confines for the purposes of intelligent adaptation, e.g. switching off lighting and heating when a space is uninhabited, or directing music from room to room as the inhabitant wanders through the house.

If smart technology is equated with enhanced connectivity and functionality, do those enhancements translate into an increase in machine intelligence? In his 2007 book ‘The Design of Future Things’, Donald Norman defined the ‘smartness’ of technology with respect to the way in which it interacts with the human user. Inspired by J.C.R. Licklider’s (1960) notion of man-computer symbiosis, he argued that smart technology is characterised by a harmonious partnership between person and machine. Hence, the ‘smartness’ of technology is defined by the way in which it responds to the user, and vice versa.

One prerequisite for a cooperative and compatible relationship between person and machine is to enhance the capacity of technology to monitor user behaviour. Like any good butler, the machine needs to increase its awareness and understanding of user behaviour and user needs. The knowledge gained via this process can then be deployed to create intelligent forms of software adaptation, i.e. machine-initiated responses that are both timely and intuitive from a human perspective. This upgraded form of human-computer interaction is attractive to technology providers and their customers, but is it realistic and achievable, and what practical obstacles must be overcome?
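To make the idea concrete, here is a minimal sketch of such a monitoring-and-adaptation loop in Python. It is purely illustrative: the read_heart_rate function, the baseline value, and the threshold are hypothetical stand-ins for a real physiological sensor and a calibrated user model, not a system described in this post.

```python
import time
import random

# Hypothetical sensor read; a real system would sample an ECG/PPG device.
def read_heart_rate():
    return random.gauss(75, 8)  # simulated beats per minute

BASELINE_BPM = 75       # assumed resting rate from a calibration phase
AROUSAL_THRESHOLD = 15  # assumed deviation (bpm) that triggers adaptation

def adaptation_loop():
    """Biocybernetic loop: monitor the user, adapt the software in response."""
    while True:
        bpm = read_heart_rate()
        deviation = bpm - BASELINE_BPM
        if deviation > AROUSAL_THRESHOLD:
            # Machine-initiated response: e.g. simplify the interface,
            # mute notifications, or lower task difficulty.
            print(f"{bpm:.0f} bpm: user aroused -> reduce demands")
        elif deviation < -AROUSAL_THRESHOLD:
            # e.g. increase challenge or re-engage the user.
            print(f"{bpm:.0f} bpm: user disengaged -> increase challenge")
        time.sleep(1)  # sample roughly once per second

if __name__ == "__main__":
    adaptation_loop()
```

The point of the sketch is the shape of the loop, not the numbers: the hard problems are choosing a valid psychological inference for the measured signal and an adaptation that feels timely and intuitive rather than intrusive.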


CFP – Special Session at ICMI 2011 “BCI and Multimodality”

The deadline for submissions to this special session has been extended to May 20th.

Anton Nijholt from the University of Twente and Rob Jacob from Tufts University are organizing a special session at ICMI 2011 on “BCI and Multimodality”. All ICMI sessions, including the special sessions, are plenary, so a special session during the ICMI conference is an opportunity to address a broad audience and make them aware of new developments and special topics.

Clearly, if we look at BCI for non-medical applications, a multimodal approach is natural. We can make use of knowledge about the user, the task, and the context. Part of this information is available in advance; part of it becomes available on-line, in addition to EEG- or fNIRS-measured brain activity. The intended user is not disabled: he or she can use other modalities to pass commands and preferences to the system, and the system may also have information obtained from monitoring the mental state of the user. Moreover, different BCI paradigms may be employed in parallel or sequentially in multimodal (or hybrid) BCI applications.
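As a toy illustration of what “multimodal” means here, the sketch below fuses a simulated EEG-derived workload estimate with a prior probability derived from task context to decide whether to issue an adaptive command. Every function and number in it is a hypothetical placeholder, not drawn from the session organisers’ work; it is a minimal sketch assuming a simple two-hypothesis Bayesian combination of the two evidence sources.

```python
import random

# Hypothetical prior from task context, known in advance:
# probability that the user is in a high-workload phase of the task.
CONTEXT_PRIOR_HIGH_LOAD = 0.3

def eeg_workload_likelihood():
    """Simulated on-line EEG feature mapped to P(observation | high load).

    A real hybrid BCI would derive this from measured EEG/fNIRS activity.
    """
    return random.uniform(0.1, 0.9)

def fused_high_load_probability(prior, likelihood):
    # Two-hypothesis Bayesian update, assuming the likelihood under
    # low load is the complement of the likelihood under high load:
    # P(high | obs) = P(obs | high) P(high) / P(obs)
    p_obs = likelihood * prior + (1 - likelihood) * (1 - prior)
    return likelihood * prior / p_obs

if __name__ == "__main__":
    for trial in range(5):
        lik = eeg_workload_likelihood()
        posterior = fused_high_load_probability(CONTEXT_PRIOR_HIGH_LOAD, lik)
        decision = "reduce demands" if posterior > 0.5 else "no adaptation"
        print(f"trial {trial}: likelihood={lik:.2f} "
              f"posterior={posterior:.2f} -> {decision}")
```

The design choice worth noticing is that neither source decides alone: context held in advance biases the interpretation of the on-line brain signal, which is exactly the kind of combination the session topic envisages.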