When I first heard the term ‘brain-to-brain interfaces’, my knee-jerk response was – don’t we already have those? Didn’t we use to call them people? But sarcasm aside, it was clear that a new variety of BCI technology had arrived, complete with its own corporate acronym ‘B2B.’
For those new to the topic, brain-to-brain interfaces represent an amalgamation of two existing technologies. Input comes from volitional changes in the EEG activity of the ‘sender’, as would be the case for any type of ‘active’ BCI. This signal is converted into an input for a robotised version of transcranial magnetic stimulation (TMS) placed at a strategic location on the head of the ‘receiver.’
TMS works by discharging an electrical current in brief pulses via a stimulating coil. These pulses create a magnetic field that induces an electrical current in the surface of the cortex, one strong enough to cause neuronal depolarisation. Because activity in the brain beneath the coil is directly modulated by this current, TMS is capable of inducing specific types of sensory phenomena or behaviour. You can find an introduction to TMS here (it’s an old pdf but freely available).
A couple of papers were published in PLOS One at the end of last year describing two distinct types of brain-to-brain interface between humans.
One system was created by Rajesh Rao and colleagues at the University of Washington in Seattle. This team had their sender engage in motor imagery to modulate the mu rhythm of the EEG over the sensorimotor cortex. The six participants in the study formed three sender–receiver pairs, who were required to ‘play’ a simple missile defence-style computer game that involved firing a cannon to halt an incoming rocket. In order to fire the cannon at the right time, the sender had to imagine moving his or her right hand; this act of motor imagery prompted a suppression of the mu rhythm, which activated the TMS coil placed over the head of the receiver. This participant had by far the easiest job: sitting in a separate room and staring at a blank wall until the TMS coil stimulated his motor cortex, resulting in an involuntary jerk of the right hand, which activated a touchpad that issued the command for the cannon to fire. The full paper, which appeared in PLOS One in November 2014, is available here.
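The control logic of the Seattle system – mu-band power dropping below a resting baseline and tripping the stimulation trigger – can be sketched in a few lines of Python. To be clear, this is an illustrative reconstruction rather than the team’s actual code: the sampling rate, band edges and suppression threshold below are all assumptions.

```python
import numpy as np

MU_BAND = (8.0, 12.0)   # assumed mu rhythm frequency band (Hz)
FS = 250                # assumed EEG sampling rate (Hz)

def mu_band_power(eeg_window: np.ndarray) -> float:
    """Estimate mu-band power of one EEG window via an FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    mask = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return spectrum[mask].mean()

def should_fire(eeg_window: np.ndarray, baseline_power: float,
                suppression_ratio: float = 0.5) -> bool:
    """Trigger the TMS coil when mu power falls below a fraction of the
    resting baseline – motor imagery suppresses the mu rhythm."""
    return mu_band_power(eeg_window) < suppression_ratio * baseline_power
```

In other words, the ‘message’ is nothing more than a one-bit threshold crossing: imagery on, mu power down, cannon fired.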
The second paper was published in PLOS One in August 2014 and represented the work of Giulio Ruffini and his colleagues at StarLab in Barcelona. This brain-to-brain interface also relied on changes in the EEG due to motor imagery on the part of the sender and a TMS coil placed on the head of the receiver. But it was also different from the system created in Seattle in a number of ways. First of all, EEG data from the sender was recorded in India and used to activate a TMS coil over the receiver, who was situated in France. The sender was also trained to generate two types of motor imagery, one associated with movement of the hands and a second that corresponded with foot movement. The distinct EEG signatures of hand and foot imagery were used to create a binary communication system of 1 and 0 respectively.
Things get more interesting with the TMS stimulation meted out to the receiver in France. This time, the coil was placed over the visual cortex in order to create a phosphene that could be ‘seen’ by the receiver. Phosphenes are the experience of seeing light when no light has actually entered the eye. If you press your knuckles gently against your closed eyes, you’ll see spots of light dancing around your field of vision – these are called pressure phosphenes. Stimulating the visual cortex via TMS creates the same kind of sensory experience. In this case, when the blindfolded receiver had an impression of spots dancing in his or her visual field, they knew that a 1 had been transmitted by the sender.
The clever thing about the system created by Ruffini and his colleagues is that the sender can actively transmit two distinct bits of information, a 1 or a 0 – and the authors put that capacity to good use by training the sender to transmit short streams of ones and zeros in order to compose a word, just like Morse code. So, one of the two pairs of participants who used the system was able to transmit the words “hola” (hello) and “ciao” (goodbye) as a 5-bit cypher. The full PLOS One paper is available here.
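As a toy illustration of how a scheme like this works, each letter of a word can be mapped to five binary digits, with hand imagery standing for 1 and foot imagery for 0. The simple a=0 … z=25 mapping below is my own choice for illustration; the actual cypher used in the paper may well differ.

```python
def encode_word(word: str) -> str:
    """Map each letter to 5 bits (a=00000, b=00001, ... z=11001);
    a 1 is sent via hand imagery, a 0 via foot imagery."""
    return "".join(format(ord(c) - ord("a"), "05b") for c in word.lower())

def decode_bits(bits: str) -> str:
    """Recover the word from the received stream of phosphene (1) /
    no-phosphene (0) readings, five bits per letter."""
    return "".join(chr(int(bits[i:i + 5], 2) + ord("a"))
                   for i in range(0, len(bits), 5))
```

Under a scheme like this, a four-letter word such as “hola” becomes a 20-bit stream – which gives a sense of just how laborious transmitting even a short greeting brain-to-brain really is.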
After both papers were published, there was a minor spat that played out on the IEEE Spectrum website, where the Ruffini system was dismissed as a ‘stunt’ by one researcher in the UK amidst a degree of disagreement between the two teams: Rao was irked that an online paper published in July 2013 was not cited by Ruffini et al, whilst the latter claimed that Rao’s system was not really brain-to-brain because the receiver was not consciously aware of the signal transmitted by the sender.
There are elements of truth on both sides of this forceful exchange of opinion. The decision by Ruffini and colleagues to locate sender and receiver on different continents was superfluous as far as experimental control was concerned and has the appearance of showmanship. But dismissal of their work as a ‘stunt’ is a criticism too far – the system created by Ruffini et al provided a covert channel of communication, and their binary code is capable of creating complex messages, as they demonstrated. On the other hand, Ruffini’s claim that the system created by Rao et al was not really brain-to-brain is preposterous. The receiver in the Rao et al study was consciously aware of his hand jerk shortly after the TMS activated. In addition, Ruffini’s receiver had no real conscious choice in whether or not he saw the phosphenes; the reception of the message by the receiver lacks any conscious intention in both cases.
But there is something rudimentary about the system created by Rao et al, where motor imagery is used to push a switch that forces the hand of the receiver to twitch without any conscious intention. In some ways, it is reminiscent of a performance by the artist Stelarc, who allowed strangers to press buttons that automatically activated electrical probes inserted into his muscles, creating spasmodic movement in his hands and arms. In Stelarc’s case, he was making a point about subverting human agency, and brain-to-brain interfaces have the same disruptive potential.
On the positive side, Rao et al created a system where successful communication between sender and receiver was achieved under real-time pressure, because the signal to fire the cannon had to be coordinated with the appearance of a rocket. This is significant because ‘live’ communication is often time-limited, although in this case the pressure to communicate in good time falls squarely on the shoulders of the sender.
So, what kind of applications are we talking about for brain-to-brain interfaces? Rao et al mention the direct communication of fine motor control from expert to novice. Like the Stelarc piece, this application resembles a piece of human puppetry where a piano teacher, for instance, might show her pupil how to play a certain chord. The other obvious application is communication by stealth, a point underpinned by the fact that the US Army funded the research reported by Rao and colleagues. Because the StarLab system has the possibility of conveying more complex messages, its applications centre on the possibility of sending emails or tweets directly from one brain to another. If you don’t react to that scenario with alarm, just think about how you would block people or operate a spam filter for that particular application.
It is easy to understand why brain-to-brain interfaces are capable of grabbing the headlines. We are all familiar with the concept of telepathy, and both systems represent a technological variant of telepathic communication. This is a genuinely exciting idea; as Arthur C Clarke said, “any sufficiently advanced technology is indistinguishable from magic”, and there is something magical about this application. But we should keep grounded and remember that these systems represent two pieces of existing tech wedded together. That is not a problem in itself – as mash-ups go, it’s a creative combination. But the number of usable applications in the short term may be limited with this kind of amalgamation.
One alternative may be to explore the same scenario using covert measures of emotional state as an alternative input from the sender. For example, imagine seeing an emotional state from another person represented as a phosphene, perhaps even different colours for different moods. Or something less whimsical, such as aircrew receiving a phosphene flash to covertly provide feedback that the pilot is overloaded.
There are other possibilities: we know that certain patterns of brain activation are associated with motivation and emotion – and there are examples of research where TMS is used to create and suppress those states (see this paper as an example). Can we envisage a pairing of these technologies where EEG data from a non-depressed sender is transmitted to a TMS coil above the head of a depressed receiver as a therapeutic intervention? Or using a signal from a happy and healthy sender as an input for a brain stimulation chip implanted into the cortex of a person with a clinical condition? (DARPA are currently looking at brain stimulation to combat mental illness in the military – see press release here)
In the meantime, it’s safe to assume that the race for brain-to-brain interfaces capable of two-way communication is already underway. And that means, for better or worse, more magic.