Neurofeedback and the Attentive Brain

The act of paying attention or sustaining concentration is a good example of everyday cognition. We all know the difference between an attentive state of mind, when we are utterly focused and seem to absorb every ‘bit’ of information, and the diffuse experience of mind-wandering, where consciousness flits from one random topic to the next. Understanding this distinction is easy, but regulating the focus of attention can be a real challenge, especially if you didn’t get enough sleep or you’re not particularly interested in the task at hand. Ironically, if you are totally immersed in a task, attention is absorbed to the extent that you don’t notice your clarity of focus. At the other extreme, if you begin to day-dream, you are very unlikely to register any awareness of your inattentive state.

The capacity to self-regulate attentional focus is an important skill for many people, from executives who sit in long meetings where important decisions are made, to air traffic controllers, pilots, truck drivers and other professionals for whom the ability to concentrate has real consequences for their own safety and the safety of others.

Technology can play a role in developing the capacity to regulate attentional focus. The original biocybernetic loop developed at NASA was an example of how to incorporate a neurofeedback mechanism into the cockpit in order to ensure a level of awareness that was conducive to safe performance. There are two components within this type of system: real-time analysis of brain activity as a proxy of attention, and translation of these data into ‘live’ feedback to the user. The availability of explicit, real-time feedback on attentional state acts as an error signal to indicate the loss of concentration.
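To make that loop concrete, here is a minimal sketch of the two components in Python. Everything in it is illustrative: the simulated EEG read, the attention_index heuristic and the threshold are stand-ins for whatever a real system would use, not NASA’s actual implementation.

```python
import random
import time

def read_eeg_window(n_samples=256):
    """Stand-in for a device read; returns simulated EEG samples."""
    return [random.gauss(0.0, 1.0) for _ in range(n_samples)]

def attention_index(window):
    """Hypothetical attention proxy in [0, 1]; a real system might use a
    band-power ratio such as beta / (alpha + theta)."""
    variance = sum(x * x for x in window) / len(window)
    return min(1.0, 1.0 / (1.0 + variance))  # toy mapping, illustration only

def run_loop(threshold=0.5, interval_s=1.0, n_iterations=10):
    for _ in range(n_iterations):
        score = attention_index(read_eeg_window())  # component 1: analysis
        print(f"attention index: {score:.2f}")      # component 2: feedback
        if score < threshold:
            print("-> error signal: attention below threshold")
        time.sleep(interval_s)

if __name__ == "__main__":
    run_loop()
```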

This article will tell a tale of two cultures: an academic paper that updates biocybernetic control of attention via real-time fMRI, and a Kickstarter project where the loop is encapsulated within a wearable device.

Narbis is a Kickstarter project that consists of a pair of spectacles and an EEG sensor. The EEG electrodes are designed to be attached to the frontal or fronto-central area along the midline. These data are analysed via proprietary software and used to control the opacity of the glasses’ lenses. If the person loses their attentional focus, the glasses go dark, which serves as an error signal. For more info, see this neurogadget article or their Facebook page. This system is portable and will probably be affordable for most users. It is important to note that if you were focusing on a visual task, such as proofreading text, the error signal (the glasses becoming opaque) is designed to interfere with your ability to complete the task at hand. This mechanic is important because keeping the glasses transparent functions as a reward to motivate the development of skill, while the darkening of the glasses is an aversive experience. Therefore, like the original biocybernetic loop from NASA, the system has two functions: to mirror the current attentional state and to actively promote a specific pattern of brain activity.
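Narbis’s actual algorithm is proprietary, so the sketch below is only a hypothetical illustration of the general mechanic described above: an attention score drives lens opacity, with clear lenses as the reward state and darkened lenses as the error signal.

```python
def lens_opacity(attention_score):
    """Map an attention score in [0, 1] to a lens opacity in [0, 1],
    where 0.0 is fully transparent (reward) and 1.0 is fully dark
    (the aversive error signal)."""
    score = max(0.0, min(1.0, attention_score))
    return 1.0 - score

class SmoothedOpacity:
    """Exponential smoothing so noisy EEG doesn't make the lenses flicker."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha  # smoothing constant; hypothetical value
        self.value = 0.0    # current opacity

    def update(self, attention_score):
        target = lens_opacity(attention_score)
        self.value += self.alpha * (target - self.value)
        return self.value
```

Under this scheme, a sustained drop in the attention score drives the smoothed opacity upwards until the wearer can no longer see the text in front of them.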

The second example of a closed loop for attention training comes from this paper by deBettencourt and her colleagues, published in Nature Neuroscience in March this year. deBettencourt et al used real-time fMRI in conjunction with a selective attention task. In short, participants viewed a composite image in which two types of visual stimuli (faces and scenes) were superimposed over one another. The authors had trained a multivariate pattern analysis (MVPA) classifier to distinguish patterns of brain activity during scene-viewing vs. face-viewing. This classifier received real-time data from a 3T MRI scanner and decoded the category of image that the participant was currently attending to.
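For readers unfamiliar with MVPA, the toy example below trains a classifier on simulated ‘voxel’ patterns and then decodes a new volume, mirroring the train-then-decode structure described above. The data, dimensions and classifier choice (logistic regression) are purely illustrative and do not reproduce the authors’ pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500

# Simulated training data: each row is one fMRI volume flattened to voxels,
# with a small category-specific signal added along a fixed voxel map.
labels = rng.integers(0, 2, n_trials)        # 0 = face trial, 1 = scene trial
category_map = rng.normal(0, 1, n_voxels)    # voxels carrying category info
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns += np.where(labels[:, None] == 1, 0.3, -0.3) * category_map

clf = LogisticRegression(max_iter=1000).fit(patterns, labels)

# Real-time step: each incoming volume yields P(scene | pattern), i.e.
# evidence for the category the participant is currently attending to.
new_volume = rng.normal(0, 1, (1, n_voxels)) + 0.3 * category_map
print(f"decoded P(scene) = {clf.predict_proba(new_volume)[0, 1]:.2f}")
```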

If the participant was instructed to attend to scenes, he or she had to press a button upon seeing an indoor scene as opposed to an outdoor scene. This task was complicated by an image of a face superimposed over the target image – but if the classifier detected a pattern of fMRI activity that was consistent with scene-viewing (as opposed to face-viewing), the image of the scene was strengthened at the expense of the image of the face during the next presentation. This adjustment of the composite image obviously favoured improved performance for the correct identification of indoor scenes, and if “scene-viewing brain activation” was sustained, the adjustment was repeated for the next stimulus, and so on.
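The update rule below is a hedged sketch of that stimulus adjustment: decoded evidence for the attended category nudges the scene/face mixing proportion of the next composite image. The gain and bounds are invented for illustration; the paper used its own transfer function.

```python
def next_scene_proportion(current, p_scene, gain=0.2, lower=0.1, upper=0.9):
    """Shift the scene/face mixture toward scenes when the classifier
    reports scene-consistent activity (p_scene > 0.5), and toward faces
    when attention appears to have drifted."""
    evidence = p_scene - 0.5  # signed attentional evidence
    return max(lower, min(upper, current + gain * evidence))

def composite(scene_img, face_img, scene_prop):
    """Alpha-blend the two images; expects arrays of equal shape."""
    return scene_prop * scene_img + (1.0 - scene_prop) * face_img
```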

There are many aspects of this paper that are worthy of discussion. The interested reader is referred to the original article or this commentary in the same journal; there is also another commentary available here in Trends in Cognitive Sciences. For the purposes of the current article, I’ll focus on two aspects: the assessment of performance benefits and the predictive validity of their measure.

deBettencourt and her colleagues conducted their study as an A-B-A design – they ran a pre-test of performance (correct identification of indoor scenes from outdoor scenes), then subjected participants to closed-loop training with real-time fMRI, and finally ran a post-test of performance. Therefore, exposure to closed-loop training via real-time fMRI served as an intervention. The authors reported a significant increase of sensitivity (identification of target vs. non-target) from pre-test to post-test when participants were trained using real-time feedback from fMRI. They also included a no-feedback group who were subjected to the same regime but received only stable blocks during the training phase – their performance did not significantly improve as a result. A second control group received neurofeedback derived from another participant (sometimes called a yoked control) – once again, the increase of sensitivity from pre- to post-test was non-significant. Finally, the authors included a third control group where reaction time (RT) was used as a proxy measure of attention instead of fMRI; in this case, slower RT was used to increase the salience of the target image, as faster RTs were associated with error. Once again, the increase of performance from pre- to post-test did not reach statistical significance. These comparisons are important because they demonstrated that the improvement of performance was dependent on real-time feedback derived from each individual’s own brain activation.
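Sensitivity in a target/non-target task like this is typically quantified with an index such as d-prime (the paper’s exact metric may differ). As a worked example, the sketch below computes the pre-to-post change from hit and false-alarm rates; the numbers are made up, not taken from the paper.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate, eps=1e-3):
    """d' = z(hit rate) - z(false-alarm rate), with rates clipped away
    from 0 and 1 so the z-transform stays finite."""
    z = NormalDist().inv_cdf
    h = min(max(hit_rate, eps), 1 - eps)
    fa = min(max(false_alarm_rate, eps), 1 - eps)
    return z(h) - z(fa)

# Illustrative (invented) numbers for one participant:
pre = d_prime(hit_rate=0.75, false_alarm_rate=0.30)
post = d_prime(hit_rate=0.85, false_alarm_rate=0.20)
print(f"pre d' = {pre:.2f}, post d' = {post:.2f}, change = {post - pre:.2f}")
```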

During the no-feedback condition, when participants received stable blocks (i.e. the ratio of faces to scenes was constant throughout), the authors used their classifier to predict performance. They found that classifier output had a significant positive correlation with performance accuracy. In other words, their measure of brain activity was highly predictive of actual performance, and therefore it is safe to assume that the improvements in performance described in the last paragraph resulted directly from modulation of that measure of brain activity.
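In practice, that predictive-validity check amounts to correlating classifier output with behavioural accuracy, e.g. across participants or blocks. A minimal sketch with simulated stand-in data (not values from the paper):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_participants = 30

# Simulated per-participant classifier evidence and task accuracy.
classifier_evidence = rng.normal(0.6, 0.1, n_participants)
accuracy = 0.5 + 0.4 * classifier_evidence + rng.normal(0, 0.03, n_participants)

r, p = pearsonr(classifier_evidence, accuracy)
print(f"r = {r:.2f}, p = {p:.4f}")
```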

It may seem unfair to compare a substantial experiment from a neuroscience lab to a Kickstarter project. By definition, the latter does not have the same facilities or available expertise. However, by the same token, deBettencourt et al are not in the business of system development; one purpose of their paper is to demonstrate a proof of concept. But note the emphasis of the deBettencourt et al paper on: (1) demonstration of performance benefits, and (2) predictive validity of their measure of brain activity.

There are lessons to be learned here for the academic and developer communities alike. For the former, the paper provides an excellent blueprint for how to evaluate a closed-loop system, and it emphasises the importance of choosing a measure of brain activity that is predictive of task performance.

Moving on to developers – let us leave aside the sticky issue of signal quality for commercial EEG technology and focus on the proprietary algorithms encoded within the software for these systems, which leaves us with at least two big questions:

(1) Is there any available evidence that proprietary algorithms are predictive of any aspect of cognition, emotion or motivation?

(2) Has anyone used proprietary algorithms in a closed-loop context to produce a demonstrable change in human behaviour that can be solely attributed to the brain activity indexed by that algorithm? This is a real challenge because of (1).

In my opinion, developers ought to ask themselves why the burden of proof should lie solely with academics, like deBettencourt and her colleagues, when academics are not the ones asking customers to part with hard cash for a product.
