Brain-computer interface (BCI) technology sounds like something straight out of science fiction: electrodes pick up the brain's electrical activity, and a computer translates it into actionable commands. Controlling a robotic arm with your mind or materializing your thoughts into full sentences on a computer screen without opening your mouth might sound futuristic. However, BCI clinical studies have been underway for several decades, and wide adoption is not as far off as one might think.
Now enabled by artificial intelligence (AI) and machine learning (ML), BCI is becoming more efficient and accurate and is poised for mass-market adoption in the near future. According to Globe Newswire, the BCI market is predicted to reach a valuation of US$5.48 billion by 2030, growing at a compound annual growth rate (CAGR) of 14.72%.
With BCI headed toward the mass market, we want to share with CIOs and other leaders how BCI can open new avenues of innovation for every organization, whether in the healthcare sector or beyond. We'll explain how this technology works, and we'll explore clinical and non-clinical use cases, security considerations, and future directions.
Simply put, brains are full of neurons that use electrical and chemical signals to communicate with one another to coordinate thoughts, feelings, and actions. The brain fires in a predictable pattern every time you smile, make a fist, or think about writing different letters of the alphabet.
There are three types of BCI: noninvasive, semi-invasive, and invasive. All three types analyze signals from the cerebral cortex, which, according to Johns Hopkins Medicine, “initiates and coordinates movement” and enables “speech, judgement, thinking and reasoning, problem-solving, emotions and learning.”
Metal electrodes (typically silver) in a BCI setup act as conductors, carrying these electrical signals through attached wires to a computer (or other output device), where they are translated into commands. AI and ML have shortened the time required to analyze and decode these signals and have improved decoding accuracy.
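To make this concrete, here is a minimal, hypothetical sketch of the kind of decoding pipeline ML enables: windowed multi-channel EEG is reduced to band-power features, and a simple classifier maps each window to a command. The channel count, window length, frequency band, and choice of a linear discriminant classifier are illustrative assumptions, not details of any particular BCI system.

```python
# Hypothetical sketch of an ML-based BCI decoding pipeline:
# windowed multi-channel EEG -> band-power features -> classifier -> command.
# Channel count, window length, and classifier choice are illustrative only.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling rate in Hz (assumed)

def band_power_features(window, fs=FS, band=(8, 30)):
    """Average power in the 8-30 Hz band (mu/beta rhythms) per channel."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)  # one feature per channel

# Toy training data: 200 labeled 1-second windows of 8-channel EEG.
rng = np.random.default_rng(0)
windows = rng.standard_normal((200, 8, FS))
labels = rng.integers(0, 2, size=200)  # e.g., 0 = "rest", 1 = "close hand"

X = np.array([band_power_features(w) for w in windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)

# At run time, each new window is decoded into a command.
new_window = rng.standard_normal((8, FS))
command = ["rest", "close hand"][clf.predict([band_power_features(new_window)])[0]]
print(command)
```

In practice, the labeled windows would typically come from a calibration session in which the user repeatedly imagines each movement.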
Electroencephalography (EEG) involves placing electrodes on the scalp, making it noninvasive. According to the Neurofeedback Alliance, EEG mostly picks up “excitatory postsynaptic potentials [EPSP] from apical dendrites of massively synchronized neocortical pyramidal cells.” An EPSP is a small voltage change that makes a postsynaptic neuron more likely to fire an action potential. Although EEG is the least risky of the three types of BCI, it is also the weakest at detecting signals because it has to pick them up through the meninges (the protective covering directly on the brain), the skull, and the scalp.
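Because the signal that survives this journey is weak and noisy, a typical first processing step is band-pass filtering to isolate the rhythms of interest. The sketch below shows one common way to do that; the sampling rate and pass band are assumptions for illustration.

```python
# Minimal sketch (assumed parameters): scalp EEG is attenuated by the meninges,
# skull, and scalp, so a common first step is band-pass filtering to isolate
# the rhythms of interest and suppress slow drift and high-frequency noise.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250           # sampling rate in Hz (assumed)
LOW, HIGH = 1, 40  # pass band in Hz (illustrative)

b, a = butter(4, [LOW, HIGH], btype="bandpass", fs=FS)

raw_eeg = np.random.default_rng(1).standard_normal((8, 10 * FS))  # 8 channels, 10 s
clean_eeg = filtfilt(b, a, raw_eeg, axis=-1)  # zero-phase filtering per channel
```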
Electrocorticography (ECoG) operates very similarly to EEG, but the electrodes are placed directly on the cerebral cortex rather than on the scalp. While the signals (synchronized postsynaptic potentials) come through more clearly with ECoG than with EEG, any surgery that involves opening the skull (a craniotomy) leaves patients susceptible to brain damage and infection, among other risks.
Intracortical microelectrodes are the most invasive of the three types of BCI. These microelectrodes are inserted directly into the cerebral cortex, where they measure intraparenchymal signals (signals from within the brain tissue itself). Because of their direct connection to the gray matter of the cerebral cortex, intracortical microelectrodes yield the strongest signals.
Like ECoG, intracortical microelectrodes carry the same risks as any craniotomy. However, they also come with the added risks of triggering the body's foreign-body response and of scar tissue building up in the brain around the implanted device.
So far, the majority of BCI research has been conducted for healthcare use cases to improve patients’ quality of life.
BCI can be life-changing for individuals who have lost mobility and/or speech, whether due to neurodegenerative disorders like Amyotrophic Lateral Sclerosis (ALS) or Parkinson’s, spinal cord injuries, amputations, or strokes.
In 2019, researchers from Carnegie Mellon University, working in collaboration with the University of Minnesota, made a breakthrough in robotic device control: an individual fitted with EEG electrodes was able to control a robotic arm with their mind. Carnegie Mellon reports: “Using a noninvasive brain-computer interface (BCI), researchers have developed the first-ever successful mind-controlled robotic arm exhibiting the ability to continuously track and follow a computer cursor.” This could revolutionize prosthetic limbs and restore individuals' mobility and independence.
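The continuous tracking described here is usually framed as a regression problem rather than a classification problem: decoded features are mapped to a velocity that steers the arm or cursor at every time step. The sketch below illustrates that idea with a simple ridge-regression decoder on made-up data; it is not the decoder the Carnegie Mellon team actually used.

```python
# Hypothetical sketch of continuous control: a regression model maps EEG
# features to a 2D velocity that steers a robotic arm or cursor. The feature
# dimensions and the ridge-regression model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
features = rng.standard_normal((1000, 16))   # e.g., band power per channel, per window
velocities = rng.standard_normal((1000, 2))  # recorded (x, y) velocity targets

decoder = Ridge(alpha=1.0).fit(features, velocities)

# Each new feature window produces an incremental movement command.
position = np.zeros(2)
for step in range(5):
    new_features = rng.standard_normal((1, 16))
    velocity = decoder.predict(new_features)[0]
    position += 0.1 * velocity               # integrate velocity over the time step
    print(f"step {step}: position {position}")
```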
In a 2022 study by Serruya and colleagues, the researchers were able to help restore gross motor skills in a patient who had experienced a stroke. They successfully trained the patient to restore movement in their hand using microelectrode arrays implanted in the precentral gyrus (where the primary motor cortex is located), commands on a computer screen, and a motorized brace worn on the hand. Serruya et al. write: “The participant’s ability to rapidly acquire control over otherwise paralyzed hand opening, more than 18 months after a stroke, may justify development of a fully implanted movement restoration system to expand the utility of fully implantable BCI to a clinical population that numbers in the tens of millions worldwide.”
BCI can also change the lives of individuals who can no longer vocalize their thoughts and feelings due to illness or injury by enabling them to communicate.
In a 2021 study, Stanford University researchers used BCI to enable a man with paralysis to materialize his attempted handwriting as letters on a computer screen. ML also helped during the study: a general-purpose autocorrect pushed the accuracy of the decoded text even higher.
Willett et al. say of their results: “Here we developed an intracortical BCI that decodes attempted handwriting movements from neural activity in the motor cortex and translates it to text in real time, using a recurrent neural network decoding approach. With this BCI, our study participant, whose hand was paralysed from spinal cord injury, achieved typing speeds of 90 characters per minute with 94.1% raw accuracy online, and greater than 99% accuracy offline with a general-purpose autocorrect.”
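The study's recurrent-network approach can be pictured as a model that reads a time series of neural features and emits a character probability at each time step. The toy sketch below shows that shape of computation; the feature size, hidden size, character set, and two-layer GRU are illustrative assumptions, not the published architecture.

```python
# Toy sketch of a recurrent decoder in the spirit of a handwriting BCI:
# a GRU maps a time series of neural features to per-time-step character
# logits. Sizes and the 31-class output (26 letters plus a few punctuation
# and space symbols) are assumptions for illustration.
import torch
import torch.nn as nn

class HandwritingDecoder(nn.Module):
    def __init__(self, n_features=192, hidden=256, n_chars=31):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_chars)

    def forward(self, neural_activity):
        # neural_activity: (batch, time_steps, n_features)
        hidden_states, _ = self.rnn(neural_activity)
        return self.head(hidden_states)  # (batch, time_steps, n_chars) logits

decoder = HandwritingDecoder()
fake_activity = torch.randn(1, 500, 192)      # 500 time bins of neural features
char_logits = decoder(fake_activity)
predicted_chars = char_logits.argmax(dim=-1)  # most likely character per time bin
print(predicted_chars.shape)                  # torch.Size([1, 500])
```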
There are also BCI studies that focus on attempts to verbalize speech rather than on handwritten communication. ZDNet describes a study that the University of California San Francisco (UCSF) conducted with a patient who had experienced a stroke: “When the user tried to say a sentence word by word, the language model would predict how likely it was that he would be trying to say each of the 50 words and how those words were likely to be combined in a sentence — for example, understanding that, 'how are you?' is a more likely sentence than 'how are good?', even though both use fairly similar speech muscles — to give the final output of real-time speech. The system was able to decode the participant's intended speech at [a] rate of up to 18 words per minute and with up to 93% accuracy.” At 18 words per minute, the system is significantly slower than natural speech and has room for improvement, but research is ongoing, and the results are still life-changing for the patient.
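The core idea of combining neural evidence with a language model can be illustrated in a few lines: each candidate word gets a score from the neural decoder plus a score for how plausibly it follows the previous word, and the system picks the best combination. The vocabulary, probabilities, and bigram model below are made up for illustration; the UCSF system's actual models are far richer.

```python
# Minimal sketch: combine per-word neural evidence with a language model's
# estimate of which word plausibly follows the previous one. All numbers and
# the tiny vocabulary are hypothetical.
import math

VOCAB = ["how", "are", "you", "good"]

# log P(word | neural signal) for one attempted word, from a (hypothetical) neural decoder
neural_log_probs = {"how": -2.2, "are": -1.0, "good": -1.1, "you": -2.5}

# log P(word | previous word) from a (hypothetical) bigram language model
bigram_log_probs = {
    ("how", "are"): -0.5,
    ("are", "you"): -0.4,
    ("are", "good"): -3.0,
    ("you", "good"): -2.8,
}

def best_next_word(prev_word, neural_log_probs, lm_weight=1.0):
    """Pick the word that maximizes neural evidence plus the language-model prior."""
    def score(word):
        lm = bigram_log_probs.get((prev_word, word), math.log(1e-6))
        return neural_log_probs.get(word, math.log(1e-6)) + lm_weight * lm
    return max(VOCAB, key=score)

# After decoding "how are", the language model pushes the final word toward
# "you" over "good", even though the neural evidence for the two is similar.
print(best_next_word("are", neural_log_probs))  # "you"
```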
Like any other computer-based technology, BCI faces security concerns, and BCI systems are especially susceptible to interference from cyberattacks. A 2020 study by Zhang et al. showed that “P300 and steady-state visual evoked potential BCI spellers are very vulnerable, i.e., they can be severely attacked by adversarial perturbations, which are too tiny to be noticed when added to EEG signals, but can mislead the spellers to spell anything the attacker wants.”
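To see how such an attack works mechanically, consider a fast-gradient-sign-style perturbation, one common way to craft adversarial examples (not necessarily the exact method Zhang et al. used): the attacker nudges each sample of the EEG window slightly in the direction that most increases the classifier's error. The stand-in model below is untrained, so a flipped prediction is not guaranteed, but the mechanics are the same.

```python
# Illustrative FGSM-style adversarial perturbation on a stand-in EEG
# classifier (hypothetical 8 channels x 250 samples -> 2 classes).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 250, 2))
loss_fn = nn.CrossEntropyLoss()

eeg_window = torch.randn(1, 8, 250, requires_grad=True)
true_label = torch.tensor([0])

# Gradient of the loss with respect to the input signal.
loss = loss_fn(model(eeg_window), true_label)
loss.backward()

# Tiny perturbation in the direction that increases the loss.
epsilon = 0.01
adversarial_window = eeg_window + epsilon * eeg_window.grad.sign()

print("clean prediction:      ", model(eeg_window).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial_window).argmax(dim=1).item())
```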
Before BCI can be widely accepted for clinical and non-clinical use cases, experts need to proactively address these security concerns. A 2021 article by Ajrawi et al. proposes adding semi-active Radio Frequency Identification (RFID) tags “outside the brain on the scalp” to “transmit the collected brain activities wirelessly” to a scanner controller (SC) device consisting of an integrated mini-reader and timer.
When considering what the human mind could materialize with thoughts, the possibilities for BCI reach far beyond current clinical use cases.
BCI can not only enable people who have lost mobility to game; the technology can also be used to create fully immersive gaming experiences.
The Future of Privacy Forum speaks to BCI’s potential with gaming in the near future: “BCIs could augment existing gaming platforms and offer players new ways to play using devices that record and interpret their neural signals. Current examples of BCI gaming combine neurotechnology with existing gaming devices or platforms. These devices attempt to record the user’s electrical impulses, collecting and interpreting the player’s brain signals during play.” In the more distant future, “BCI games may offer greater immersion by combining neurodata with other biometric and psychological information, which could allow players to control in-game actions using their conscious thoughts.”
A bidirectional BCI setup could also make gaming far more immersive than an AR/VR headset alone. Such a setup could not only receive a gamer's neural activity and enable the game to respond accordingly, but it could also work in reverse, stimulating the gamer's neurons to amplify responses to in-game stimuli and heighten emotional responses like fear or intrigue.
A Harvard Business Review article mentions that “BCIs can now be used as a neurofeedback training tool to improve cognitive performance,” with the author expecting to see “a growing number of professionals leveraging BCI tools to improve their performance at work.”
The HBR article also provides current use cases and future directions for BCI in the workplace: “Some BCI companies have already used EEG to analyze signals of drowsy driving. Companies with workers who operate dangerous machinery may require their workers to be monitored in the same ways. I believe that someday, it will be mandatory for pilots and surgeons to wear a BCI while working.”
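One simple, commonly cited heuristic behind EEG drowsiness monitoring is that slower theta and alpha rhythms grow relative to faster beta activity as alertness drops. The sketch below turns that ratio into an alert; the bands, threshold, and sampling rate are illustrative assumptions rather than any vendor's actual method.

```python
# Hypothetical sketch of EEG-based drowsiness monitoring using the
# (theta + alpha) / beta power ratio. All parameters are illustrative.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(psd, freqs, low, high):
    mask = (freqs >= low) & (freqs < high)
    return psd[..., mask].mean(axis=-1)

def drowsiness_index(eeg_window, fs=FS):
    """(theta + alpha) / beta power, averaged across channels."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs)
    theta = band_power(psd, freqs, 4, 8)
    alpha = band_power(psd, freqs, 8, 13)
    beta = band_power(psd, freqs, 13, 30)
    return float(((theta + alpha) / beta).mean())

window = np.random.default_rng(3).standard_normal((8, 2 * FS))  # 8 channels, 2 s
THRESHOLD = 3.0  # illustrative alert threshold
if drowsiness_index(window) > THRESHOLD:
    print("Drowsiness alert: suggest a break")
```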
In addition to contexts where lapses in attention could cost lives, BCI has potential to improve performance in other ways. For example, BCI could be used as a neurofeedback tool giving a presenter real-time feedback about the audience’s engagement and comprehension of the content, enabling the presenter to adapt on the fly and maximize their message’s reach. BCI could also prove invaluable in user experience and product testing scenarios, where researchers could analyze participants’ thoughts, sensations, and emotions rather than relying solely on participants vocalizing their thoughts.
While technologies like automation are already making headway in improving worker productivity, BCI offers a largely untapped set of potential use cases for boosting productivity and innovation in the workplace.
Innovation leaders from every industry — not just healthcare — need to be aware of brain-computer interface technology and think of how their organizations might use it. Not only will it change the way certain conditions are treated in a clinical setting, but it will also revolutionize the way we work and play. With all of the recent advancements made in BCI studies, especially thanks to AI and ML, the BCI revolution is coming soon — whether we’re ready for it or not.
Coming from a background in conducting original ethnographic research, Mary-Kate brings a humanities lens to the technology she writes about. She’s passionate about using her background in primary and secondary research to bring innovative solutions to clients in both the digital experience and automation spaces. Outside of work, Mary-Kate enjoys both traveling and hiking.