Brain-computer interface could unlock minds silenced by stroke and ALS


Not even Stephen Hawking used the kind of sci-fi communication interface that University of Kansas neuroscientist Jonathan Brumberg is developing. Hawking used a cheek muscle to control his voice device. But Brumberg wants to give individuals with no voluntary movement at all the ability to control a communication device through a brain-computer interface (BCI), using their thoughts alone. And what would set this apart from other speech BCIs is that it would allow an individual to speak through a speech synthesizer in real time.

BCIs are increasingly being used in scientific research and therapeutic interventions for individuals with ALS, like Hawking, as well as for those with locked-in syndrome due to brain stem stroke or other conditions, said Brumberg, assistant professor of speech-language-hearing.

Brumberg is testing a prototype BCI that decodes an individual's brain waves, recorded with a non-invasive 60-channel electroencephalography (EEG) cap, into sound frequencies that control a vowel synthesizer with instantaneous auditory and visual feedback.
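The article does not describe the decoding algorithm itself, so the sketch below is purely illustrative: it assumes the "sound frequencies" are the first two vowel formants (F1 and F2), maps simulated 60-channel EEG band power to them with an arbitrary linear decoder, and renders the result with a simple source-filter vowel synthesizer. Every function name, weight, frequency range, and synthesis choice here is an assumption; only the 60-channel EEG figure comes from the article.

```python
"""Illustrative sketch of a formant-based vowel BCI pipeline (not Brumberg's method):
EEG band power -> (F1, F2) formant frequencies -> simple vowel synthesizer."""
import numpy as np
from scipy.signal import lfilter

FS = 16_000          # audio sample rate (Hz), assumed
N_CHANNELS = 60      # EEG channel count, as described in the article


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def eeg_to_formants(band_power, w1, w2):
    """Map per-channel EEG band power to (F1, F2) in Hz via an assumed linear decoder."""
    f1 = 300 + 500 * sigmoid(band_power @ w1)    # F1 kept roughly in 300-800 Hz
    f2 = 900 + 1500 * sigmoid(band_power @ w2)   # F2 kept roughly in 900-2400 Hz
    return f1, f2


def resonator(signal, freq, bandwidth, fs=FS):
    """Second-order IIR resonator used as a crude formant filter."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    b = [1.0 - r]
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    return lfilter(b, a, signal)


def synthesize_vowel(f1, f2, duration=0.2, pitch=120):
    """Source-filter synthesis: a glottal pulse train shaped by two formant resonators."""
    n = int(duration * FS)
    source = np.zeros(n)
    source[::FS // pitch] = 1.0                  # impulse train at the pitch period
    out = resonator(resonator(source, f1, 80), f2, 120)
    return out / (np.abs(out).max() + 1e-9)      # normalize for playback


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band_power = rng.normal(size=N_CHANNELS)     # stand-in for real EEG features
    w1, w2 = rng.normal(size=(2, N_CHANNELS)) * 0.1
    f1, f2 = eeg_to_formants(band_power, w1, w2)
    audio = synthesize_vowel(f1, f2)
    print(f"F1={f1:.0f} Hz, F2={f2:.0f} Hz, {audio.size} samples synthesized")
```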

“The actual BCI is the software because it is the algorithm that transforms brain waves into acoustic and visual representations of speech,” said Brumberg, whose Ph.D. is in the area of computational neuroscience of speech motor control.

The purpose of the current project, funded by a three-year grant from the National Institute on Deafness and Other Communication Disorders, is to demonstrate the feasibility of vowel sounds as a feedback mechanism for a speech BCI.

Why vowel sounds? Brumberg came to KU three years ago after a rare opportunity: recording speech-production data directly from an individual who had electrodes implanted in the speech-motor area of his brain as he thought about producing vowel sounds. It was data that had never been examined before.

“It became clear to us that we could just reverse the process and see if what we thought was being represented by brain activity could then be generated from that activity,” Brumberg said.

Brumberg and his collaborators discovered reliable brain signals related to certain acoustic features of vowels, features that a BCI computer model could potentially reproduce through a speech synthesizer.

And, incidentally, it is possible to say things without consonants, such as, “I owe you a yo-yo,” said Brumberg.

“If we can provide a device that synthesizes speech, the person with locked-in syndrome who has intact perception perceives what they produced,” he said. “A key part of this perception-production loop is the ability of the perceptual system to tell the production system if it’s correct. And if it’s incorrect, how to correct it.”

Brumberg hopes that, with the synthesized speech feeding auditory information back to the brain’s speech-perception areas while the production system controls the synthesizer, the intact connection between those two regions would further tune the system and help restore fluent speech.
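In Brumberg’s account it is the user’s own perception-production loop that does the tuning; as a rough computational analogy only, the toy loop below adapts an assumed linear decoder instead, using the gap between target and produced formant frequencies as the corrective feedback. The LMS-style update, the learning rate, and the target values are all illustrative assumptions, not the project’s method.

```python
# Toy analogy of the perception-production feedback loop described above.
# Assumption: a linear decoder maps 60-channel EEG features to two formant
# frequencies; the "perceived" error between target and produced formants
# drives a simple LMS-style correction. None of this is from the article.
import numpy as np

rng = np.random.default_rng(1)
n_channels = 60
target_formants = np.array([700.0, 1200.0])        # roughly an /a/-like vowel (assumed)

W = rng.normal(scale=0.01, size=(2, n_channels))   # assumed linear decoder weights
bias = np.array([500.0, 1500.0])                   # deliberately off-target starting point
learning_rate = 0.005

for trial in range(1, 1001):
    eeg = rng.normal(size=n_channels)              # stand-in for one trial's EEG features
    produced = W @ eeg + bias                      # "production": decoder output in Hz
    error = target_formants - produced             # "perception": how far off it sounded
    W += learning_rate * np.outer(error, eeg)      # feedback-driven correction
    bias += learning_rate * error
    if trial % 250 == 0:
        print(f"trial {trial}: |formant error| = {np.abs(error).round(1)} Hz")
```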

Ultimately, said Brumberg, the project will be the basis for future BCI designs that could run on mobile devices, using low-dimensional speech synthesizers driven by non-invasive EEG to produce both consonants and vowels continuously.

“The outcome of this research has the potential to improve the quality of life for mute, paralyzed patients, or others lacking the ability to speak, and help inform the development of improved brain-computer interface applications for communication,” said Brumberg.