Semantic Decoder: AI Converts Thoughts into Written Words

The Semantic Decoder, a revolutionary artificial intelligence system, can translate brain activity into continuous text, enabling communication for those unable to speak due to conditions like stroke.

Researchers at The University of Texas at Austin have developed a new AI system called a semantic decoder, which can translate a person’s brain activity into a continuous stream of text. The system could help people who are mentally conscious but physically unable to speak, such as those debilitated by strokes, to communicate intelligibly again. The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The breakthrough could transform the way people with speech impairments communicate.

Advancements in Neural Language Decoding: Potential for Speech Impairment Support

Researchers have developed a noninvasive, transformer-based language decoding system that can interpret brain activity recorded with fMRI. The system requires no implants: it is trained while participants listen to podcasts in the scanner, and it does not rely on predetermined word lists. The technology could change how we interact with machines, allowing us to communicate with them using only our thoughts, and could help people with speech impairments communicate more easily.
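The idea behind this kind of decoder can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors' code: it assumes the decoder pairs a language model, which proposes candidate word sequences, with an encoding model that predicts the brain response each candidate would evoke, keeping (beam-search style) the candidates whose predicted responses best match the observed scan. Here a random word-embedding average stands in for a real encoding model, and a tiny fixed vocabulary stands in for a language model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and a stand-in "fMRI feature" dimension.
VOCAB = ["she", "hasn't", "started", "learning", "to", "drive", "yet"]
DIM = 16

# Toy encoding model: a fixed random embedding per word; a sequence's
# predicted brain response is the mean of its word embeddings.
EMBED = {w: rng.normal(size=DIM) for w in VOCAB}

def predicted_response(words):
    """Predict the (toy) brain response a word sequence would evoke."""
    return np.mean([EMBED[w] for w in words], axis=0)

def decode(observed, beam_width=3, length=7):
    """Beam search: extend candidate sequences word by word, keeping
    those whose predicted responses best match the observed signal."""
    beams = [[w] for w in VOCAB]
    for _ in range(length - 1):
        candidates = [b + [w] for b in beams for w in VOCAB]
        candidates.sort(
            key=lambda c: np.linalg.norm(predicted_response(c) - observed)
        )
        beams = candidates[:beam_width]
    return beams[0]

# Simulate a scan evoked by a target phrase, then decode from it.
target = ["she", "hasn't", "started", "learning", "to", "drive", "yet"]
observed = predicted_response(target) + rng.normal(scale=0.01, size=DIM)
print(decode(observed))
```

Because the decoder matches predicted to observed responses rather than reading out words directly, it recovers the gist of a thought rather than an exact transcript, consistent with the paraphrased outputs the researchers report.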

Semantic Decoder: Applications for Individuals with Communication Disorders

The technique is noninvasive and surpasses previous methods by generating longer, more comprehensive sentences. Training involves participants listening to podcasts in the scanner rather than rehearsing word lists. In decoding experiments, the system translated participants’ thoughts into text, capturing phrases like “She hasn’t started learning to drive yet.” The technology has diverse applications, from aiding individuals with communication disorders to offering a novel way of interacting with computers.

The technology works only with cooperative participants who willingly took part in training the decoder. Results for individuals on whom the decoder had not been trained were incomprehensible, and when trained participants actively resisted, the output was likewise unusable. The researchers emphasize preventing misuse of the technology and ensuring it is used only with consent and for people’s benefit.

Potential Transfer to Portable Brain-Imaging Systems

To further assess its effectiveness, the fMRI-based system was tested on participants watching silent videos during scanning; it was able to accurately describe certain events from the videos. The system is not yet practical outside the laboratory because of the time participants must spend in an fMRI machine, but the researchers believe the approach could transfer to more portable brain-imaging systems such as functional near-infrared spectroscopy (fNIRS). fNIRS measures blood flow in the brain much as fMRI does, though at lower spatial resolution.

The research received support from the Whitehall Foundation, the Alfred P. Sloan Foundation, and the Burroughs Wellcome Fund. Amanda LeBel, a former research assistant, and Shailee Jain, a graduate student in computer science, also contributed. Alexander Huth and Jerry Tang have filed a PCT patent application associated with this research.

Chris Griffin
Chris has had a career as an advisor to the tech industry, incubating start-ups. He contributes his expertise by covering the latest developments he sees in blockchain.