“Mind reading” may be about to become a reality – and in the most literal sense conceivable, as a new discovery from researchers at the University of Technology Sydney’s GrapheneX-UTS Human-centric Artificial Intelligence Centre sees thoughts turned into words on a screen.
“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Chin-Teng Lin, Distinguished Professor at the UTS School of Computer Science and Director of the GrapheneX-UTS HAI Centre.
“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding,” noted Lin, who led the study. “Integrating large language models is also opening new frontiers in neuroscience and AI.”
In a study that has been selected as a spotlight paper at the NeurIPS conference, an annual meeting of researchers in artificial intelligence and machine learning, participants silently read passages of text. At the same time, an AI model called DeWave – using only their brainwaves as input – decoded those words and displayed them on a screen.
While it’s not the first device able to convert brain impulses into English, it’s the only one so far that requires neither brain implants nor access to a full MRI scanner. It also has an advantage over predecessors that need additional input such as eye-tracking software, the researchers explain, since the new technology can be used with or without such additions.
Instead, users need only wear a cap that captures their brain activity via electroencephalogram (EEG) — considerably more practical than an eye-tracker (not to mention an MRI machine). The trade-off is that the signal is noisier than that acquired from implants, the researchers said – but even so, the device performed well in experiments. Accuracy assessments using the BLEU metric – a way to measure how closely a machine-translated output resembles a reference text by assigning it a score between 0 and 1 – placed the new tech at roughly 0.4.
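To make the 0.4 figure concrete, here is a minimal sketch of how a sentence-level BLEU score is computed: clipped n-gram precisions combined by a geometric mean, scaled by a brevity penalty. This is a simplified illustration of the general metric, not the researchers' own evaluation code; the example sentences are hypothetical, echoing the "the man" vs. "the author" error described below.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, candidate, max_n=2):
    """Simplified sentence-level BLEU in [0, 1]: geometric mean of
    clipped n-gram precisions times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        cand_counts = Counter(ngrams(cand, n))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(len(cand) - n + 1, 0)
        if total == 0 or overlap == 0:
            return 0.0
        precisions.append(overlap / total)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# A near-miss decoding (synonym-style error) still earns partial credit:
score = bleu("the author opened the door", "the man opened the door")
```

A decoded sentence that gets most words right but swaps a noun for a near-synonym, as DeWave tends to, still scores well above zero under this metric – which is why a system-wide average around 0.4 can coexist with readable output.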
That, unfortunately, isn’t as good as some of the solutions that rely on more invasive technologies. “The model is more adept at matching verbs than nouns,” noted Yiqun Duan, first author of the paper describing the study – and “when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’.”
“We think [these errors are] because when the brain processes these words, semantically similar words might produce similar brain wave patterns,” Duan added.
However, the researchers believe they can improve this score to around 0.9 – a level comparable to standard language translation tools. They already have an edge, they think, from having run their experiments on 29 subjects — it may not sound like a lot, but it’s an order of magnitude more than many other decoding studies.
“Despite the challenges, our model yields meaningful results,” Duan said, “aligning keywords and forming similar sentence structures.”
The findings were presented at the NeurIPS conference, and a preprint can be downloaded from arXiv. It is yet to be peer-reviewed.