In February 2019, in a surgical suite at the University of California, San Francisco, a man known to the public only as Pancho had a paper-thin array of electrodes placed directly on the surface of his brain. A brainstem stroke in his early twenties had left him with anarthria, the total loss of articulate speech. He could groan. He could grunt. But there were no longer words in any language. The surgery that day promised to fix none of that. What it offered was the chance to listen closely to what his brain was still trying to say.
Five years later, that listening produced something that appeared in both mainstream and science news feeds: a brain-computer interface that could decode Pancho’s attempted speech in real time, in both Spanish and English, and display it as text on a screen. Not one language. Two. That distinction matters more than it may first appear, and explaining why touches on a genuinely fascinating corner of how language, identity, and neuroscience interact, in ways scientists are still working to map.
At the heart of the accomplishment is an artificial neural network, a machine learning system trained not on text data or internet archives but on the particular electrical patterns Pancho’s brain produced when he tried to move his vocal tract. Each time he attempted to form a word, his motor cortex fired in a pattern specific to the intended articulation, and the AI learned to pick those patterns up. The system was first trained on English, and by 2021 it could decode Pancho’s attempted English fairly accurately. But English was his second language. His earliest memories, his family, and his identity were all bound up with Spanish. The UCSF team understood that treating a bilingual patient as though only one language existed was never going to be an acceptable end point.
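The published pipeline is considerably more elaborate, but the core idea, a classifier that maps a short window of cortical activity to a probability over candidate words, can be sketched in a few lines of Python. Everything below is illustrative: the channel count, window length, 50-word vocabulary, and the `WordDecoder` class are assumptions made for the sketch, not details of UCSF's implementation.

```python
# Minimal sketch of the decoding idea, not UCSF's actual pipeline.
# Assumed for illustration: 128 ECoG channels, high-gamma activity
# binned into 100 time steps per attempted word, a 50-word vocabulary.
import torch
import torch.nn as nn

N_CHANNELS = 128    # electrodes over the speech motor cortex
N_TIMESTEPS = 100   # feature windows per attempted word
VOCAB = 50          # candidate words the classifier can emit

class WordDecoder(nn.Module):
    """Maps a window of cortical activity to scores over candidate words."""
    def __init__(self):
        super().__init__()
        # The GRU summarizes how the spatial pattern evolves over time.
        self.rnn = nn.GRU(N_CHANNELS, 256, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(256, VOCAB)

    def forward(self, x):  # x: (batch, N_TIMESTEPS, N_CHANNELS)
        _, hidden = self.rnn(x)
        return self.classifier(hidden[-1])  # word logits

decoder = WordDecoder()
fake_trial = torch.randn(1, N_TIMESTEPS, N_CHANNELS)  # stand-in for real ECoG
word_probs = torch.softmax(decoder(fake_trial), dim=-1)
print(word_probs.argmax().item())  # index of the most likely word
```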
| Category | Details |
|---|---|
| Technology Name | Bilingual Brain-Computer Interface (BCI) / Speech Neuroprosthesis |
| Lead Researcher | Dr. Edward Chang, Neurosurgeon & Co-Director, Center for Neural Engineering and Prostheses |
| Institution | University of California, San Francisco (UCSF) |
| Patient | “Pancho” — paralyzed by brainstem stroke at age 20; native Spanish speaker, learned English as adult |
| Implant Type | High-density electrocorticography (ECoG) grid — 120+ electrodes on speech motor cortex surface |
| Implant Date | February 2019 |
| Published Research | Nature Biomedical Engineering, May 20, 2024 |
| Languages Decoded | Spanish and English (simultaneously, user-switchable) |
| AI Accuracy | Approximately 75% in real-time decoding |
| Key Capability | Neural pattern differentiation between two languages; real-time text output |
| Competing Companies | Neuralink, Paradromics, Synchron |
| Future Goal | Decoding inner speech (thought-to-text); 150+ words per minute; universal decoders |

When they looked closer, they found something genuinely unexpected. Even twenty years after his stroke, Pancho’s brain still produced distinct cortical activity patterns for each language, not in separate regions, but as recognizably different neural signatures within the same speech-related areas. That meant the team did not have to build and run two entirely separate decoding systems in parallel. Instead, they used a method known as transfer learning, reusing what they had learned training the English decoder to speed up training the Spanish one. Because the underlying neural structures of the two languages were sufficiently similar, knowledge of one helped unlock the other. Between 2022 and 2024, Pancho could switch between Spanish and English in the middle of a conversation, whichever he preferred, and the system followed him.
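In practice, transfer learning of this kind often means reusing the representation a trained network has already learned and fitting only a new output layer on the scarcer second-language data. Here is a minimal sketch of that general technique, continuing the hypothetical `WordDecoder` above; it is an illustration of the approach, not the paper's exact recipe.

```python
# Illustrative transfer-learning step: reuse the English-trained GRU and
# fit only a new output head on (much scarcer) Spanish trials.
# WordDecoder, N_TIMESTEPS, and N_CHANNELS come from the earlier sketch.
import torch
import torch.nn as nn

SPANISH_VOCAB = 50  # hypothetical Spanish word set

english_decoder = WordDecoder()  # assume already trained on English trials

for param in english_decoder.rnn.parameters():
    param.requires_grad = False   # keep the shared representation fixed

english_decoder.classifier = nn.Linear(256, SPANISH_VOCAB)  # fresh head

optimizer = torch.optim.Adam(english_decoder.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on a small batch of Spanish trials.
spanish_batch = torch.randn(8, N_TIMESTEPS, N_CHANNELS)
spanish_labels = torch.randint(0, SPANISH_VOCAB, (8,))
optimizer.zero_grad()
loss = loss_fn(english_decoder(spanish_batch), spanish_labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```

Because only the output head is trained, far fewer Spanish trials are needed than it took to train the English decoder from scratch, which is the practical payoff of the shared neural structure the team observed.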
Reading how this unfolded over five years of meticulous work, you get the sense the researchers realized they were doing more than engineering. Dr. Chang’s team made the connection between language and identity explicit: letting bilingual patients communicate in their native tongue is not just a clinical courtesy but the restoration of something profoundly personal that paralysis had taken away. Technical papers rarely use that framing, and it was no accident. Roughly half the world’s population is bilingual. Speech restoration technology designed only for monolinguals was never going to be enough.
The current system achieves about 75% decoding accuracy in real time, which is impressive but, frankly, still a work in progress. Natural conversation runs at 150 words per minute or more; the system is not there yet. Future iterations are being designed to close that gap, and researchers elsewhere are building on the foundation this work laid: Paradromics, which as of early 2026 was heading toward human trials with sophisticated high-density electrodes, and Neuralink, which is developing wireless implants aimed at both movement and communication. How quickly accuracy and speed can improve, and whether the transfer learning approach will carry over as smoothly to other language pairs, remains unclear, but the structural logic of the breakthrough holds.
The harder problems are not purely technical. Because every user’s neural patterns differ, current systems require extensive individual calibration: the AI must be trained for each person. Researchers are working toward what they call universal decoders, models that could eventually be shared across users and need far less individual training. And the question that tends to unnerve ethicists is this: as these systems advance from decoding attempted speech to decoding inner speech, the words you merely think without trying to say them, the line between assistive technology and something that reads your mind begins to blur in ways no one has fully worked out how to handle.
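One common way to picture a universal decoder, offered here purely as an assumption rather than any lab's published design, is a backbone shared across users plus a small per-user alignment layer, so that calibrating a new user means training only a tiny fraction of the model:

```python
# Sketch of the "universal decoder" idea: a backbone shared across users,
# plus a small per-user adapter that is all a new user must calibrate.
# Architecture and names are hypothetical, not any lab's published design.
import torch
import torch.nn as nn

N_CHANNELS, N_TIMESTEPS, VOCAB = 128, 100, 50

class UniversalDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-user adapter: maps this user's electrode layout into the
        # "canonical" channel space the shared backbone expects.
        self.user_adapter = nn.Linear(N_CHANNELS, N_CHANNELS)
        self.backbone = nn.GRU(N_CHANNELS, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, VOCAB)

    def forward(self, x):  # x: (batch, N_TIMESTEPS, N_CHANNELS)
        _, hidden = self.backbone(self.user_adapter(x))
        return self.head(hidden[-1])

model = UniversalDecoder()

# Calibrating a new user: freeze everything shared, train only the adapter.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("user_adapter")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"calibrating {trainable} of {total} parameters per user")
```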
Neuralink’s first human implant drew far more media attention, which makes sense: Elon Musk reliably generates headlines, and the company has sweeping ambitions. But the UCSF bilingual implant achieved something more precise and perhaps more immediately significant: it gave a paralyzed man back his native tongue. Not an imitation. Not a translation. His words. That is a narrower accomplishment than Neuralink’s long-term goals, but it is also a real one, with a real person behind it, and it gives a tangible indication of where this field is headed rather than where it merely aspires to be.
