You might be able to type with your thoughts

To many, the concept of typing with your thoughts might sound like fantasy, but Facebook has been funding research into it for the past couple of years. And the effort has not been in vain.

This week, Facebook's researchers provided an update on the project's progress, an ambition that some might see as extending the company's extensive social networking tentacles to the inside of our own minds.

Following the latest studies on human volunteers at the University of California, San Francisco (UCSF), the company's strategy for a non-invasive, wearable, brain-reading computer interface is making progress. Ultimately, it could enable those who have lost the capacity to vocalize words to communicate through their thoughts in real time instead, giving them a whole new lease on life.

An update on the research, published in Nature on Tuesday, July 30, shows that the Facebook-backed engineering team has been able to create so-called “voice decoders” that can understand what an individual intends to say by analyzing their brain signals.

“Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly using residual eye movements or muscle twitches to control a computer interface,” neuroscientist Eddie Chang, who is working on the studies, said in a release. “But in many cases, information needed to produce fluent speech is still there in their brains.” The technology being developed aims to let them express it.

Together with postdoctoral investigator David Moses, Chang’s team performed research using electrodes implanted into the brains of three volunteers at the UCSF Epilepsy Center to achieve its objective of developing an efficient and safe brain-computer interface.

The experiments were aimed at creating a technique to identify the volunteers' spoken answers based solely on their brain activity. After much effort, the scientists reached the point where they could see, on a computer screen, a word or sentence decoded from brain activity as the participant spoke it.
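To make the idea of "decoding" concrete, here is a toy sketch, not the UCSF team's actual method, of how a restricted set of phrases might be classified from neural-activity feature vectors. Everything here (the phrase list, the four-dimensional features, the nearest-centroid classifier, the synthetic data) is an illustrative assumption; real decoders work on recordings from implanted electrodes with far richer models.

```python
# Toy illustration of phrase decoding from feature vectors.
# Hypothetical setup: each recorded trial is reduced to a small
# feature vector, and each candidate phrase is modeled by the mean
# (centroid) of its training trials.
import math
import random

PHRASES = ["yes", "no", "water", "hello"]

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_trials):
    """labeled_trials: dict mapping phrase -> list of feature vectors."""
    return {phrase: centroid(trials) for phrase, trials in labeled_trials.items()}

def decode(model, features):
    """Return the phrase whose centroid is closest to the features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda phrase: dist(model[phrase], features))

# Synthetic demo: give each phrase a distinct "signature" direction,
# then generate noisy training trials around it.
random.seed(0)
signatures = {p: [1.0 if i == k else 0.0 for i in range(4)]
              for k, p in enumerate(PHRASES)}
trials = {p: [[s + random.gauss(0, 0.1) for s in sig] for _ in range(20)]
          for p, sig in signatures.items()}
model = train(trials)
print(decode(model, [0.9, 0.05, -0.1, 0.0]))  # → yes
```

The restriction to a small, fixed phrase set is exactly why such systems can work at all with limited data, and it mirrors the limitation Moses describes in the next paragraph.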

However, the technology can at present recognize only a very restricted set of phrases, though Moses said that in future studies “we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity.”

Chang said that his laboratory “was mainly interested in fundamental questions about how brain circuits interpret and produce speech,” adding, “With the advances we’ve seen in the field over the past decade, it became clear that we might be able to leverage these discoveries to help patients with speech loss, which is one of the most devastating consequences of neurological damage.”
