WASHINGTON (AFP) – NASA has developed a computer program that comes close to reading thoughts not yet spoken, by analyzing nerve commands to the throat. It says the breakthrough holds promise for astronauts and the handicapped.
“A person using the subvocal system thinks of phrases and talks to himself so quietly it cannot be heard, but the tongue and vocal cords do receive speech signals from the brain,” said developer Chuck Jorgensen of NASA’s Ames Research Center, Moffett Field, California.

Jorgensen’s team found that sensors placed under the chin and on either side of the Adam’s apple pick up the brain’s commands to the speech organs, allowing the subauditory, or “silent,” speech to be captured.

The team concluded that the method could be useful on space missions or in other difficult working environments, such as air traffic control towers, and could even make current voice-recognition software more accurate.

“What is analyzed is silent, or subauditory, speech, such as when a person silently reads or talks to himself,” Jorgensen said.

“Biological signals arise when reading or speaking to oneself with or without actual lip or facial movement.”

In early trials, the program recognized six words and 10 numbers that the team repeated subvocally, with 92 percent accuracy.

The first words were “stop,” “go,” “left,” “right,” “alpha,” and “omega.”

Then, the inventors gave each letter of the alphabet a set of digital coordinates.

“We took the alphabet and put it into a matrix — like a calendar,” Jorgensen said.

“We numbered the columns and rows and we could identify each letter with a pair of single-digit numbers.

“So we silently spelled out ‘NASA’ and then submitted it to a well-known Web search engine. We electronically numbered the Web pages that came up as search results. We used the numbers again to choose Web pages to examine. This proved we could browse the Web without touching a keyboard.”
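The matrix scheme Jorgensen describes can be sketched in a few lines of Python. The article does not specify the grid’s dimensions, so the 5-row, 6-column layout below is a hypothetical arrangement chosen only so that every letter maps to a pair of single-digit numbers:

```python
# Sketch of the letter-matrix coordinate scheme described above.
# The 5x6 grid layout is an assumption; the article gives no dimensions.
import string

COLS = 6
LETTERS = string.ascii_uppercase  # 26 letters fit in a 5-row, 6-column grid

# Map each letter to a (row, column) pair of single-digit numbers.
coords = {ch: (i // COLS, i % COLS) for i, ch in enumerate(LETTERS)}

def spell(word):
    """Return the sequence of digit pairs that silently spells a word."""
    return [coords[ch] for ch in word.upper()]

print(spell("NASA"))  # [(2, 1), (0, 0), (3, 0), (0, 0)]
```

A user would subvocalize only the digits of each pair, so the recognizer needs to distinguish just ten numbers rather than a full vocabulary, which matches the small word set used in the early trials.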

The next trial will command a robot similar to the Rovers currently exploring Mars.

“We can have the model Rover go left or right using silently ‘spoken’ words.

“A logical spin-off would be that handicapped persons could use this system for a lot of things,” he said, as well as persons wanting to speak by telephone without being overheard.

To reach that goal, the team plans to build a dictionary of English words recognizable by speech recognition software.

The equipment will need improved amplifiers to strengthen the electrical nerve signals, which currently must be run through noise-reduction equipment before they can be analyzed.

“The keys to this system are the sensors, the signal processing and the pattern recognition, and that’s where the scientific meat of what we’re doing resides,” Jorgensen said.