I sent an e-mail today to Chuck Jorgensen of NASA's Ames Research Center about an idea I have that relates to his recent breakthrough using nerve signals from silent (subvocal) speech to communicate with machines and people:
Chuck Jorgensen et al,
Congratulations to you and your team for your work on computerizing subvocal speech. This breakthrough is sure to have wide-reaching implications in a number of fields and in the way we interact with our computers. In particular, I'm very excited about what this will mean for the disabled. Moreover, we are slowly but surely inching closer to the NUI (neural user interface) and, by implication, a kind of technologically endowed form of person-to-person telepathy.
As I thought about your breakthrough, it occurred to me that the same type of system could work for the aural receptors in the ear. The stereocilia of the Organ of Corti convert incoming acoustic signals and pass them along to the brain. One can assume that, by feeding this pathway a compatible signal (not unlike the data you're intercepting en route to the vocal cords), "virtual acoustic" information could be fed directly into the brain. To the person, it would feel like hearing sound (it would probably register as an acoustic hallucination), but in reality it would just be a neural signal tapped into the auditory nerve. (I believe cochlear implants work on this principle.)
So, it seems reasonable to me that if you take your subvocal speech signal, convert and transmit it somehow to the auditory nerve, you can have a soft form of mind-to-mind communication.
I wouldn't mind hearing your thoughts on the matter (pun intended). I would be completely unsurprised if you have considered this already.
Dr. Jorgensen was kind enough to write me back:
Dear Mr. Dvorsky,
Thank you for your most interesting and insightful letter. I found your suggested approach a very penetrating insight. We have not considered your particular method but have been interested in finding whether we can directly correlate auditory speech signals and subvocal signals recorded at the same time by learning nonlinear mapping equations to relate one to the other. We have also explored some research on direct neural signal injection performed at other universities, but it is outside our Lab's current charter and expertise. We are most interested in a totally non-invasive process, starting initially with understanding the highly convolved surface-measured signals, in contrast to the work which has focused on embedded neural probes or surgical intrusions such as used for highly handicapped patients.
Thanks for your interest in our work -
Dr. Chuck Jorgensen
NASA Ames Research Center
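As an aside, the nonlinear mapping Dr. Jorgensen mentions (relating simultaneously recorded subvocal and auditory speech signals) is essentially a regression problem. Below is a rough toy sketch of what learning such a mapping might look like. To be clear, this is entirely my own illustration: the feature dimensions, the synthetic data, and the tiny network are hypothetical stand-ins, not anything from the Ames group.

```python
# Toy sketch: learn a nonlinear mapping from "subvocal" feature vectors
# to "audio" feature vectors. All data here is synthetic; the dimensions
# and the small network are hypothetical stand-ins for real aligned data.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_sub, n_audio, n_hidden = 500, 8, 6, 32

# Synthetic stand-in for aligned recordings: "audio" features are a
# nonlinear function of the "subvocal" features plus a little noise.
X = rng.normal(size=(n_samples, n_sub))
true_W = rng.normal(size=(n_sub, n_audio))
Y = np.tanh(X @ true_W) + 0.05 * rng.normal(size=(n_samples, n_audio))

# One hidden layer, trained with plain gradient descent on mean squared error.
W1 = rng.normal(scale=0.1, size=(n_sub, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_audio))
b2 = np.zeros(n_audio)

lr = 0.5
for step in range(2000):
    H = np.tanh(X @ W1 + b1)      # hidden activations
    pred = H @ W2 + b2            # predicted audio features
    err = pred - Y
    loss = np.mean(err ** 2)

    # Backpropagate the mean squared error.
    dpred = 2.0 * err / (n_samples * n_audio)
    dW2 = H.T @ dpred
    db2 = dpred.sum(axis=0)
    dH = (dpred @ W2.T) * (1.0 - H ** 2)
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final training MSE:", round(float(loss), 4))
```

In practice you would swap the synthetic arrays for aligned subvocal EMG features and audio spectral features, but the basic idea of fitting a nonlinear mapping between the two streams would look much the same.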