An article published Tuesday in the journal Scientific Reports details how the team at Columbia University’s Zuckerman Mind Brain Behavior Institute used deep-learning algorithms, together with the same type of speech-synthesis technology that powers devices like Apple’s Siri and the Amazon Echo, to turn thought into “accurate and intelligible reconstructed speech.” The research was reported earlier this month, but the journal article goes into far greater depth.
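The article does not spell out the team’s pipeline, but the general approach it describes, decoding speech from brain activity, can be framed as a regression problem: a deep network maps frames of neural recordings to frames of an audio representation, which a vocoder then renders as sound. The minimal PyTorch sketch below illustrates that idea only; the feature sizes, architecture, training loop, and random stand-in data are illustrative assumptions, not the Columbia team’s actual model.

```python
# Illustrative sketch only: a generic regression network mapping simulated
# neural features to audio-spectrogram frames. This is NOT the published
# architecture; the data is random stand-in for real neural recordings.
import torch
import torch.nn as nn

N_NEURAL_FEATURES = 128   # assumed size of a neural feature vector per frame
N_SPECTROGRAM_BINS = 80   # assumed spectrogram resolution per frame

# Simple multilayer perceptron: neural frame in, spectrogram frame out.
model = nn.Sequential(
    nn.Linear(N_NEURAL_FEATURES, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, N_SPECTROGRAM_BINS),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic training pairs: (neural activity frame, target spectrogram frame).
neural = torch.randn(1024, N_NEURAL_FEATURES)
spect = torch.randn(1024, N_SPECTROGRAM_BINS)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(neural), spect)
    loss.backward()
    optimizer.step()

# In a real system, the predicted spectrogram frames would be handed to a
# vocoder (the kind of synthesis tech behind Siri) to produce audible speech.
```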
The human-computer framework could eventually give patients who have lost the ability to speak a way to communicate verbally, using their thoughts to drive a synthesized robotic voice.
Source: CNET