Will listening in noise be solved by this "brain-reading" hearing aid?

 


The long-unsolved "cocktail party" problem of how to listen to a specific speaker in noisy surroundings may have a neuroengineered solution: a brain-controlled AI hearing aid developed in a Columbia University lab.

The key to turning this idea into a mainstream, non-invasive device—perhaps within five years—is a neural-network artificial intelligence (AI) model that monitors the brain itself, tracking how it singles out one sound source among many, such as the friend talking to you at a party. Dr. Nima Mesgarani, of Columbia University's Zuckerman Institute (New York), monitored people's brain activity as they listened to simultaneous voices, confirming that the brain tracks only the voice it is focusing on.

Thus far, making use of the brain's own tracking has depended on electrodes implanted invasively into the brain. A 2017 attempt required the system to be pre-trained on specific voices, but the latest device works with voices it has not heard before. Mesgarani plans to move this forward with non-invasive sensors attached to the skin, and a new microphone system that separates different voices using AI modeled loosely on neurons. In practice, the hearing device will monitor the brain to determine who (or where) the wearer is listening to, then amplify that source.
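In outline, such a device combines two stages: an AI model separates the mixture into candidate voices, and a decoder infers from brain activity which voice is being attended, so that voice can be amplified. The following is a minimal sketch of the selection-and-amplification step only, not Mesgarani's actual system; it assumes the voices have already been separated and an amplitude envelope has already been decoded from neural signals, and all function names and the gain value are illustrative.

```python
import numpy as np

def select_attended_source(neural_envelope, source_envelopes):
    """Pick the separated voice whose amplitude envelope best matches
    the envelope decoded from brain activity (Pearson correlation)."""
    correlations = [np.corrcoef(neural_envelope, env)[0, 1]
                    for env in source_envelopes]
    return int(np.argmax(correlations))

def remix(sources, attended_idx, gain_db=9.0):
    """Boost the attended voice relative to the others, then
    normalize the mix to avoid clipping."""
    gain = 10 ** (gain_db / 20)
    mix = sum(gain * s if i == attended_idx else s
              for i, s in enumerate(sources))
    return mix / np.max(np.abs(mix))

# Toy demo: two "voices" as amplitude-modulated tones; pretend the
# decoded neural envelope tracks voice 0 (the attended speaker).
t = np.linspace(0, 1, 8000)
env0 = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))   # envelope of voice 0
env1 = 0.5 * (1 + np.sin(2 * np.pi * 5 * t))   # envelope of voice 1
sources = [env0 * np.sin(2 * np.pi * 220 * t),
           env1 * np.sin(2 * np.pi * 330 * t)]
idx = select_attended_source(env0, [env0, env1])  # → 0
output = remix(sources, idx)
```

Correlating a neurally decoded envelope against each candidate voice's envelope is a common approach in auditory-attention-decoding research; a real device would do this continuously over short windows of ongoing audio and brain data.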

Mesgarani, principal investigator at the Zuckerman Institute, says his team expects a five-year period to make this concept an everyday solution for people with hearing loss. His lab has published a short animation on YouTube to explain the idea.

Source: The Guardian

P.W.