- Published on 21 August 2017
Researchers from Columbia University School of Engineering and Applied Science, New York (USA), have used deep neural network models to study auditory attention decoding methods and have come closer to making cognitively controlled hearing aids a reality.
The commonly cited “cocktail party problem” has been a major challenge for the hearing aid industry since its beginnings. Although today’s hearing aids can reduce or even remove background noise, they cannot help a user follow a single conversation in a crowd because they have no way of knowing which speaker the user wants to focus on. Decoding brain activity to determine which speaker a listener is attending to would therefore be highly valuable in assistive hearing for people with a hearing impairment.
The team of researchers carried out auditory attention decoding (AAD) studies in six subjects who were undergoing clinical treatment for epilepsy, and their findings were published recently in the Journal of Neural Engineering.
Nima Mesgarani, associate professor of electrical engineering, and his team developed an end-to-end system that receives a single audio channel containing a mixture of speakers, along with the listener’s neural signals. The system automatically separates the individual speakers in the mixture and determines which one is being listened to. This then makes it possible to amplify the attended speaker’s voice relative to the others, explains ScienceDaily.
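The pipeline described above can be illustrated with a minimal sketch. This is not the authors’ implementation (which uses deep neural networks for both speaker separation and neural decoding); it only shows the final decoding-and-remixing logic under the simplifying assumption that the separated speech envelopes and an envelope reconstructed from neural signals are already available. All function names and the toy data are hypothetical.

```python
import numpy as np

def decode_attended_speaker(separated, neural_reconstruction):
    """Correlate each separated speaker's envelope with the envelope
    reconstructed from the listener's neural signals; the best-matching
    speaker is deemed the attended one. (Illustrative stand-in for the
    deep-network decoder used in the actual study.)"""
    scores = [np.corrcoef(s, neural_reconstruction)[0, 1] for s in separated]
    return int(np.argmax(scores)), scores

def remix(separated, attended_idx, gain_db=9.0):
    """Re-mix the separated sources, boosting only the attended speaker."""
    gain = 10 ** (gain_db / 20)
    out = sum(gain * s if i == attended_idx else s
              for i, s in enumerate(separated))
    return out / np.max(np.abs(out))  # normalize to avoid clipping

# Toy demo: two synthetic "speaker envelopes"; the listener attends speaker 1,
# so the neural reconstruction is a noisy copy of that speaker's envelope.
rng = np.random.default_rng(0)
spk = [rng.standard_normal(1000) for _ in range(2)]
neural = spk[1] + 0.5 * rng.standard_normal(1000)
idx, scores = decode_attended_speaker(spk, neural)
mix = remix(spk, idx)
```

In this sketch the decoder picks speaker 1, whose envelope correlates most strongly with the neural reconstruction, and the remixer boosts that speaker by 9 dB before renormalizing the output.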
Source: ScienceDaily; O’Sullivan J, et al. Neural decoding of attentional selection in multi-speaker environments without access to clean sources. Journal of Neural Engineering. 2017 Aug 4;14(5):056001.