Advanced AI: How hearing instruments are using ‘deep neural networks’

 


In the first 2021 technology article compiled by the British Irish Hearing Instrument Manufacturers Association (BIHIMA) for the UK audiology magazine Audio Infos, Oticon’s Thomas Behrens is interviewed on the subject of deep neural networks and how they can improve the sound experience of hearing devices.

With a vaccination programme underway, patients can begin to access much-needed hearing care and benefit from the technology advances this industry continues to deliver at speed. BIHIMA explores some of these latest advances and how patients will benefit. Thomas Behrens, Senior Director of Oticon’s Centre for Applied Audiology Research, discusses his company's work in artificial intelligence (AI).

BIHIMA: What technology advances are you focusing on at the moment?

Thomas Behrens: We know so much more about the auditory centre in the brain than before, and this has influenced our technology dramatically. We now know that the brain orients first and creates a full ‘sound scene’; it then identifies what it wants to attend to, so it can focus in. We have optimised our technology to reflect these insights. This is not a new journey; we’ve been working on it for ten years now, reinventing the technology: first replacing directionality with open sound, then feedback management with feedback prevention. And now, finally, we are replacing noise reduction with the most advanced AI networks: deep neural networks. This has been our recent focus.

BIHIMA: Tell us more about deep neural networks.

TB: Previous technology had a one-sided focus on speech, which brought a lot of new benefits initially but now needs replacing with technology that allows the brain to be used more naturally. We want our technology to allow the brain to form full sound scenes, so we asked our new AI to provide them. To do this the AI needed access to more detail. The hearing aids we are now using give more contrast and register more detail, so users can hear all the smaller sounds that make up the experience of a full sound scene. They not only focus on voices but also perceive the background sounds and allow users to switch focus, which is what we do naturally. In the past, sound processing was done by engineers coming up with mathematical algorithms that could filter and prioritise one sound over another, but these algorithms were limited by our ability to see differences between sounds. This is something computer-assisted learning systems such as deep learning are really good at: they can find patterns. In the past a hearing instrument was all about the algorithms. What is important now is the library of sounds AND the learning algorithm, which updates the deep neural network during the learning phase to ensure it is integrating the pattern recognition.
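To make the shift from hand-written algorithms to learned pattern recognition concrete, here is a minimal, purely illustrative training-loop sketch in Python (PyTorch). Oticon has not published its network design; the architecture, feature shapes, and random stand-in data below are assumptions for illustration, not the actual system.

```python
import torch
import torch.nn as nn

# Purely illustrative: a small network that maps a frame of audio features
# (e.g. a log-mel spectrum) to a per-band gain. Oticon has not published its
# architecture; the layer sizes and stand-in data here are assumptions.
class SceneNet(nn.Module):
    def __init__(self, n_bands: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_bands), nn.Sigmoid(),  # gain in [0, 1] per band
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SceneNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# The "library of sounds": pairs of noisy input features and ideal target
# gains. Random tensors stand in for the recorded sound scenes.
noisy = torch.rand(256, 64)
target_gain = torch.rand(256, 64)

# The learning phase: the learning algorithm repeatedly updates the network
# so that its pattern recognition improves.
for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(noisy), target_gain)
    loss.backward()
    optimiser.step()
```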

“The hearing aid learns in the same way we learn as humans: when we sleep, our brain learns from all the experiences heard that day. Similarly, the deep learning algorithm integrates learnings into the deep neural network and then assimilates them.”

BIHIMA: How do you harness this new AI in a hearing instrument?

TB: We put this deep neural network on board the hearing aid, in a powerful chip called Polaris. Cloud computing is inherently slow (transferring sound to the cloud and back to the instrument is not instantaneous), so we needed to put the technology on board and then find a way to optimise it. Today our instruments are 16 times more powerful than they were just two years ago, which gives us the computing power needed for the deep neural network.
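A rough back-of-the-envelope calculation shows why the network has to run on the chip rather than in the cloud. The sample rate and frame size below are assumptions for illustration, not published Polaris figures.

```python
# Illustrative numbers only: Oticon has not published Polaris frame sizes.
SAMPLE_RATE_HZ = 16_000   # assumed audio sample rate
FRAME_SAMPLES = 64        # assumed processing block size

frame_deadline_ms = FRAME_SAMPLES / SAMPLE_RATE_HZ * 1000
print(f"per-frame deadline: {frame_deadline_ms:.1f} ms")   # 4.0 ms

# A typical network round trip is tens of milliseconds, an order of magnitude
# over budget, which is why the network must run on the chip itself.
CLOUD_ROUND_TRIP_MS = 50  # assumed, and optimistic
print(f"cloud round trip:   {CLOUD_ROUND_TRIP_MS} ms "
      f"({CLOUD_ROUND_TRIP_MS / frame_deadline_ms:.0f}x over budget)")
```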

BIHIMA: What are the benefits of this new technology for the user?

TB: In summary, we use special spherical microphones to capture high-resolution recordings and create a library of sound. We expose the deep neural network to all the sounds in the library of recorded sound scenes – in Oticon’s case, 12 million sound scenes – and integrate the learnings into the network, which is modelled on the structure of our brains. The hearing aid learns in the same way we learn as humans: when we sleep, our brain learns from all the experiences heard that day. Similarly, the deep learning algorithm integrates learnings into the deep neural network and then assimilates them. In this way we enable users to hear so much more than was possible in the past. The ultimate goal is to be able to continue optimisation with each user; that’s one of the next steps in technology development. This technology holds a lot of promise.

© Oticon. A simple enough community gathering becomes a stern test for someone with hearing loss, involving multiple sound scenes to be learned.

BIHIMA: How have you tested this new technology?

TB: We wanted to understand whether our technology has improved the brain's capability, so we do EEG studies to register the auditory activity in the brain. We tested our new hearing aid with 30 people in a restaurant-type situation, with two people talking against a background of noise. We then compared the neural signature of each sound a person was listening to, to see which sounds the brain was reconstructing. Compared with our previous hearing aids, the new technology provides 30% more information to the brain, and with that extra information people were not only able to hear more but also understood 15% more speech in that restaurant-type situation. We’ve conducted four studies on the new technology so far and published our findings.
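The EEG comparison Behrens describes resembles published ‘stimulus reconstruction’ methods, in which a decoder estimates the attended speech envelope from the EEG and the estimate is correlated with the true envelope. The sketch below shows only that final correlation step, with random stand-in data; it illustrates the general technique, not Oticon's actual analysis.

```python
import numpy as np

# Sketch of the comparison step in a stimulus-reconstruction EEG study: a
# decoder (not shown) estimates the attended speech envelope from the EEG,
# and the estimate is correlated with the true envelope. Higher correlation
# is read as more speech information reaching the brain. Random stand-in
# data below; this is not Oticon's actual analysis.
rng = np.random.default_rng(0)

true_envelope = rng.random(1000)                   # envelope of the attended talker
noise = rng.random(1000)
reconstructed = 0.6 * true_envelope + 0.4 * noise  # stand-in decoder output

# Pearson correlation between the reconstruction and the stimulus.
r = np.corrcoef(reconstructed, true_envelope)[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```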

BIHIMA: How do we ensure people can access this new technology as the world starts to open up?

TB: We think the pandemic will accelerate some developments that were already happening, in particular remote care, or what we call ‘blended care’, where the clinic-based and remote approaches blend together. This will accelerate all sorts of technology and will help hearing care professionals reach people with hearing impairment in new ways. Because of lockdown we have unfortunately seen hearing services stopped, especially in ENT, which is considered a tricky area to deal with safely. This means that hearing problems which would otherwise have been treated have got much worse. So we have to accelerate our focus, not only on adopting new technology, but also on developing technology that will connect users with audiologists in new ways.

“We must continue to roll out remote care and create the omnipresent clinic, one that is centred around the audiologist.”

BIHIMA: Do we need to reconsider the way people are accessing support and receiving hearing aids in the future?

TB: Yes, we must continue to roll out remote care and create the omnipresent clinic, one that is centred around the audiologist. In all the analysis we do, we see that great hearing care can only be delivered with an audiologist's support. The process requires at least one face-to-face visit; after this, remote care can be implemented for the follow-on appointments. As technology builds, audiologists can also get a much better picture of how hearing aids are used, and they can deliver better remote support as a result, e.g. fine-tuning the hearing aids remotely. We will learn new ways of providing better service and support in different situations.

We have added more tools to help audiologists serve patients. A good audio and video connection is essential, and as many clinic-based tools as possible need to be made available remotely. Some are trickier: hearing aid verification, for example, relies on putting a tube with a microphone inside the ear to measure sound output, which is difficult to replicate in a remote setting, so there is more work to be done to create alternative tools. Beyond these remote solutions, we also need the technology in hearing aids to adapt to the new situations we face. For example, when someone is wearing a mask a lot of the consonants in speech are lost, so strong noise handling is even more important to make the speech sounds as clear as possible. It is essential we keep innovating and develop technology that is flexible enough to handle the current circumstances.

BIHIMA: Any final thoughts on how technology will help us during Coronavirus?

TB: Connected hearing care has never been more important. Hearing aids with wireless capability and streaming connections are essential so people can stay connected to their loved ones. Our world now is a lot less noisy, which is good for those with hearing impairment, yet communication has never been more challenging when you have to keep your distance. Connecting online, with poor internet and audio connections, is difficult to navigate, and if you can’t see the person, you lose so much of the communication. It’s not an easy time for people. So it’s important that, as manufacturers, we help audiologists provide their essential services in an easy way, to help as many people as possible access care and overcome these challenges.

Source: BIHIMA/Oticon

Olivia Hill