The future of hearing technology lies in GETTING CLOSER TO THE USER

BIHIMA interviewed Sebastian Best, Head of Sound and Fitting in the Research and Development Department of Signia. Having worked for the company for almost 15 years, Sebastian has a wealth of experience in hearing technology development and discusses how processing strategies help people with hearing loss.

Published on 29 November 2022

By Liz Pusey

BIHIMA: Can you tell us about what you do at Signia?

Sebastian Best (SB): For many of us, hearing well is part of day-to-day life, and people with hearing loss shouldn’t have to take extra measures to have the same experience. In my team we work with the belief that technology should bring people back into a situation where they can experience the world and we develop products and systems to support that.

I’ve been working in this field for 15 years and my fascination with hearing technology does not stop, because there is still so much for us to learn about our own abilities and how that can be translated into technology development. If you have any degree of hearing loss you are affected every day and it’s really motivating to know that the work we do makes a difference to someone’s life.

 

BIHIMA: What do you think are the common issues people experience wearing hearing instruments in noisy environments?

SB: For most people, this is the most demanding situation to manage with hearing loss, and I would say it is what prompts many to buy a hearing instrument or see a healthcare professional, because in a noisy situation the difficulties of hearing loss are magnified. As an industry, we have already made a lot of improvements, but in a limited number of scenarios. Previously, we might have looked at a situation, identified the noise source and the direction of sound, and tried to improve how you hear it.

Nowadays, we know people still struggle because there are so many different noise environments to consider. We must focus technology development on the chaotic acoustic environment, because that is the most difficult for people to experience. Think about being in a city where there is traffic noise and different sounds all around you; perhaps you are walking with someone who is talking to you, but they step behind you because the street is busy and then come back alongside again. This type of scenario is the most relevant for us because it is so complex: the noise and speech are coming from all directions and are moving all the time. Some signal-processing procedures might ordinarily work, but in this scenario they could fail because of the number of competing sounds.

We must then look beyond the noise environment at other components. Generally, sound provides an awareness of what is happening around us, but hearing sound does not mean we have understood it, so we need to enable a person to know where a sound is coming from and what it might mean. Someone might also want to be able to focus on a particular sound, like the conversation of the person you are with for example, and to be able to speak themselves. If you look at the auditory system of a human there are very complex processes in place to make all these things possible, so we must take that same holistic view and ensure we have considered every part of the puzzle.

 

ABOUT BIHIMA:

 

BIHIMA represents the hearing instrument manufacturers of Britain and Ireland, working in partnership with other professional, trade, regulatory and consumer organisations within the health care and charitable sectors. We raise consumer awareness about the latest hearing technology and aim to influence government and policy makers to improve the lives of people with hearing difficulties.

 

BIHIMA: How do you think your technology can assist people in dealing with these issues?

SB: There are different pillars to this – the first is the belief that the more we know about a scenario, the better we can support the processing. We must really understand and analyse a person’s needs and how they might behave in different situations for the technology to work.

Another pillar is knowing how to process different input sounds based on that knowledge. For example, we are super happy with the split processing we have on our Augmented Experience (AX) platform because people can hear sounds independently, which is especially important in chaotic situations.

This technology can use different strategies at the same time for sounds from different directions, giving the user the best experience and helping people shape the sound of their surroundings through direction-dependent processing. That means, for example, that a speech signal coming from one direction can be enhanced in clarity and precision, while other surrounding sounds can be made smoother and slightly attenuated. You could think of a sound engineer who is always assisting you with the best mix of the sounds around you.
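The direction-dependent idea Sebastian describes could be sketched, in very simplified form, as applying different gain and smoothing profiles to two streams of sound: one attributed to the focus direction and one to everything else. The two-stream split, the specific gains, and the moving-average smoothing below are illustrative assumptions for the sketch, not Signia's actual algorithm:

```python
import numpy as np

def direction_dependent_mix(front, surround, fs=16000,
                            front_gain_db=6.0, surround_gain_db=-6.0,
                            smooth_ms=5.0):
    """Toy two-stream 'split processing' mix.

    front:    samples attributed to the focus direction (e.g. a talker)
    surround: samples attributed to all other directions
    The gain values and the smoothing length are illustrative only.
    """
    front_gain = 10 ** (front_gain_db / 20)        # boost the focus stream
    surround_gain = 10 ** (surround_gain_db / 20)  # attenuate the rest
    # Light moving-average smoothing makes the background "smoother"
    n = max(1, int(fs * smooth_ms / 1000))
    kernel = np.ones(n) / n
    surround_smooth = np.convolve(surround, kernel, mode="same")
    return front_gain * front + surround_gain * surround_smooth

# Example: a 1 kHz 'speech' tone in front, white 'traffic' noise around
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 1000 * t)
noise = 0.1 * np.random.default_rng(0).standard_normal(fs)
out = direction_dependent_mix(speech, noise, fs)
```

A real hearing instrument would first have to estimate which direction each sound comes from (the hard part); this sketch only shows the per-direction treatment once that split exists.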

This is what we mean when we talk about the augmentation of sound, because we are enabling people to access different parts of the sound around them, by separating them in the processing.

Another important point: technology must also take care of the sound of one's own voice to enable comfortable conversations. Signia products have own-voice detection, so they know when the user is speaking, and this works particularly well with split processing. We can apply different settings to a person's voice to improve sensory processing, ensuring people can feel confident speaking as well as listening in noisy situations.

What we want is for people with hearing loss to be able to experience all types of sound in the same way as anyone else. We do have an app for the platform where someone can decide to focus on one particular sound direction if it's important to hear speech, for example. But if we really want to provide an effective solution for people with hearing loss, we must intelligently use data about users' hearing loss and their behaviours, actions, and preferences in varying environments. In this way we can create solutions that automatically update and refine the augmentation of each individual's hearing, much like AX does.


 

BIHIMA: What is a processing strategy, and how does it help someone with hearing loss?

SB: There are many aspects to processing strategies, and though we could talk at length about the details of specific elements like fast or slow compression, what I think is more important is how we use a processing strategy – a method of improving how you experience a sound – to give you the right support. People will have different levels of hearing loss and difficulties with different situations or types of sound. Some people might need help understanding a particular sound, and a directional microphone or noise reduction would help them; others can already cope with these sounds and need to improve their awareness of sound instead. Understanding the level of hearing loss and the level of need is vital to knowing what types of support, or processing strategies, they will need.
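The matching of need to support that Sebastian describes can be sketched as a simple rule-of-thumb mapping. The categories, thresholds, and support names below are illustrative assumptions for the sketch, not a clinical fitting protocol:

```python
def suggest_supports(loss_level, struggles_in_noise, needs_awareness):
    """Hypothetical mapping from assessed needs to candidate supports.

    loss_level:        e.g. "mild", "moderate", "severe" (illustrative labels)
    struggles_in_noise: user reports difficulty understanding speech in noise
    needs_awareness:    user wants better awareness of surrounding sound
    """
    supports = []
    if struggles_in_noise:
        # Help isolating a particular sound from competing noise
        supports += ["directional microphone", "noise reduction"]
    if needs_awareness:
        # The opposite need: broader access to the sound scene
        supports.append("omnidirectional awareness program")
    if loss_level == "severe":
        supports.append("stronger compression")
    return supports

# A user who mainly struggles in noisy restaurants:
print(suggest_supports("moderate", True, False))
```

The point of the sketch is the branching itself: two users with the same audiogram can end up with different processing strategies because their needs differ.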

Based on our experience of a great many studies and tests, we can use a combination of processing strategies that provide a starting point where we are confident that most people would see a benefit. As a person uses the instrument and understands where they see improvements or might still have trouble with environments or sounds, we can use this information to develop tweaks and strategies that will deliver further improvements in the future.

 

BIHIMA: Is there anything you think will drive further improvements in hearing technology in the future?

SB: The future is definitely in object-based thinking, which is an approach we already take. You must consider every different sound as an object and treat each of them specifically when developing processing strategies. We also need good tools to allow the right adjustments: you may fit a hearing instrument in a shop, but when the person goes to a restaurant they do not hear as well because the environment has changed, and we must be able to adjust our thinking based on these experiences.


 

I really believe in technologies like the Signia Assistant, where we empower the user to interact with our technology. It learns from every interaction and stores data on the performance of a hearing instrument and what adjustments have been made, and the healthcare professional can use this information to better understand that person’s needs and adjust settings accordingly. A study showed that 93% of Signia Assistant users see the system as a meaningful innovation that raises their satisfaction with their hearing instruments in difficult listening situations.

With the introduction of intelligent hearing systems, the foundation of what the hearing aid does will go well beyond the hearing loss. Data about user behaviours, preferences and environments opens the solution space and will allow HCPs to elevate the degree of personalisation during fitting and fine-tuning visits. The consumer will have the opportunity to continue this journey in real life, and through an ecosystem of data exchange consumers will experience better and augmented hearing, guided by their own preferences and the professional assistance of the HCP. This way of working is the key to ensuring we can meet the needs of every user as an individual and have a positive impact on their experience of the world.

Source: Audio Infos UK issue 151 November-December 2022

Liz Pusey