TESTING IN WONDERLAND!
There is a bigger big picture for hearing testing, but you need a cool Caterpillar or Cheshire Cat to guide you through the paracosm and towards real, desirable verities of sound. We found just the Cat, down a rabbit hole and with one foot in audiology.
So, did you hear the one about the person with hearing loss who lost their job at NASA?
“I misunderstood when they said it was lunchtime.”
The joke will hit home with anyone who has any significant hearing difficulties, affirms Randy Abrams, whose own loss is a challenge he meets with hearing aids and an imaginative intellect honed as Senior Security Analyst Emeritus for SecureIQLab, Texas, and, over decades, as a speaker and delegate at computer security conferences.

Randy Abrams
Thinking beyond what there is and into the far-flung territory of what might be, Randy posits a testing range that goes beyond the merely diagnostic and that could offer users measurement of the real-world impact of their loss, as well as clues toward a richer set of results by which hearing care professionals and manufacturers could design more tailored solutions.
Of course, there is always a “but”: practicality. The rare counter-“but” in this case is that the practicalities, such as the economics of carrying out a matrix of 729, or even thousands, of different tests, may have a powerful fixer coming over the horizon: AI.
So, why does a computer security whizz get interested in hearing testing in the first place?
Experiencing partial sensory loss in his fingertips because of multiple sclerosis (MS), Randy was faced with the radical decline of his typing speed. He is a quick thinker, and his field requires great efficiency, so his frustration inevitably increased; as if he needed this on top of a tremor, ADHD, and hearing loss! So he applied in the US for Social Security Disability Insurance (SSDI).
“When the state ordered a hearing exam as part of the SSDI qualification process, I told the ENT that I have a really hard time with foreign accents. His response was that everyone does,” explained Randy.
“That pissed me off because it was dismissive, and I knew that problems with accents were more pronounced for the hearing impaired. Additionally, the ENT’s report to the state said my hearing was well-corrected. Of course, I knew that was wrong. That is what prompted my limited research into testing, which also led to me learning more about hearing loss.”
Getting his own QuickSIN test done – this is not one the state orders – Randy also learned about HINT, SIN, and a few other tests.
“The results were that for me to understand speech in noise, my right ear requires speech to be 11 dB louder than the background noise and, for my left ear, 26 dB louder. In some environments a person has to talk as loud as a lawnmower for me to understand what they’re saying. For people with normal hearing, the required signal-to-noise ratio is typically 1 to 2 dB.
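The lawnmower comparison is simple arithmetic: to be understood, speech has to exceed the background noise by the listener’s required signal-to-noise ratio. A back-of-envelope sketch, using the QuickSIN figures quoted above; the 65 dB busy-restaurant noise level is an assumed illustration, not from the article:

```python
# Required speech level = background noise level + required SNR.
snr_needed_db = {
    "normal hearing": 2,      # typical 1-2 dB requirement
    "Randy, right ear": 11,   # QuickSIN results quoted above
    "Randy, left ear": 26,
}
noise_db = 65  # busy-restaurant background (assumed figure)

for listener, snr in snr_needed_db.items():
    print(f"{listener}: speech must reach about {noise_db + snr} dB")
```

For the left ear that works out to roughly 91 dB, which is indeed in lawnmower territory.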
Problems with hearing accents
“Foreign and even domestic accents play a huge role in the real-world impact for those with normal hearing and, even more so, for the hearing impaired.”
In addition to learning about speech-in-noise tests, Randy got confirmation from qualified sources that people with hearing loss have more difficulty with accents and, therefore, use more cognitive resources simply to understand speech. This, in turn, is believed to leave fewer cognitive resources available for comprehending the information itself.
“The ENT probably reached the conclusion that my hearing is well corrected by using the extraordinarily inadequate testing that is currently available, combined with a lack of situational awareness. In my field I work with more people who are not native English speakers than those who are. This is a situational impact that testing alone cannot address.”
The failings of current hearing testing
Current hearing tests leave a lot to be desired. Typically, the real-world impact of hearing loss is not identified due to deficiencies in testing. As we have highlighted in articles on these pages, even tests like the QuickSIN, HINT, and other speech-in-noise tests that better approximate real-world impact are underused and highly limited in scope.
“There were a couple of funny things in the two QuickSIN tests I’ve taken. In the first one, at one point when I had to repeat what I heard, I told the audiologist that I knew what I heard wasn’t what was said. I heard ‘drop acid’. We both had a good laugh,” recounted Randy.
“At the next one I told the audiologist that I was sure that what I heard wasn’t what was said. I heard ‘sex’. The audiologist told me that wasn’t the first time that week that someone had said the same thing. When I heard ‘f*#k’ I simply said, ‘I’m not repeating that one.’”
Standard hearing tests are primarily diagnostic and fail to measure real-world impairment in the hearing-impaired community, Randy steadfastly maintains. While these tests are used to help fit hearing aids for maximum benefit, and to identify hearing loss and its causes, they fail to offer much in terms of identifying real-world impact.
Audiologists may forgo these tests – such as HINT and QuickSIN – because they mean an aggregate hit on both profitability and patient wait times. Nevertheless, some audiologists recognise their importance and opt to include them.
Common tests such as pure-tone audiometry, speech discrimination testing, and conduction tests also fail to identify conditions like diplacusis (double-hearing). This is a significant oversight, as diplacusis can affect professionals differently depending on their field of work, argues Randy. “For instance, musicians may suffer more pronounced consequences than others. Despite being simple and cost-effective to diagnose, requiring only a basic pitch comparison test, it remains largely untested, making treatment virtually impossible.”
“A dismissive ENT was why I did my research into hearing loss. I love the guy. If I did it all over again I wouldn’t want him to change a thing. The research was valuable for me personally to far better understand how and why certain aspects of my life are impacted by hearing loss, regardless of the use in approval for my SSDI claim,” sums up Randy.
The fantasy of testing if there were no limitations
Suspend for a moment all thoughts of the restrictions; we are in Wonderland. The restrictions don’t go away, but let them vanish temporarily, long enough to consider whether more comprehensive testing could yield a quantifiable hearing impairment score based on the combined results of a fantastic battery of tests, perhaps combining results for cadence impact, volume, pitch, speech discrimination, accent, and more. The outcome would be a comprehensive impact score.
In this ideal world, hearing tests would provide a comprehensive understanding of an individual’s auditory processing capabilities. Testing would extend beyond simply detecting hearing loss to assessing real-world listening difficulties, including issues like diplacusis, speech-in-noise comprehension, and cognitive auditory processing. It would also account for factors such as pitch, timbre, enunciation, cadence, accent, and context, all of which influence real-world hearing challenges.
Q. What would a limitless, or near-limitless testing system involve?
A. Multiple speakers at different volumes and frequencies to create an accurate simulation of real-world listening. It would also incorporate contextual comprehension, evaluating how well individuals can infer missing speech elements based on surrounding words.
We are now talking about the complexity of testing each variable independently. The benefits are mind-blowing. Helping in areas such as disability determinations and career impact assessments would be just two useful applications. Think of what it would provide in terms of psychological relief to patients by validating their struggles and explaining why certain listening situations are more challenging than others.
The parts of this complexity have their own sparkle. Take enunciation, a “biggie”, says Randy. People with normal hearing know how hard it is to understand a person with poor enunciation. For one with hearing loss, the impact of enunciation on speech intelligibility is even greater. Enunciation is not part of standard testing, yet it is a huge factor in the magnitude of impact for the hearing impaired.
“Cadence is also huge,” stresses our Cheshire Cat. How fast does someone speak? In common testing, cadence is designed to be good. Both sentences and multi-syllabic words are spoken at specific cadences that seem to be fairly well optimised for speech understanding. “Despite the enormous impact that cadence plays in speech comprehension for the hearing impaired, it really isn’t tested,” claims Randy.
Then there is context, also “huge”. It really doesn’t matter how poor your hearing is: if you can interpolate from context, the impact of hearing loss in that situation is insignificant.
“Single word tests are not optimised for measuring hearing impairment because context is missing. Hearing impairment is more than just what you didn’t hear; it is affected by context. Context allows interpolation. If I missed a word you said, but due to context I can figure it out, fundamentally the impact of not hearing the word can be zero. Of course, sometimes two different words can be contextually correct, but hearing the wrong word may destroy contextual interpolation. Honestly, I’m not sure if testing involving sentences is factoring in contextual angles, but the impact of context cannot be overstated,” he adds.
Seven key areas impacting on speech intelligibility where testing fears to tread
1. Volume: Traditional hearing tests, such as pure-tone audiometry, assess an individual’s ability to hear sounds at various volumes and pitches. These tests measure the softest sounds a person can detect at different frequencies, which correspond to pitch. However, they may not fully capture real-world listening challenges.
Randy Abrams: “Of all of these attributes, volume is the most effectively tested, but still falls short of attempting to quantify impairment. The reason for this is that volume does not exist all by itself. All of the attributes of hearing are inter-related. Many people lower the volume of their voices as they reach the end of a sentence. I would expect that at some point in the future hearing aids will be able to compensate for this. But in order to best do so it’s going to take data that currently isn’t collected to best create the proper algorithms.”
2. Pitch: The pitches of different people’s voices are totally ignored in common testing. One, or maybe two, voices at most are used in common testing. For those with high frequency hearing loss, a higher voice, such as a female voice, may be more difficult to understand than a deeper voice. Pitches matter.
3. Timbre: Timbre refers to the quality or colour of a sound that distinguishes different types of sound production, such as voices or musical instruments. Standard hearing assessments do not typically evaluate a person’s ability to discern timbre, which can be crucial in understanding speech nuances and musical appreciation.
4. & 5. Enunciation and Cadence: Enunciation involves the clarity of speech sounds, while cadence refers to the rhythmic flow of speech. Some speech-in-noise tests aim to assess how well individuals understand speech with varying enunciation and cadence in noisy environments. However, these aspects are not comprehensively evaluated in standard hearing tests.
6. Accent: The ability to understand different accents can be challenging for individuals with hearing impairment. Standard hearing tests do not typically assess comprehension across various accents, which can be a significant factor in real-world communication.
7. Context: Understanding speech in context involves cognitive processing and the ability to use contextual clues to fill in gaps, especially in challenging listening environments. While some advanced tests attempt to simulate real-world scenarios, standard assessments may not fully capture this aspect.
How many tests is that?
Time for some maths before it all runs away from us: if we want to test people speaking at three different frequencies and three different levels of volume, we have a test matrix of nine tests. Given equal time spent on each test, you’ve gone from two minutes of testing time to 18 minutes. Add three timbres and the nine becomes 27 tests. Add three different types of enunciation to the matrix and you now have 81 tests. Add three different cadences and we go from 81 tests to 243 tests.
And here we come to accents. Foreign and even domestic accents play a huge role in the real-world impact for those with normal hearing, but the impact can be far more pronounced for the hearing impaired. It wasn’t merely because of its Edinburgh Scots dialect that the film Trainspotting played in the USA with subtitles; the accent was too difficult for normal-hearing audiences. So, how many types of accents do we want in the test matrix? “Let’s say three,” says Randy, “because I can do that math.” Our test matrix is now at 729 discrete tests.
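The growth above is simply repeated multiplication: every attribute tested at three levels multiplies the matrix by three. A quick sketch, using the article’s two-minutes-per-test figure and six three-level attributes:

```python
# Each speech attribute tested at three levels multiplies the matrix by 3.
MINUTES_PER_TEST = 2  # rough figure from the article

attributes = ["frequency", "volume", "timbre", "enunciation", "cadence", "accent"]

cases = 1
for name in attributes:
    cases *= 3
    print(f"+ {name:<12} {cases:>4} tests  ({cases * MINUTES_PER_TEST} minutes)")
```

The final row lands on 729 tests, which at two minutes apiece is 1,458 minutes, more than a full day in the test booth.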
What Are the Limitations to Testing?
Back in the swampland of reality, several obstacles make it difficult to implement such comprehensive testing:
Time constraints: Testing for a broad range of factors, such as pitch perception across different frequencies, would require a substantial increase in testing time. A simple two-minute speech test could expand to over an hour when incorporating variables like different speaking cadences, accents, and enunciations.
“As my current audiologist explained to me,” says Randy, “even though the test only takes a couple of minutes, it adds up and can affect how many patients a provider can see in a day. This isn’t only about profits; it affects how long it takes for a patient to see a provider. Still, my audiologist understood that the test is too important to eliminate.”
Cost and Accessibility: More comprehensive testing would demand additional resources and specialised equipment, making it cost-prohibitive for many clinics.
Clinical Adoption: Many hearing professionals already neglect best practices such as real-ear measurement and speech-in-noise testing. Unless audiology as a field were to embrace more thorough testing protocols, widespread adoption is not going to happen.
Data Interpretation: Gathering extensive auditory data is one thing, but making sense of it and determining actionable outcomes is another challenge entirely.
Let’s count again: how many tests?
Volume, pitch, timbre, enunciation, cadence, accent, and context add up to seven attributes of speech that impact intelligibility. “A relatively small test matrix could easily result in over 10,000 test cases,” Randy Abrams calculates. “A comprehensive matrix could run into the millions of test cases. While millions of test cases are not feasible, it doesn’t mean that progress can’t be made.”
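Again, this is just exponentiation. A sketch over the seven attributes; the four-levels-per-attribute case is an assumption added here to show how “over 10,000” arises from even a modest matrix:

```python
# Full factorial test matrix over the seven speech attributes:
# volume, pitch, timbre, enunciation, cadence, accent, context.
ATTRIBUTES = 7

print(3 ** ATTRIBUTES)  # 2187 cases at three levels per attribute
print(4 ** ATTRIBUTES)  # 16384 cases at four levels: already over 10,000
```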
Huh? How could that happen?
“Despite these barriers, advancements in AI offer a potential pathway to overcoming these limitations. AI could play a crucial role in reducing the test matrix by identifying patterns and correlations among these factors.”
A practical approach to testing hearing in real-world situations using AI
AI has the potential to revolutionise hearing tests by analysing vast amounts of auditory data and identifying key patterns. Instead of requiring a vast test matrix with millions of variations, AI could streamline the process by determining which combinations of test factors are most critical.
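Randy’s home turf, software security, already has a standard trick for taming combinatorial test matrices: pairwise (all-pairs) testing, which keeps only enough cases to exercise every combination of every two attributes. AI-driven selection could be far more sophisticated, but even this simple greedy sketch (the attribute levels are invented labels, purely illustrative) shows how far the 2,187-case full factorial can shrink:

```python
from itertools import combinations, product

# Seven speech attributes, three levels each (labels are illustrative).
attributes = {
    "volume": ["soft", "medium", "loud"],
    "pitch": ["low", "mid", "high"],
    "timbre": ["dull", "neutral", "bright"],
    "enunciation": ["poor", "average", "crisp"],
    "cadence": ["slow", "typical", "fast"],
    "accent": ["local", "regional", "foreign"],
    "context": ["none", "partial", "rich"],
}
names = list(attributes)

# Every pair of attributes, in every combination of their levels, must
# appear in at least one test case ("pairwise coverage").
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(attributes[a], attributes[b]):
        uncovered.add((a, va, b, vb))

tests = []
while uncovered:
    # Greedily pick the test case covering the most still-uncovered pairs.
    best, best_count = None, -1
    for candidate in product(*attributes.values()):
        case = dict(zip(names, candidate))
        count = sum(1 for (a, va, b, vb) in uncovered
                    if case[a] == va and case[b] == vb)
        if count > best_count:
            best, best_count = case, count
    tests.append(best)
    uncovered -= {(a, va, b, vb) for (a, va, b, vb) in uncovered
                  if best[a] == va and best[b] == vb}

print(f"full factorial: {3 ** 7} cases, pairwise: {len(tests)} cases")
```

The greedy cover needs only a couple of dozen cases instead of thousands, the kind of reduction that would have to happen before real-world testing could fit inside a clinic appointment.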
Leveraging AI could mean hearing assessments that simulate real-world environments, evaluating how individuals process speech amid background noise, reverberation, and multiple voices.
“In some environments a person has to talk as loud as a lawnmower for me to understand what they’re saying”.
AI could also incorporate cognitive and neural processing assessments to differentiate between peripheral hearing loss and central auditory processing difficulties.
And it could customise tests to patients’ specific listening environments, ensuring more personalised hearing solutions.
Furthermore, AI could enable patients to perform some hearing assessments at home. By shifting basic tests outside of the clinic, audiologists could collect richer data while mitigating economic and time constraints. AI-driven results could then be analysed by professionals, ensuring that clinical expertise remains integral to treatment decisions.
Unblurring the diplacusis situation, AI could assist in detecting and managing pitch discrepancies by analysing frequency perception and applying real-time audio corrections via hearing aids. The ability to detect interaural pitch differences could lead to targeted treatments for musicians and others significantly affected by diplacusis.
While comprehensive testing remains infeasible today, our guide believes AI provides a promising solution for making real-world hearing assessments more practical, actionable, and cost-effective. With proper research and industry collaboration, AI-driven tests could significantly enhance diagnostic capabilities, benefiting both patients and hearing professionals alike.
A holistic-sized real-world gap to fill
While traditional hearing tests address certain elements like volume and pitch, they lack the scope and detail to measure the real-world auditory experiences involving timbre, enunciation, cadence, accent, and context. This gap highlights the need for more holistic approaches in hearing assessments to better understand and address individual hearing challenges.
But if the will is not there, on the part of the professional bodies driving audiology or the manufacturers of hearing aids and testing equipment, the gap is not going to be filled, and truly personalised treatment will remain in Wonderland.