- Published on 30 March 2017
In the second half of the 19th century, a variety of audiometers were invented. These early audiometers were known as ‘induction-coil audiometers’ – their invention followed the development of the induction coil in 1849 and audio transducers in 1876.
In 1885, Arthur Hartmann designed an ‘Auditory Chart’, which included a representation of left- and right-ear tuning fork results. In 1899, Carl Seashore introduced the audiometer as an instrument to measure the ‘keenness of hearing’, whether in the laboratory, schoolroom, or office of the psychologist or aurist. The instrument operated on a battery and presented a tone or a click; it had an attenuator with a 40-step scale. Max Wien conceived the concept of a frequency versus sensitivity (amplitude) audiogram plot of human hearing sensitivity in 1903.
Since 1919, vacuum tubes have been used in electronic audio devices. In 1922, otolaryngologist Edmund Fowler and physicists Harvey Fletcher and Robert Wegel were the first to plot frequency at octave intervals along the abscissa and intensity, as degree of hearing loss, downward along the ordinate.
With further technological advances, bone conduction testing capabilities became a standard component of audiometers. In 1967, Sohmer and Feinmesser were the first to publish auditory brainstem responses (ABRs) recorded with surface electrodes in humans, which showed that cochlear potentials could be obtained non-invasively. In 1978, David Kemp reported that sound energy produced by the ear itself could be detected in the ear canal – the phenomenon now known as otoacoustic emissions (OAEs). The first commercial system for detecting and measuring OAEs was produced in 1988.
The etymology of audiometry
The word ‘audiometry’ combines the Latin verb ‘audire’ (to hear) with the Greek ‘metria’ (measurement). Audiometric tests determine a person’s hearing levels with the help of an audiometer, but also measure the ability to discriminate between different sound intensities, recognize pitch, or distinguish speech from background noise. Acoustic reflex and otoacoustic emissions may also be measured. Results of audiometric tests are used to diagnose hearing loss or diseases of the ear.
Subjective or objective?
Subjective audiometry: Subjective audiometry requires the cooperation of the subject and relies upon subjective responses, which may be both qualitative and quantitative. It includes various kinds of testing, such as differential testing, pure tone audiometry, Threshold Equalizing Noise (TEN) tests, Masking Level Difference (MLD) tests, Psychoacoustic (or Psychophysical) Tuning Curve tests, Békésy audiometry and speech audiometry. Speech audiometry, a diagnostic hearing test designed to assess word or speech recognition, has become a fundamental tool in hearing-loss assessment.
Objective audiometry: Objective audiometry is based on physical, acoustic or electrophysiologic measurements and does not depend on the cooperation or subjective responses of the subject. Examples of objective audiometry are caloric stimulation/reflex tests, electronystagmography and acoustic immittance audiometry. Immittance audiometry evaluates middle ear function by static immittance, tympanometry, and the measurement of acoustic reflex threshold sensitivity. Other examples of objective audiometry are evoked potential audiometry (cortical auditory evoked potential (CAEP), auditory brainstem response (ABR) and auditory steady-state response (ASSR) audiometry), otoacoustic emission audiometry and in situ audiometry.