Have you wondered why you can…
- Hear a pin drop, but cannot understand your spouse?
- Hear a watch tick, but cannot understand the waiter?
- Hear a whisper, but cannot understand conversation at a cocktail party?
- ‘Pass’ a hearing test but have lots of trouble listening to others in noise?
What most people do not recognize is that hearing and listening are not the same; they are quite different. Hearing is ultimately simple: it is nothing more than perceiving or detecting sound.
To measure hearing, we plot the quietest sounds you can detect across a spectrum of pitches/frequencies on a chart called an audiogram. Many people believe the audiogram tells us more than it does. In reality, an audiogram is a simple test of hearing: loudness versus pitch.
Hearing cannot be learned; it is essentially a detection task. To dig deeper into this, imagine plotting the lowest amount of light required to detect colors across the visual spectrum, from red/orange to blue/violet. Would that tell us how well you can spell, read, or process visual information such as the three-dimensional world we live in? Of course not. Measuring the threshold of visible light is like measuring the threshold of sound. These measures are interesting, and they tell us a little about hearing and vision, but they tell us precious little about how the brain acts on what is detected at those thresholds.
Listening is vastly more complicated than hearing, and it is vastly more difficult to assess and measure. Listening depends on hearing (one must hear before one can listen), thinking, remembering, vocabulary, context, psychological well-being, background noise, cognitive ability, and central integration (vision, sound, and other sensory information are integrated within the thalamus). As such, listening is a ‘whole brain’ event and, importantly, listening is a learned skill. People can learn to listen more effectively through various brain-training protocols and by using signal-enhancing equipment. Learning to listen effectively is often the essence of aural rehabilitation programs for children and adults with auditory processing disorders.
Simple measures of detection are just that: simple, basic, and informative. Temperature, heart rate, blood pressure, height, and weight are good measures, but they are not a complete picture of your physical well-being; they are good starting points that allow us to better understand your overall health status.
Hearing, Listening and the Animal Kingdom
Most (nearly all) dogs, cats, whales, monkeys, gorillas, and many other animals hear better than humans: they can detect much quieter sounds, and most mammals can hear octaves higher, and many tones lower, than humans can. Compared to most other mammals, humans with normal hearing do not hear very well.
Why are humans at the top of the food chain?
Humans are at the top of the food chain partially due to our ability to listen: to assign meaning to sounds, particularly speech sounds. Humans have developed thousands of languages which assign and convey meaning to speech sounds. It is uniquely human to have sounds (words) which indicate past, present, or future. It is uniquely human to understand and discuss weather, science, sociology, math, biology, sex, drugs, rock’n’roll, physics, airplanes, aeronautics, the solar system, religion, hardware, software, seas, oceans, lakes, and everything else. We can speak about real and unreal things, such as the content of our dreams, desires, and hopes. All of these attributes are exclusively human traits. Many other animals hear better than we do, but we are the only ones with vast, intricate languages which allow us to share and convey deep thoughts and meanings.
Human Hearing and Speech Sounds
Fortunately, the human speech production mechanism (the oral cavity, throat, lips, tongue, larynx…) creates sounds from about 30 Hz to about 12,000 Hz (more or less), and the human ear can perceive sounds from about 20 Hz to about 20,000 Hz. In typical humans, the sounds we produce and the sounds we perceive/hear are more or less in sync, yet (as noted above) they are limited. Most of us perceive about 9 octaves (specifically: 20-40 Hz, 40-80 Hz, 80-160 Hz, 160-320 Hz, 320-640 Hz, 640-1280 Hz, 1280-2560 Hz, 2560-5120 Hz, and 5120-10,240 Hz).
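The nine octaves listed above follow a simple rule: each octave doubles in frequency. A minimal sketch (illustrative only; the 20 Hz starting point and band edges are the approximate figures quoted above, not clinical values) shows how the bands are generated:

```python
def octave_bands(start_hz=20, n_octaves=9):
    """Return (low, high) frequency edges for successive octave bands.

    Each octave spans a doubling of frequency, so nine doublings
    from ~20 Hz cover most of the range of human hearing.
    """
    bands = []
    low = start_hz
    for _ in range(n_octaves):
        high = low * 2  # one octave up = double the frequency
        bands.append((low, high))
        low = high
    return bands

for low, high in octave_bands():
    print(f"{low}-{high} Hz")
# Prints the nine bands from 20-40 Hz up to 5120-10240 Hz
```

This doubling is also why the higher octaves cover such wide frequency spans: the last octave alone spans more than 5,000 Hz.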
In English, and most Western languages, the vowels are the most powerful sounds, and they are produced across the lowest frequencies. The consonants are found in the mid-to-higher frequencies. An old rule of thumb for speech sounds is that 80 percent of the loudness (power) comes from vowels, but 80 percent of the CLARITY comes from the consonants. This is why making sounds louder does not necessarily make them clearer! Unfortunately, most age-related and noise-related hearing loss initially impacts the higher frequencies (from roughly 3000 to 6000 Hz). As a result, people with the most common hearing losses have a difficult time hearing clearly: they often cannot hear the softer consonants as well as the louder vowels, so speech sounds like mumbling. People with age-related hearing loss (presbycusis) report they can hear but cannot understand. This is the most common complaint hearing care professionals hear from patients every day.
When we hear a pin drop or a watch tick, or when we detect someone clipping a fingernail, those “click” sounds have minimal loudness and a very narrow spectral/pitch bandwidth. They hardly make a sound at all, and yet we can sometimes detect them because they are unique; they are not typical of background noise (like a cocktail party or other speech sounds), and if we are asked to focus on these barely audible sounds, sometimes we can. However, these tiny “click” sounds are not speech, they do not correlate with audiograms, and they tell us nothing about the ability to understand the speech sounds around us. That is why watch-tick and whisper tests are essentially meaningless for understanding or diagnosing hearing and/or listening ability.
Going back to our visual analogy: some people with very low vision can detect a flashlight, a strobe light, or other visual stimuli, which does not contradict their low vision. The fact that most adults can see stars many light-years away does not mean they can read, or focus properly on, a standard newspaper held at normal reading distance. They can see, but they cannot read the letters. Most adults over age 50 wear glasses to see more clearly, not just to see more.
Further, the click sounds mentioned above do not convey even the slightest amount of the information an audiogram reveals, and, as stated above, audiograms cannot and do not describe listening ability in the real world.
People do not live in a world where watches, whispers, or nails being clipped are the primary sounds they must understand. Repeating an uncalibrated whisper from 12 inches, 36 inches, or across a room tells us just about nothing. Conversely, if one cannot hear a whisper from 3 feet away, what would that tell us? Just about nothing; there are too many intervening and uncontrolled variables to make a clear or concise statement. Consider trying to relate a small bit of information, like shoe size, to how long it takes someone to hike a mile, or how much they weigh. Shoe size does not convey the information needed to answer those questions. Likewise, knowing someone’s eye color does not indicate their hair color, nor can we tell from their eye color how well they read.
In brief, the measure we obtain must meaningfully correspond to the question being asked and the answer we seek. Watch ticks, pins dropping, nails being clipped, and whispers create sound, but they are not meaningful, calibrated, or reproducible measures of our ability to hear, listen, and understand speech.
Hearing is the first step in listening. Listening is the end-goal.
Douglas L. Beck, Au.D.
Doctor of Audiology
Excellence in Audiology Contributor