Children generally hear high-frequency sounds better than adults do, but they are not as good as adults at filtering out background noise while listening to speech. Now, scientists have found that both groups can detect and understand speech in noisy settings better when they can hear the various components of a talker's voice, including extended high frequencies.
In a paper published April 8 in Hearing Research, a trio of researchers from the University of Illinois at Urbana-Champaign studied how well children can pick out a voice from overlapping conversations with and without extended high-frequency sounds, which carry information relevant to listeners. Healthy humans hear frequencies between 20 hertz and 20,000 hertz, or 20 kilohertz, though the upper limit of this range declines with age; anything above 8 kilohertz is classified as extended high-frequency hearing.
Though children hear higher-pitched sounds better than adults, they have more trouble detecting and differentiating speech in noisy environments because the central auditory-processing system in their brains is still maturing, Mary Flaherty, first author of the paper and an assistant professor at UIUC, told The Academic Times.
"Traditionally, we haven't cared about high frequencies, but, nowadays, we're starting to realize that those high frequencies do offer information for the speech signal," Flaherty said. "Maybe you don't need it to identify a sound, but if you have it, it makes it a whole lot easier, especially in a noisy environment."
Flaherty explained that a person's voice can be thought of as a musical chord made up of several different notes. A voice comprises distinct features, such as pitch, vocal-tract length and frequency content. When a person speaks, we don't necessarily hear each individual component or cue in their voice; we perceive all the parts as one speech signal, just as notes come together to form a chord.
Extended high-frequency information in a speech signal helps a listener distinguish between sounds such as "s" and "t." The listener benefits most from it when hearing someone speak in person, while much of this high-frequency information is likely to be lost in reproduced audio, such as over the phone or in videoconferencing. In these cases, humans can usually comprehend speech from context clues, even without the full speech signal.
"The traditional view has been that [extended high-frequency] hearing holds little utility for speech perception," the authors said in the paper. "[But] it has been demonstrated that EHF hearing for adults with normal hearing contributes to speech localization, judgments of speech and voice quality, discrimination of a talker's head orientation, and speech recognition in noise."
"Studies of [extended high-frequency] hearing in children are sparse and are generally focused on pure tone thresholds in quiet," the authors continued, noting that there has been little research specifically examining the value of extended high frequencies for speech perception in children.
Senior author Brian B. Monson published a 2019 study demonstrating that access to extended high frequencies improved speech recognition in noisy environments for adults in their twenties with normal hearing, suggesting that such sounds could help listeners focus on one speaker among many overlapping conversations, thereby addressing what is known as "the cocktail party problem." The new study replicated this design with children to assess how useful extended high-frequency information is for their speech recognition. Both studies also evaluated whether the direction the speakers were facing, or their head orientation, had any effect.
The authors hypothesized that because children are able to hear extended high frequencies better than adults, children would benefit more from having extended high frequencies present in speech recordings and be able to detect speech better than the adults in Monson's 2019 study. Yet this was not the case: Children and adults benefited equally from the presence of extended high frequencies, even though the adults could not hear them as well as the children.
The sample for the current study included 39 children between the ages of 5 and 17, all of whom had normal hearing. They performed the same task as the adults, which involved sitting in a sound booth in front of one audio speaker and listening to recordings of multiple people talking over each other. Two background voices held a conversation while a third, primary voice spoke. The participants were instructed to listen to the primary voice and repeat what that talker said.
The task started out simply, with the primary voice at a louder volume than the background voices and containing full extended high-frequency information. But throughout the sessions, the volume and frequency content of the recordings were manipulated to make it more difficult to understand what the primary voice was saying. The primary voice was also recorded facing the microphone directly, while the background voices were recorded facing at a slight angle away from the microphone, to mimic the sounds of a real-life setting, such as a crowded restaurant. If a talker is angled away from the listener, less of the extra energy from the extended high frequencies will be available to the listener.
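The paper does not spell out the signal processing used, but the core manipulation of removing extended high frequencies amounts to low-pass filtering speech at the 8 kHz boundary. The sketch below illustrates the idea with an ideal FFT-based low-pass filter on a hypothetical test signal (two pure tones standing in for "core" and extended high-frequency energy); the cutoff value and signal are illustrative assumptions, not the study's actual method.

```python
import numpy as np

fs = 44100  # sampling rate in Hz, comfortably above twice the 20 kHz hearing limit
t = np.arange(fs) / fs  # one second of samples

# Hypothetical test signal: a 1 kHz tone (core speech band) plus a
# 12 kHz tone standing in for extended high-frequency (EHF) energy.
signal = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 12000 * t)

def remove_ehf(x, fs, cutoff=8000.0):
    """Zero out all spectral content above `cutoff` Hz (ideal low-pass filter)."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spectrum[freqs > cutoff] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

def ehf_fraction(x, fs, cutoff=8000.0):
    """Fraction of total spectral energy above `cutoff` Hz."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return power[freqs > cutoff].sum() / power.sum()

filtered = remove_ehf(signal, fs)

print(f"EHF energy share before filtering: {ehf_fraction(signal, fs):.2f}")
print(f"EHF energy share after filtering:  {ehf_fraction(filtered, fs):.2f}")
```

Applied to real speech recordings, the same cutoff would strip exactly the band the researchers varied, leaving the conventional speech band intact.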
The researchers found that while children were more sensitive to extended high frequencies than adults, they were not more affected than adults by the presence or absence of high frequencies. "We know that they didn't show a much larger effect than adults, so they're using the cue about similarly," Flaherty said.
There was no indication that children's greater sensitivity to extended high frequencies translated into a greater ability to use them in contexts with competing talkers, and there was also no significant effect of the angled head orientation of the background voices on children's performance relative to that of adults. But the sample of children did show a gradual improvement in speech recognition with age, and high-frequency information was confirmed to be a key factor in kids' speech perception.
"The most important takeaway is that extended high-frequency energy is important for children's speech understanding," Flaherty said. "So if extended high-frequency energy is not present, children perform poorly, meaning they can't understand speech as well in noise. In this context we need to do everything we can to make children's speech perception in these types of environments easier."
This has significant implications for the way hearing aids are constructed, Flaherty said, because current models cut off speech at a certain frequency and do not typically represent extended high frequencies. But because this information is proving useful for both adults and children, it may be important to include the full range of frequencies.
People have also struggled with speech detection in the last year while wearing face masks during the COVID-19 pandemic. Flaherty explained that the masks muffle our speech, effectively removing the information embedded in high-frequency sound; this could have a big impact on children, especially when they are learning in a classroom setting.
In the future, Flaherty and Monson plan to continue this research by studying whether children with superior high-frequency hearing perform better on these listening tasks than their peers. They are currently studying children's speech perception while talkers wear face masks, to establish whether those masks negatively affect performance.
"I think it's a mix of both the lack of visual cues and these extra acoustic cues that would make our life easier in speech understanding [with masks]," Flaherty said. "Cutting that out can be problematic, especially when people have hearing loss."
The study, "Extended high-frequency hearing and head orientation cues benefit children during speech-in-speech recognition," published April 8 in Hearing Research, was authored by Mary Flaherty, Kelsey Libert and Brian B. Monson, all of the University of Illinois at Urbana-Champaign.