Infants who have trouble distinguishing between native and nonnative speech may be more likely to develop speech-language disorders or to face challenges with grammar and vocabulary later in life, according to new research from scientists at Vanderbilt University and the University of Washington.
The findings, published March 31 in Developmental Cognitive Neuroscience, identified infants' sensitivity to speech differences at 11 months old and followed up with those children when they were six years old, an age at which children often develop more complex grammar skills. The researchers relied on magnetoencephalography, a technique that measures the magnetic fields generated by electrical activity in the brain, to pinpoint the degree to which infants responded to familiar or unfamiliar speech sounds.
The infants repeatedly heard a fictitious word made up of consonants and a short vowel sound, meant to mimic a common sound in English speech. This familiar-sounding word was interspersed with another fictitious word, one that paired consonants with a long vowel sound not found in English. Unlike Finnish or Japanese, English does not use vowel length to differentiate between words. This contrast helped the researchers isolate how infants' brains perceive familiar English syllables as compared with foreign ones.
Infants who were better able to distinguish between native and nonnative sounds performed better in evaluations with speech-language pathologists at age six. An analysis of activity in the infants' prefrontal cortices during the task predicted with more than 86% accuracy whether a child would go on to show symptoms of a speech-language disorder.
The results appear to line up with the "sensitive period" theory, which posits that infants are particularly attuned to phonetic sounds in their environment between the ages of six and twelve months, and that this sensitivity could have long-term consequences for a child's speaking ability.
"When babies are born — when they're young — they are sensitive to all sorts of sound differences. They're ready to learn any language that's in their environment," Christina Zhao, the director of the Lab for Early Auditory Perception at the University of Washington and the paper's corresponding author, told The Academic Times. "So they have to kind of go through this process of becoming more specialized in their native language."
During this critical time, infants become more responsive to familiar sounds and syllables while learning to shut out speech sounds that are not related to the language that is most common in their environment. And because this speech discrimination helps infants go on to develop more complex language skills, infants who have not yet developed the ability to focus on native sounds may be at heightened risk of developing speech-language disorders.
With only 27 participants, the results are preliminary, Zhao emphasized. But if the findings are replicated on a larger scale, similar tests could be used to flag children who may be more likely to develop speech-language disorders, which encompass a wide range of issues related to articulation, expression and fluency.
"This definitely opens a lot of possibilities for us for future studies," Zhao said. "And so we're very excited to follow up on these results with hopefully bigger, stronger datasets and better models down the line."
Researchers also hope to identify early interventions, like specialized language lessons or social learning techniques, that may help infants who have trouble identifying native speech patterns. Prior studies by Zhao and her University of Washington colleagues have suggested that music exposure as an infant can also lead to structural changes in the brain that may promote early language acquisition.
Although the results included only children who grew up in monolingual environments, the researchers would like to better understand how exposure to second or third languages affects language learning in infancy. The standardized tests that monitor children's language development may also need reform, Zhao said, since most were designed with monolingual speakers in mind.
The demands of long-term studies, which must track the same participants over many years, add challenges to an already complex field. But Zhao is thankful that volunteers have remained willing to participate in each study, helping scientists gain a more robust understanding of early language processing.
"This study was done so many years ago, and we call [parents] four or five years later, and most of the people are so enthusiastic. They're like, 'Of course, we'll come!'" Zhao said. "Our type of research is really not possible without family participation."
The study, "Infants' neural speech discrimination predicts individual differences in grammar ability at 6 years of age and their risk of developing speech-language disorders," published March 31 in Developmental Cognitive Neuroscience, was authored by Christina Zhao and Patricia K. Kuhl, University of Washington; and Olivia Boorom and Reyna Gordon, Vanderbilt University Medical Center.