An infant possesses an amazing, and fleeting, gift: the ability to master a language quickly. At six months, the child can learn the sounds that make up English words and, if also exposed to Quechua and Tagalog, he or she can pick up the unique acoustic properties of those languages, too. By age three, a toddler can converse with a parent, a playmate or a stranger.
I still marvel, after four decades of studying child development, at how a child can go from random babbling to speaking fully articulated words and sentences just a few years later—a mastery achieved more quickly than that of any other complex skill acquired during a lifetime. Only in the past few years have neuroscientists begun to get a picture of what is happening in a baby's brain during this learning process that takes the child from gurgling newborn to a wonderfully engaging youngster.
At birth, the infant brain can perceive the full set of 800 or so sounds, called phonemes, that can be strung together to form all the words in every language of the world. During the second half of the first year, our research shows, a mysterious door opens in the child's brain. He or she enters a “sensitive period,” as neuroscientists call it, during which the infant brain is ready to receive the first basic lessons in the magic of language.
The time when a youngster's brain is most open to learning the sounds of a native tongue begins at six months for vowels and at nine months for consonants. It appears that the sensitive period lasts for only a few months but is extended for children exposed to sounds of a second language. A child can still pick up a second language with a fair degree of fluency until age seven.
The built-in capacity for language is not by itself enough to get a baby past the first utterances of “Mama” and “Dada.” Gaining mastery of the most important of all social skills is helped along by countless hours spent listening to parents speak the silly vernacular of “parentese.” Its exaggerated inflections—“You're a preettee babbee”—serve the not-so-frivolous purpose of furnishing daily lessons in the intonations and cadences of the baby's native tongue. Our work puts to rest the age-old debate about whether genes or the environment prevails during early language development. Both play starring roles.
Knowledge of early language development has now reached a level of sophistication that is enabling psychologists and physicians to fashion new tools to help children with learning difficulties. Studies have begun to lay the groundwork for using recordings of brain waves to determine whether a child's language abilities are developing normally or whether an infant may be at risk for autism, attention deficit or other disorders. One day a routine visit to the pediatrician may involve a baby brain examination, along with vaccinations for measles, mumps and rubella.
The statistics of baby talk
The reason we can contemplate a test for language development is that we have begun to understand how babies absorb language with seeming ease. My laboratory and others have shown that infants use two distinct learning mechanisms at the earliest stages of language acquisition: one that recognizes sound through mental computation and another that requires intense social immersion.
To learn to speak, infants have to know which phonemes make up the words they hear all around them. They must single out the 40 or so phonemes, of the full 800, that are used to form words in their own language. This task requires detecting subtle differences in spoken sound. A change in a single consonant can alter the meaning of a word—“bat” to “pat,” for instance. And a simple vowel like “ah” varies widely when spoken by different people at different speaking rates and in different contexts—“Bach” versus “rock.” Extreme variation in phonemes is why Apple's Siri still does not work flawlessly.
My work and that of Jessica Maye, then at Northwestern University, and her colleagues have shown that statistical patterns—the frequency with which sounds occur—play a critical role in helping infants learn which phonemes are most important. Children between eight and 10 months of age still do not understand spoken words. Yet they are highly sensitive to how often phonemes occur—what statisticians call distributional frequencies. The most important phonemes in a given language are the ones spoken most. In English, for example, the “r” and “l” sounds are quite frequent. They appear in words such as “rake” and “read” and “lake” and “lead.” In Japanese, the English-like “r” and “l” also occur but not as often. Instead the Japanese “r” sound is common but is rarely found in English. (The Japanese word “raamen” sounds like “laamen” to American ears because the Japanese “r” is midway between the American “r” and “l.”)
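To make the idea of distributional frequency concrete, here is a minimal sketch in Python, assuming a toy, hand-labeled set of phoneme transcriptions invented purely for illustration (it is not data from the studies described here). It simply counts how often each phoneme occurs and reports each one's share of everything “heard.”

```python
from collections import Counter

# Toy, hand-labeled phoneme transcriptions (illustrative only).
# Each word is written as a list of phoneme symbols.
english_like_words = [
    ["r", "ey", "k"],   # "rake"
    ["r", "iy", "d"],   # "read"
    ["l", "ey", "k"],   # "lake"
    ["l", "iy", "d"],   # "lead"
]

def distributional_frequencies(words):
    """Return each phoneme's share of all phoneme tokens heard."""
    counts = Counter(phoneme for word in words for phoneme in word)
    total = sum(counts.values())
    return {phoneme: n / total for phoneme, n in counts.items()}

print(distributional_frequencies(english_like_words))
# In this tiny sample every phoneme appears equally often (1/6 each).
# In a large English sample, "r" and "l" would stand out as frequent;
# in a Japanese sample they would be rare and the Japanese "r" common.
```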
The statistical frequency of particular sounds affects the infant brain. In one study of infants in Seattle and Stockholm, we monitored their perception of vowel sounds at six months and demonstrated that each group had already begun to focus in on the vowels spoken in their native language. The culture of the spoken word had already pervaded and affected how the baby's brain perceived sounds.
What exactly was going on here? Maye has shown that the brain at this age has the requisite plasticity to change how infants perceive sounds. A Japanese baby who hears sounds from English learns to distinguish the “r” and the “l” in the way they are used in the U.S. And a baby being raised among native English speakers could likewise pick up the characteristic sounds of Japanese. It appears that learning sounds in the second half of the first year establishes connections in the brain for one's native tongue but not for other languages, unless a child is exposed to multiple languages during that period.
Later in childhood, and particularly as an adult, listening to a new language does not produce such dramatic results—a traveler to France or Japan can hear the statistical distributions of sounds from another language, but the brain is not altered by the experience. That is why it is so difficult to pick up a second language later on.
A second form of statistical learning lets infants recognize whole words. As adults, we can distinguish where one word ends and the next begins. But the ability to isolate words from the stream of speech requires complex mental processing. Speech arrives at the ear as a continuous stream of sound that lacks the separations found between written words.
Jenny Saffran, now at the University of Wisconsin–Madison, and her colleagues—Richard Aslin of the University of Rochester and Elissa Newport, now at Georgetown University—were the first to discover that a baby uses statistical learning to grasp the sounds of whole words. In the mid-1990s Saffran's group published evidence that eight-month-old infants can learn wordlike units based on the probability that one syllable follows another. Take the phrase “pretty baby.” The syllable “pre” is more likely to be heard with “ty” than to accompany another syllable like “ba.”
In the experiment, Saffran had babies listen to streams of computer-synthesized nonsense words built from syllables, some of which occurred together more often than others. The babies' ability to track which syllables of the made-up language frequently co-occurred let them identify likely words.
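As a rough illustration of the statistic at play, the sketch below computes transitional probabilities from a short, pre-syllabified stream invented for this example: the probability that syllable B follows syllable A is the count of the pair A, B divided by the count of A. Within-word transitions come out high; transitions that span a word boundary come out lower.

```python
from collections import Counter

# Invented, pre-syllabified stream (illustrative only),
# roughly "pretty baby pretty doggy golden baby".
stream = ["pre", "ty", "ba", "by",
          "pre", "ty", "dog", "gy",
          "gol", "den", "ba", "by"]

pair_counts = Counter(zip(stream, stream[1:]))   # how often each syllable pair occurs
first_counts = Counter(stream[:-1])              # how often each syllable appears before another

def transitional_probability(first, second):
    """Estimate P(second | first): how often 'first' is followed by 'second'."""
    if first_counts[first] == 0:
        return 0.0
    return pair_counts[(first, second)] / first_counts[first]

print(transitional_probability("pre", "ty"))  # 1.0 -> "pre" + "ty" behaves like one word
print(transitional_probability("ty", "ba"))   # 0.5 -> a weaker link, suggesting a word boundary
```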
The discovery of babies' statistical-learning abilities in the 1990s generated a great deal of excitement because it offered a theory of language learning beyond the prevailing idea that a child learns only through parental conditioning and affirmations of whether a word is right or wrong. Infant learning occurs before parents realize that it is taking place. Further tests in my lab, however, produced a significant new finding that lends an important caveat to this story: statistical learning does not happen through passive listening alone.
Baby meet and greet
In our work, we discovered that infants need to be more than just computational geniuses processing clever neural algorithms. In 2003 we published the results of experiments in which nine-month-old infants from Seattle were exposed to Mandarin Chinese. We wanted to know whether infants' statistical-learning abilities would allow them to learn Mandarin phonemes.
In groups of two or three, the nine-month-olds listened to native Mandarin speakers who played with them on the floor, using books and toys. Two additional groups were also exposed to Mandarin: one watched a video of Mandarin being spoken; the other listened to an audio recording. A fourth group, run as a control, heard no Mandarin at all; instead its members listened to U.S. graduate students who spoke English while playing with them, using the same books and toys. All of this happened during 12 sessions that took place over the course of a month.
Infants from all four groups returned to the lab for psychological tests and brain monitoring to gauge their ability to single out Mandarin phonemes. Only the group exposed to Chinese from live speakers learned to pick up the foreign phonemes. Their performance, in fact, was equivalent to that of infants in Taipei who had been listening to their parents for their first 11 months.
Infants who were exposed to Mandarin by video or audio recording did not learn at all. Their ability to discriminate phonemes matched that of infants in the control group, who, as expected, performed no better than before the experiment.
The study provided evidence that learning for the infant brain is not a passive process. It requires human interaction—a necessity that I call “social gating.” This hypothesis can even be extended to explain the way many species learn to communicate. The experience of a young child learning to talk, in fact, resembles the way birds learn song.
I worked earlier with the late Allison Doupe of the University of California, San Francisco, to compare baby and bird learning. We found that for both children and zebra finches, social experience in the early months of life was essential. Both human and bird babies immerse themselves in listening to their elders, and they store memories of the sounds they hear. These recollections condition the brain's motor areas to produce sounds that match those heard frequently in the larger social community in which they are raised.
Exactly how social context contributes to the learning of a language in humans is still an open question. I have suggested, though, that parents and other adults provide both motivation and necessary information to help babies learn. The motivational component is driven by the brain's reward systems—and, in particular, brain areas that use the neurotransmitter dopamine during social interaction. Work in my lab has already shown that babies learn better in the presence of other babies—we are currently engaged in studies to find out why this is the case.
Babies who gaze into their parents' eyes also receive key social cues that help to speed the next stage of language learning—the understanding of the meaning of actual words. Andrew Meltzoff of the University of Washington has shown that young children who follow the direction of an adult's gaze pick up more vocabulary in the first two years of life than children who do not track these eye movements. The connection between looking and talking makes perfect sense and provides some explanation of why simply watching an instructional video is not good enough.
In the group that received live lessons, infants could see when the Mandarin teacher glanced at an object while naming it, a subtle action that tied the word to the object named. In a paper published in July, we also showed that as a Spanish tutor holds up new toys and talks about them, infants who look back and forth between the tutor and the toy, instead of just focusing on one or the other, learn both the phonemes and the words used during the study sessions. This example is an illustration of my theory that infants' social skills enable—or “gate”—language learning.
These ideas about the social component of early language learning may also explain some of the difficulties encountered by infants who go on to develop disorders such as autism. Children with autism lack basic interest in speaking. Instead they fixate on inanimate objects and fail to pay attention to the social cues that are so essential to language learning.
Say, “Hiiiii!”
An infant's ability to learn to speak depends not only on being able to listen to adults but also on the manner in which grownups talk to the child. Whether in Dhaka, Paris, Riga or the Tulalip Indian Reservation near Seattle, researchers who listen to people talk to a child have learned one simple truth: an adult speaks to a child differently than to other adults. Cultural ethnographers and linguists have dubbed it “baby talk,” and it turns up in most cultures. At first, it was unclear whether baby talk might hinder language learning. Numerous studies, however, have shown that motherese or parentese, the revisionist name for baby talk, actually helps an infant learn. Parentese, in fact, is not a modern invention: Varro (116 to 27 B.C.), an ancient Roman expert on syntax, noted that certain shortened words were used only when talking to babies and young children.
My lab—and those of Anne Fernald at Stanford University and Lila Gleitman at the University of Pennsylvania—has looked at the specific sounds of parentese that intrigue infants: the higher pitch, slower tempo and exaggerated intonation. When given a choice, infants will choose to listen to short audio clips of parentese instead of recordings of the same mothers speaking to other adults. The high-pitched tone seems to act as an acoustic hook for infants that captures and holds their attention.
Parentese exaggerates differences between sounds—one phoneme can be easily discriminated from another. Our studies show that this exaggerated speech most likely helps infants as they commit these sounds to memory. In a recent study by my group, Nairán Ramírez-Esparza, now at the University of Connecticut, had infants wear high-fidelity miniature audio recorders fitted into lightweight vests, which they wore at home throughout the day. The recordings let us enter the children's auditory world and showed that infants whose parents spoke to them in parentese at that age had learned, one year later, more than twice as many words as infants whose parents used the baby vernacular less frequently.
Signatures of learning
Brain scientists who study child development are becoming excited about the possibility of using our growing knowledge of early development to identify signatures of brain activity, known as biomarkers, that provide clues that a child may be running into difficulty in learning language. In a recent study in my lab, two-year-old children with autism spectrum disorder listened to both known and unfamiliar words while we monitored their brains' electrical activity.
We found that the degree to which a particular pattern of brain waves appeared in response to known words predicted each child's future language and cognitive abilities at ages four and six. These measurements assessed the child's success at learning from other people. They show that if a youngster has the ability to learn words socially, it bodes well for learning in general.
The prospect for being able to measure an infant or toddler's cognitive development is improving because of the availability of new tools to judge their ability to detect sounds. My research group has begun to use magnetoencephalography (MEG), a safe and noninvasive imaging technology, to demonstrate how the brain responds to speech. The machine contains 306 SQUID (superconducting quantum interference device) sensors placed within an apparatus that looks like a hair dryer. When the infant sits in it, the sensors measure tiny magnetic fields that indicate specific neurons firing in the baby's brain as the child listens to speech. We have already demonstrated with MEG that there is a critical time window in which babies seem to be going through mental rehearsals to prepare to speak their native language.
MEG is too expensive and difficult to use in a neighborhood medical clinic. But these studies pave the way by identifying biomarkers that will eventually be measured with portable and inexpensive sensors that can be used outside a university lab.
If reliable biomarkers for language learning can be identified, they should help determine whether children are developing normally or are at risk for early-life, language-related disabilities, including autism spectrum disorder, dyslexia, fragile X syndrome and other disorders. By understanding the brain's uniquely human capacity for language—and when exactly it is possible to shape it—we may be able to administer therapies early enough to change the future course of a child's life.