NurtureShock: New Thinking About Children - Part 9

Part 9

Back in 1997, Aigner-Clark's product seemed to piggyback on Kuhl's research. But that's quite ironic, because in the years since, Patricia Kuhl's ongoing findings have helped explain why baby DVDs don't work.

First, in a longitudinal study, Kuhl showed that neural commitment to a primary language isn't a bad thing. The more "committed" a baby's brain is, at nine months old, the more advanced his language will be at three years old. With a weaker connection, children don't progress as quickly, and this seems to have a lasting impact.

Second, Kuhl went on to discover that babies' brains do not learn to recognize foreign-language phonemes off a videotape or audiotape-at all. They absolutely do learn from a live, human teacher. In fact, babies' brains are so sensitive to live human speech that Kuhl was able to train American babies to recognize Mandarin phonemes (which they'd never heard before) from just twelve sessions with her Chinese graduate students, who sat in front of the kids for twenty minutes each session, playing with them while speaking in Mandarin. By the end of the month, three sessions per week, those babies' brains were virtually as good at recognizing Mandarin phonemes as the brains of native-born Chinese infants who'd been hearing Mandarin their entire young lives.

But when Kuhl put American infants in front of a videotape or audio recording of Mandarin speech, the infants' brains absorbed none of it. They might as well have heard meaningless noise. This was true even though the babies seemed quite engaged by the videos. Kuhl concluded: "The more complex aspects of language, such as phonetics and grammar, are not acquired from TV exposure."

By implication, we can conclude that baby DVDs don't delay neural commitment; rather, they have virtually no effect on auditory processing.

The irony here only deepens. One might have noticed that all of these scholars are at the University of Washington. Kuhl and Meltzoff are Co-Directors of the same lab. So when Disney CEO Iger attacked the Pediatrics scholars, he was attacking the very laboratory and institution that Baby Einstein had hailed when its Language Nursery DVD was first released.

So why does an infant need a live human speaker to learn language from? Why are babies learning nothing from the audio track of a baby DVD, while their language isn't impaired by exposure to regular TV?

The evidence suggests one factor is that baby DVDs rely on disembodied audio voice-overs, unrelated to the abstract imagery of the video track. Meanwhile, grown-up television shows live actors, usually close up-kids can see their faces as they talk. Studies have repeatedly shown that seeing a person's face makes a huge difference.

Babies learn to decipher speech partly by lip-reading: they watch how people move their lips and mouths to produce sounds. One of the first things that babies must learn-before they can comprehend any word meanings-is when one word ends and another begins. Without segmentation, an adult's words probably sound about the same to an infant as does his own babbling. At 7.5 months, babies can segment the speech of people they see speaking. However, if the babies hear speech while looking at an abstract shape, instead of a face, they can't segment the sounds: the speech once again is just endless gibberish. (Even for adults, seeing someone's lips as he speaks is the equivalent of a 20-decibel increase in volume.) When a child sees someone speak and hears his voice, there are two sensory draws-two simultaneous events both telling the child to pay attention to this single object of interest-this moment of human interaction. The result is that the infant is more focused, remembers the event, and learns more. Contrast that to the disconnected voice-overs and images of the baby videos. The sensory inputs don't build on each other. Instead, they compete.

Would baby DVDs work better if they showed human faces speaking? Possibly. But there's another reason-a more powerful reason-why language learning can't be left to DVDs. Video programming can't interact with the baby, responding to the sounds she makes. Why this is so important requires careful explanation.

Wondering what parents' prevailing assumptions about language acquisition were, we polled some parents, asking them why they thought one kid picked up language far faster than another. Specifically, we were asking about two typically developing kids, without hearing or speech impairments.

Most parents admitted they didn't know, but they had absorbed a little information here and there to inform their guesses. One of these parents was Anne Frazier, mother to ten-month-old Jon and a litigator at a prestigious Chicago law firm; she was working part-time until Jon turned one. Frazier had a Chinese client base and, before having Jon, occasionally traveled to Asia. She'd wanted to learn Mandarin, but her efforts were mostly for naught. She had decided that she was too old-her brain had lost the necessary plasticity-so she was determined to start her son young. When she was dressing or feeding her baby, she had Chinese-language news broadcasts playing on the television in the background. They never sat down just to watch television-she didn't think that would be good for Jon-but Frazier did try to make sure her child heard twenty minutes of Mandarin a day. She figured it couldn't hurt.

Frazier also assumed that Jon would prove to have some level of innate verbal ability-but this would be affected by the sheer amount of language Jon was exposed to. Having a general sense that she needed to constantly talk to her child, Frazier was submitting her kid to a veritable barrage of words.

"Nonstop chatter throughout the day," she affirmed. "As we run errands, or take a walk, I describe what's on the street-colors, everything I see. It's very easy for a mother to lose her voice."

She sounded exhausted describing it. "It's hard to keep talking to myself all the time," Frazier confessed. "Infants don't really contribute anything to the conversation."

Frazier's story was similar to many we heard. Parents were vague on the details, but word had gotten out that innate ability wasn't the only factor: children raised in a more robust, language-intensive home will hit developmental milestones quicker. This is also the premise of popular advice books for parents of newborns, which usually devote a page to reminding parents to talk a lot to their babies, and around their babies. A fast-selling new product is the $699 "verbal pedometer," a sophisticated gadget the size of a cell phone that can be slipped into the baby's pocket or car seat. It counts the number of words the baby hears during an hour or a day.

The verbal pedometer is actually used by many researchers who study infants' exposure to language. The inspiration behind such a tool is a famous longitudinal study by Drs. Betty Hart and Todd Risley, from the University of Kansas, published in 1994.

Hart and Risley went into the homes of a variety of families with a seven- to nine-month-old infant. They videotaped an hour of interactions while the parent was feeding the baby or doing chores with the baby nearby-and they repeated this once a month until the children were three. Painstakingly breaking down those tapes into data, Hart and Risley found that infants in welfare families heard about 600 words per hour. Meanwhile, the infants of working-class families heard 900 words per hour, and the infants of professional-class families heard 1,500 words per hour. These gaps only increased when the babies turned into toddlers-not because the parents spoke to their children more often, but because they communicated in more complex sentences, adding to the word count.

This richness of language exposure had a very strong correlation to the children's resulting vocabulary. By their third birthday, children of professional parents had spoken vocabularies of 1,100 words, on average, while the children of welfare families were less than half as articulate-speaking only 525 words, on average.

The complexity, variety, and sheer amount of language a child hears is certainly one driver of language acquisition. But it's not scientifically clear that merely hearing lots of language is the crucial, dominant factor. For their part, Hart and Risley wrote pages listing many other variables at play, all of which had correlations with the resulting rate at which the children learned to speak.

In addition, the words in the English language that children hear most often are words like "was," "of," "that," "in," and "some"-these are termed "closed class" words. Yet children learn these words the most slowly-usually not until after their second birthday. By contrast, children learn nouns first, even though nouns are the least commonly occurring words in parents' natural speech to children.

The basic paradigm-that a child's language output is a direct function of the sheer volume of input-also doesn't explain why two children, both of whom have similar home experiences (they might both have highly educated, articulate mothers, for instance), can acquire language on vastly divergent timelines.

A decade ago, Hart and Risley's work was the cutting edge of language research. It's still one of the most quoted and cited studies in all of social science. But in the last decade, other scholars have been flying under the radar, teasing out exactly what's happening in a child's first two years that pulls her from babble to fluent speech.

If there's one main lesson from this newest science, it's this: the basic paradigm has been flipped. The information flow that matters most runs in the opposite direction from what we previously assumed. The central role of the parent is not to push massive amounts of language into the baby's ears; rather, it is to notice what's coming from the baby-from his mouth, his eyes, and his fingers-and respond accordingly. If, like Anne Frazier, you think a baby isn't contributing to the conversation, you've missed something really important.

In fact, one of the mechanisms helping a baby to talk isn't a parent's speech at all-it's not what a child hears from a parent, but what a parent accomplishes with a well-timed, loving caress.

Dr. Catherine Tamis-LeMonda, of New York University, has spent the last decade looking specifically at parent-responsiveness to infants, and its impact on language development. Along with Dr. Marc Bornstein of the National Institutes of Health, she sent teams of researchers into homes of families with nine-month-old babies. For the most part, these were affluent families with extremely well-educated parents living in the New York City area. The researchers set some age-appropriate toys down on the floor and asked the mother to play with her child for ten minutes.

These interactions were videotaped, and the ten-minute tapes were later broken down second by second. Every time the baby looked to the mother, or babbled, or reached for a toy was noted. The children did this, on average, about 65 times in ten minutes, but some kids were very quiet that day and others very active. Every time the mother responded, immediately, was also noted. The moms might say, "Good job," or "That's a spoon," or "Look here." The moms responded about 60 percent of the time. Responses that were late, or off-timed (outside a five-second window), were categorized separately.

The researchers then telephoned the mothers every week, for the next year, to track what new words the child was using that week-guided by a checklist of the 680 words and phrases a toddler might know. This created a very accurate record of each child's progression. (They also repeated the in-home videotape session when the infant was thirteen months old, to get a second scoring of maternal responsiveness.) On average, the children in Tamis-LeMonda's study said their first words just before they were thirteen months old. By eighteen months, the average toddler had 50 words in her vocabulary, was combining words together, and was even using language to talk about the recent past. But there was great variability within this sample, with some tots hitting those milestones far earlier, others far later.

The variable that best explained these gaps was how often a mom rapidly responded to her child's vocalizations and explorations. The toddlers of high-responders were a whopping six months ahead of the toddlers of low-responders. They were saying their first word at ten months, and reaching the other milestones by fourteen months.

Remember, the families in this sample were all well-off, so all the children were exposed to robust parent vocabularies. All the infants heard lots of language. How often a mother initiated a conversation with her child was not predictive of the language outcomes-what mattered was, if the infant initiated, whether the mom responded.

"I couldn't believe there was that much of a shift in developmental timing," Tamis-LeMonda recalled. "The shifts were hugely dramatic." She points to two probable mechanisms to explain it. First, through this call-and-response pattern, the baby's brain learns that the sounds coming out of his mouth affect his parents and get their attention-that voicing is important, not meaningless. Second, a child needs to associate an object with a word, so the word has to be heard just as an infant is looking at or grabbing it.

In one paper, Tamis-LeMonda compares two little girls in her study, Hannah and Alyssa. At nine months old, both girls could understand about seven words, but weren't saying any yet. Hannah was vocalizing and exploring only half as often as Alyssa-who did so 100 times during the ten minutes recorded. But Hannah's mom was significantly more responsive. She missed very few opportunities to respond to Hannah, and described whatever Hannah was looking at twice as often as Alyssa's mother did with Alyssa. At thirteen months, this gap was confirmed: Hannah's mom responded 85% of the time, while Alyssa's mom did so about 55% of the time.

Meanwhile, Hannah was turning into a chatterbox. Alyssa progressed slowly. And the gap only increased month by month. During their eighteenth month, Alyssa added 8 new words to her productive vocabulary, while in that same single-month period, Hannah added a phenomenal 150 words, 50 of which were verbs and adjectives.

At twenty-one months, Alyssa's most complicated usages were "I pee" and "Mama bye-bye," while Hannah was using prepositions and gerunds regularly, saying sentences like: "Yoni was eating an onion bagel." By her second birthday, it was almost impossible to keep track of Hannah's language, since she could say just about anything.

This variable, how a parent responds to a child's vocalizations-right in the moment-seems to be the most powerful mechanism pulling a child from babble to fluent speech.

Now, if we take a second look at the famous Hart and Risley study, in light of Tamis-LeMonda's findings, this same mechanism is apparent. In Hart and Risley's data, the poor parents initiated conversations just as often with their tots as affluent parents (about once every two minutes). Those initiations were even slightly richer in language than those of the affluent parents. But the real gap was in how parents responded to their children's actions and speech.

The affluent parents responded to what their child babbled, said, or did over 200 times per hour-a vocal response or a touch of the hand was enough to count. Each time the child spoke or did something, the parent quickly echoed back. The parents on welfare responded to their children's words and behavior less than half as often, occupied with the burden of chores and larger families. (Subsequent analysis by Dr. Gary Evans showed that parent responsiveness was also dampened by living in crowded homes; crowding leads people to psychologically withdraw, making them less responsive to one another.)

Tamis-LeMonda's scholarship relies on correlations-on its own, it's not actually proof that parent-responsiveness causes infants to speed up their language production. To really be convinced that one triggers the other, we'd need controlled experiments where parents increase their response rate, and track whether this leads to real-time boosts in infant vocalization.

Luckily, those experiments have been done-by Dr. Michael Goldstein at Cornell University. He gets infants to change how they babble, in just ten minutes flat.

The first time a mother and her infant arrive for an appointment at Michael Goldstein's lab in the psychology building on the Cornell University campus, they're not tested at all. They're simply put in a quiet room with a few toys, for half an hour, to get used to the setting. The walls are white, decorated with Winnie the Pooh stickers. The carpet is light brown and comfortable to sit on. On the floor are many of the same playthings the infant might have at home-a brightly colored stuffed inchworm, stacking rings, a play mat with removable shapes, and a toy chest to explore. At three points around the room, video cameras extend from the wall, draped in white cloth to be inconspicuous. The mom knows full well that she is being watched, both on camera and through a large one-way glass pane. But this is otherwise a nice moment to interact with her baby-she can't be distracted by the cell phone or household chores. Her baby pulls himself to his mom's lap, puts the nearby toys in his mouth, and if he can crawl, perhaps pulls himself up to look inside the toy chest.

The next day, mother and baby return. In Goldstein's seminal experiment, the nine-month-old infant is put in denim overalls that carry a very sensitive wireless microphone in the chest pocket. The mother is given a pair of wireless headphones that still allow her to hear her baby. They are put back in the playroom, and again asked to play together naturally. After ten minutes, a researcher's voice comes over the headphones with instructions. When the mom hears the prompt "Go ahead," she's supposed to lean in even closer to her baby, pat or rub the child, and maybe give him a kiss.

The mom doesn't know what triggers this cue. The mom just knows that, over the next ten minutes, she hears "Go ahead" a lot, almost six times a minute. She might notice that her baby is vocalizing more-or that he's waving his arms or flapping his feet-but she won't know what's triggering this. For the final ten minutes, she's asked to simply play and interact naturally with her child again.

When mother and infant leave, she has almost no idea what the researchers might have been up to. For two half-hour periods, she merely played and talked with her child.

But here's what it was like on the other side of the one-way glass: during those middle ten minutes, every time the child made a voiced sound (as opposed to a cough, grunt, or raspberry), it could be heard loudly over speakers in the observation room. Immediately, the researcher told the mother to "go ahead," and within a second the mother had affectionately touched the child. Later that night, a graduate student would sit down with the session videotape and take notes, second by second, tracking how often the baby babbled, and what quality of sounds he made.

While all baby babble might sound like gibberish, there's actually a progression of overlapping stages, with each type of babble more mature and advanced than the one prior. "No less than eighty muscles control the vocal tract, which takes a year or more to gain control of," Goldstein explained. From birth, children make "quasi-resonant" vowel sounds. They use the back of the vocal tract with a closed throat and little breath support. Because the larynx hasn't yet descended, what breath there is passes through both the mouth and nose. The result is nasal and creaky, often sounding like the baby is fussy (which it's not).

While the child won't be able to make the next-stage sound for several months, there's still a very important interaction with parents going on. They basically take turns "talking," as if having a mock conversation. The baby coos, and the daddy responds, "Is that so?" The baby babbles again, and the daddy in jest returns, "Well, we'll have to ask Mom."

While most parents seem to intuit their role in this turn-taking pattern spontaneously-without being told to do so by any handbook-they don't all do so equally well. A remarkable study of vocal turn-taking found that when four-month-old infants and their parents exhibited better rhythmic coupling, those children would later have greater cognitive ability.

According to Goldstein, "Turn-taking is driving the vocal development-pushing the babies to make more sophisticated sounds."

Parents find themselves talking to their baby in the singsongy cadence that's termed "parentese," without knowing why they're strangely compelled to do so. They're still using English, but the emotional affect is giddily upbeat and the vowels are stretched, with highly exaggerated pitch contours. It's not cultural-it's almost universal-and the phonetic qualities help children's brains discern discrete sounds.

Around five months, a baby has gained enough control of the muscles in the vocal tract to open her throat and push breath through to occasionally produce "fully resonant" vowels. "To a mother of a five-month-old," Goldstein said, "hearing a fully resonant sound from her baby is a big deal. It's very exciting." If her response is well-timed, the child's brain notices the extra attention these new sounds win. At this point, parents start to phase out responding to all the old sounds, since they've heard them so often. That selective responsiveness in turn further pushes the child toward more fully-resonant sounds.

Soon the baby is adding "marginal syllables," consonant-vowel transitions-rather than "goo" and "coo," more like "ba" and "da," using the articulators in the front of the mouth. However, the transition from the consonant to the vowel is drawn out, since the tongue and teeth and upper cleft can't get out of the way fast enough, causing the vowel to sound distorted. (This is why so many of a baby's first words start with B and D-those are the first proper consonants the muscles can make.) As early as six months, but typically around nine months, infants start producing some "canonical syllables," the basic components of adult speech. The consonant-vowel transition is fast, and the breath is quick. The child is almost ready to combine syllables into words. "We used nine-month-olds in our study because at that age, they are still commonly expressing all four types of babble," Goldstein said. Quasi-resonant vowel babble might still be in the majority, and canonical syllables quite rare.

With this developmental scale in mind, it's shocking to hear the difference in how the baby vocalizes over the course of Goldstein's experiment. In the first ten minutes (that baseline natural period when the mom responded as she might at home), the average child vocalized 25 times. The rate leapt to 55 times in the middle ten minutes, when the mom was being coached to "go ahead" by Goldstein. The complexity and maturity of the babble also shot up dramatically; almost all vowels were now fully voiced, and the syllable formation improved. Canonical syllables, previously infrequent, now were made half the time, on average.

To my ear, it was stunning-the children literally sounded five months older, during the second ten-minute period, than they had in the first.

"What's most important to note here is that the infant was not mimicking his parent's sounds," Goldstein noted.

During those middle ten minutes, the parent was only caressing the child, to reward the babble. The child wasn't hearing much out of his mother's mouth. But the touching, by itself, had a remarkable effect on the frequency and maturity of the babble.

Goldstein reproduced the experiment, asking parents to speak to their children as well as touch them. Specifically, he told half the parents what vowel sound to make; the other half he fed a consonant-vowel syllable that was wordlike, such as "dat." Not surprisingly, the tots who heard vowels uttered more vowels, and those who heard syllables made more canonical syllables. Again though, the babies weren't repeating the actual vowel or consonant-vowel. Instead, they adopted the phonological pattern. Parents who said "ahh" might hear an "ee" or an "oo" from their baby, and those who said "dat" might hear "bem." At this tender age, infants aren't yet attempting to parrot the actual sound a parent makes; they're learning the consonant-vowel transitions, which they will soon generalize to all words.

To some degree, Goldstein's research seems to have unlocked the secret to learning to talk-he's just given eager parents a road map for how to fast-track their infants' language development. But Goldstein is very careful to warn parents against overdoing it. "Children need breaks for their brain to consolidate what it's learned," he points out. "Sometimes children just need play time, alone, where they can babble to themselves." He also cites a long trail of scholarship, back to B.F. Skinner, on how intermittent rewards are ultimately more powerful than constant rewards.

And lest any parent pull her infant out of day care in order to ensure he's being responded to enough, Goldstein says, "The mix of responses a baby gets in a high-quality day care is probably ideal."

Tamis-LeMonda also warns against overstimulation. Her moms weren't responding at that high rate all day. "In my study, the mothers were told to sit down and play with their infant and these toys. But the same mom, when feeding the baby, might respond only thirty percent of the time. When the child is playing on the floor while the mom is cooking, it might be only ten percent. Reading books together, they'd have a very high response rate again."

Goldstein has two other points of caution, for parents gung-ho on using his research to help their babies. His first concern is that a parent, keen to improve his response rate, might make the mistake of over-reinforcing less-resonant sounds when a baby is otherwise ready to progress, thereby slowing development. This would reward a baby for immature sounds, making it too easy for the baby to get attention. The extent to which parents, in a natural setting, should phase out responses to immature sounds, and become more selective in their response, is thus far unknown.

Goldstein's second clarification comes from a study he co-authored with his partner at Cornell, Dr. Jennifer Schwade. As Goldstein's expertise is a tot's first year of life, Schwade's expertise is the second year, when children learn their first 300 words. One of the ways parents help infants is by doing what's called "object labeling"-telling them, "That's your stroller," "See the flower?," and "Look at the moon." Babies learn better from object labeling when the parent waits for the baby's eyes to naturally be gazing at the object. The technique is especially powerful when the infant both gazes and vocalizes, or gazes and points. Ideally, the parent isn't intruding or directing the child's attention-instead he's following the child's lead. When the parent times the label correctly, the child's brain associates the sound with the object.

Parents screw this up in two ways. First, they intrude rather than let the child show some curiosity and interest first. Second, they ignore what the child is looking at and instead take their cues from what they think the child was trying to say.

The baby, holding a spoon, might say "buh, buh," and the zealous parent thinks, "He just said 'bottle,' he wants his bottle," and echoes to the child, "Bottle? You want your bottle? I'll get you your bottle." Inadvertently, the parent just crisscrossed the baby, teaching him that a spoon is called "bottle." Some parents, in Goldstein and Schwade's research, make these mismatches of speech 30% of the time. "Beh" gets mistaken by parents as "bottle," "blanket," or "brother." "Deh" is interpreted as "Daddy" or "dog," "kih" as "kitty," and "ebb" as "apple." In fact, at nine months old, the baby may mean none of those-he's just making a canonical syllable.

Pretending the infant is saying words, when he can't yet, can really cause problems.

Proper object labeling, when the infants were nine months, had an extremely strong positive correlation (.81) with the child's vocabulary six months later. Crisscrossed labeling-such as saying "bottle" when the baby was holding a spoon-had a strongly negative correlation (-.68) with resulting vocabulary. In real-life terms, what did this mean? The mother in Schwade's study who was best at object labeling had a fifteen-month-old daughter who understood 246 words and produced 64 words. By contrast, the mother who crisscrossed her infant the most had a fifteen-month-old daughter who understood only 61 words and produced only 5.

According to Schwade's research, object labeling is just one of any number of ways that adults scaffold language for toddlers. Again, these are things parents tend to do naturally, but not equally well. In this section, we'll cover five of those techniques.

For instance, when adults talk to young children about small objects, they frequently twist the object, or shake it, or move it around-usually synchronizing the movements to the singsong of parentese. This is called "motionese," and it's very helpful in teaching the name of the object. Moving the object helps attract the infant's attention, turning the moment into a multisensory experience. But the window to use motionese closes at fifteen months-by that age, children no longer need the extra motion, or benefit from it.

Just as multisensory inputs help, so does hearing language from multiple speakers.

University of Iowa researchers recently discovered that fourteen-month-old children failed to learn a novel word if they heard it spoken by a single person, even if the word was repeated many times. The fact that there was a word they were supposed to be learning just didn't seem to register. Then, instead of having the children listen to the same person speaking many times, they had kids listen to the word spoken by a variety of different people. The kids immediately learned the word. Hearing multiple speakers gave the children the opportunity to take in how the phonics were the same, even if the voices varied in pitch and speed. By hearing what was different, they learned what was the same.

A typical two-year-old child hears roughly 7,000 utterances a day. But those aren't 7,000 unique sayings, each one a challenge to decode. A lot of that language is already familiar to a child. In fact, 45% of utterances from mothers begin with one of these 17 words: what, that, it, you, are/aren't, I, do/don't, is, a, would, can/can't, where, there, who, come, look, and let's.

With a list of 156 two- and three-word combinations, scholars can account for the beginnings of two-thirds of the sentences mothers say to their children.

These predictably repeating word combinations-known as "frames"-become the spoken equivalent of highlighting a text. A child already knows the cadence and phonemes for most of the sentence-only a small part of what's said is entirely new.

So you might think kids need to acquire a certain number of words in their vocabulary before they learn any sort of grammar-but it's the exact opposite. Grammar teaches vocabulary.

One example: for years, scholars believed that children learned nouns before they learned verbs; it was assumed children learn names for objects before they can comprehend descriptions of actions. Then scholars went to Korea. Unlike European languages, Korean sentences often end with a verb, not a noun. Twenty-month-olds there with a vocabulary of fewer than 50 words knew more verbs than nouns. The first words the kids learned were the last ones usually spoken-because they heard them more clearly.

Until children are eighteen months old, they can't make out nouns located in the middle of a sentence. For instance, a toddler might know all of the words in the following sentence: "The princess put the toy under her chair." However, hearing that sentence, a toddler still won't be able to figure out what happened to the toy, because "toy" came mid-sentence.

The word frames become vital frames of reference. When a child hears, "Look at the ___," he quickly learns that ___ is a new thing to see. Whatever comes after "Don't" is something he should stop doing-even if he doesn't yet know the words "touch" or "light socket."

Without frames, a kid is just existing within a real-life version of Mad Libs-trying to plug the few words he recognizes into a context where they may or may not belong.

This key concept-using some repetition to highlight the variation-also applies to grammatical variation.

The cousin to frames is the "variation set." In a variation set, the context and meaning of the sentence remain constant over the course of a series of sentences, but the vocabulary and grammatical structure change. For instance, a variation set would be: "Rachel, bring the book to Daddy. Bring him the book. Give it to Daddy. Thank you, Rachel-you gave Daddy the book."

In this way, Rachel learns that a "book" is also an "it," and that another word for "Daddy" is "him." That "bring" and "give" both involve moving an object. Grammatically, she heard the past tense of "give," that it's possible for nouns to switch from being subjects to being direct objects (and vice versa), and that verbs can be used as an instruction to act ("Give it") or a description of action taken ("You gave").

Variation sets are the specialty of Dr. Heidi Waterfall, a colleague of Schwade's at Cornell. Simply put, variation sets are highly effective at teaching both syntax and words-and the greater the variation (in nouns, verbs, conjugation, and placement) the better.

From motionese to variation sets-each element teaches a child what is signal and what is noise. But the benefits of knowing what to focus on and what to ignore can hardly be better illustrated than by the research on "shape bias."

For many of the object nouns kids are trying to learn, the world offers really confusing examples. Common objects like trucks, dogs, telephones, and jackets come in every imaginable color and size and texture. As early as fifteen months old, kids learn to make sense of the world by keying off objects' commonality of shape, avoiding the distraction of other details. But some kids remain puzzled over what to focus on, and their lack of "shape bias" holds back their language spurt.

However, shape bias is teachable. In one experiment, Drs. Linda Smith and Larissa Samuelson had seventeen-month-old children come into the lab for seven weeks of "shape training." The sessions were incredibly minimal-each was just five minutes long and the kids learned to identify just four novel shapes ("This is a wug. Can you find the wug?"). That's all it took, but the effect was amazing. The children's vocabulary for object names skyrocketed 256%.

A nine-month-old child is developing typically if he can speak even one word. With the benefit of proper scaffolding, he'll know 50 to 100 words within just a few months. By two, he will speak around 320 words; a couple of months later, over 570. Then the floodgates open. By three, he'll likely be speaking in full sentences. By the time he's off to kindergarten, he may easily have a vocabulary of over 10,000 words.

It was one thing to learn about these scaffolding techniques from Goldstein and Schwade-but it was another thing to actually see their power in action.

Ashley had that chance shortly after we returned from Cornell, when she met her best friends, Glenn and Bonnie Summer, and their twelve-month-old daughter Jenna, for a casual dinner in Westwood, a shopping area in West Los Angeles. Ashley thinks of Jenna as her niece, and she had brought a tiny, red Cornell sweatshirt for the baby. During dinner, Ashley also couldn't help but try some of the scaffolding techniques on Jenna.

Every time Jenna looked at something, Ashley instantly labeled it for her. "Fan," Ashley pronounced, when Jenna's gaze landed on the ceiling fan that beat the air. "Phone," she chimed, whenever Jenna's ears led her eyes to the pizza joint's wall-mounted telephone, ringing off the hook. Whenever Jenna babbled, Ashley immediately responded with a word or touch. Ashley clearly noticed the different babble stages in Jenna's chatter.

Jenna turned to her mother and made the baby sign gesture "More," tapping her fingertips together. She wanted another piece of the nectarine Bonnie had brought for her.

After giving the little girl the fruit, Bonnie complained: "It's the one baby sign she knows-a friend of mine taught it to her-and now I can't get Jenna to say 'More.' She used to try saying the word out loud, but now she only signs it. I hate it."

Ashley felt a little guilty; she too was messing with Jenna's language skills. But her guilt vanished when she realized that Jenna was babbling noticeably more than before. Jenna was looking straight at Ashley when she talked, using more consonant-vowel combinations, right on cue. Ash was ecstatic. There, in a Westwood dive, she and her niece had replicated Goldstein's findings, even down to the same fifteen-minute time frame.

Emboldened, Ashley asked Jenna's parents if she could try something. Jenna had about ten words in her spoken vocabulary-"milk," "book," "mama," and "bye bye," among others. But her parents had not yet been able to directly teach her a new word, on the spot. Since Goldstein's experiment had worked so well, Ashley decided to try Schwade's lesson on motionese. She took a small piece of the nectarine and danced it through the air, while saying, "Fr-uu-ii-t, Jen-na, fr-uu-ii-t." Jenna looked wide-eyed.

"Now, you do it," Ashley instructed Glenn and Bonnie.

"Froo-oooo-ooottt," Glenn said, bobbing the next piece of nectarine up and down. His attempt sounded more like a Halloween ghost than parentese. Ashley coached him-a little more singsong, a little more rhythm in the hand movement. Glenn tried it a second time: "Fro-ooo-oo-ttt." He set the nectarine chunk in front of Jenna.

"Oooot!!" piped Jenna, picking the piece up from the table.

Glenn started laughing, turned to Ashley and said, "I didn't think it was going to work quite that fast."

Ashley hadn't expected it to work that fast, either. Jenna kept repeating her new word until the baggie was empty. Needless to say, Jenna's parents were doing twice as much object-labeling and motionese by the time dinner was over. The next day, they used motionese to teach her "sock" and "shoe." Since then, they've increased their responsiveness to Jenna's babbles, and they've seen the difference.

In the 1950s and 1960s, Massachusetts Institute of Technology linguist Dr. Noam Chomsky altered the direction of social science with his theory of an innate Universal Grammar. He argued that what children hear and see and are taught, in combination, is just too fractured and pattern-defying to possibly explain how fast kids acquire language. The stream of input couldn't account for the output coming from kids' mouths. Chomsky highlighted the fact that young children can do far more than merely repeat sentences they've heard; without ever having been taught grammar, they can generate entirely novel sentences with near-perfect grammar. Therefore, he deduced that infants must be born with "deep structure," some underlying sense of syntax and grammar.

By the 1980s, Chomsky was the most quoted living scholar in all of academia, and remained at the top through the millennium.

However, in the intervening decades, each step of language acquisition has been partially decoded and, in turn, dramatically demystified. Rather than language arising from some innate template, each step of language learning seems to be a function of auditory and visual inputs, contingent responses, and intuitive scaffolding, all of which steer the child's attention to the relevant pattern. Even Chomsky himself has been considering the import of the newly discovered mechanisms of language learning. In 2005, Chomsky and his colleagues wrote, somewhat cryptically: "Once [the faculty of language] is fractionated into component mechanisms (a crucial but difficult process) we enter a realm where specific mechanisms can be empirically interrogated at all levels.... We expect diverse answers as progress is made in this research program."

This doesn't rule out the possibility that some portion is still innate, but the portion left inexplicable-and therefore credited to innate grammar-is shrinking fast.