We all know that parents talk differently to babies, speaking in what linguists call “motherese” and the rest of us call “baby talk.” Parents may feel a little silly using baby talk, but they shouldn’t: research shows not only that babies prefer listening to this exaggerated speech but also that it helps them learn new words more easily. “By highlighting the acoustic structure of speech, ‘motherese’ helps babies translate a torrent of sound into meaningful units of language,” explains Elise Piazza, a postdoctoral research scholar at Princeton University. Elise studies the acoustic aspects of language and communication. She got into the field by chance after hearing a radio interview with Diana Deutsch, a music perception researcher. Immediately enthralled by the idea of applying a scientific lens to music and language, she went on to study perception at Williams College and UC Berkeley. She’s currently looking at how children learn to detect structure in the sounds around them during early language acquisition.
Although scientists know a lot about how we alter rhythm and pitch in infant-directed speech, they know much less about the role of timbre. Timbre is a complex acoustic feature that helps us distinguish the unique “flavors” of the sounds around us. “When an orchestra tunes up, all instruments play the same pitch, but we can still hear their distinct textures or timbres: breathy woodwinds, buzzy brass, and mellow strings,” explains Elise. Timbre lets us immediately tell different sound sources apart and identify individual people, animals, and objects by sound alone. Elise and her colleagues wondered whether mothers might unconsciously change their timbre, altering their overall vocal fingerprints, when talking to their babies.
In the Princeton Baby Lab, they recorded 24 mothers playing with and reading to their seven- to twelve-month-old infants and speaking to an adult researcher. Half the participants were English speakers and half were not, but both groups spoke to their babies and to the researcher in their native language. Elise and her colleagues then quantified the timbre fingerprint of each mother’s speech to her baby and to the adult researcher using a concise, time-compressed measure of her vocal spectrum. They found that adult-directed and infant-directed speech had consistently different timbre fingerprints. In fact, a machine-learning algorithm could reliably distinguish between the two from just one second of speech.
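The article doesn’t reproduce the study’s code, but the general recipe is easy to sketch. A common stand-in for a “concise, time-compressed measure of the vocal spectrum” is a set of mel-frequency cepstral coefficients (MFCCs) averaged over time; the sketch below pairs that fingerprint with a simple classifier. The file names and labels are hypothetical placeholders, and this is an illustration of the technique rather than the authors’ actual pipeline.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def timbre_fingerprint(path, sr=16000, n_mfcc=13):
    """Compress a clip's spectrum into one time-averaged MFCC vector."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)  # average over time -> a compact "fingerprint"

# Hypothetical one-second clips: label 1 = infant-directed, 0 = adult-directed.
# A real dataset would need many clips per class for 5-fold cross-validation.
clips = [("mom01_infant.wav", 1), ("mom01_adult.wav", 0)]  # ...and many more
X = np.array([timbre_fingerprint(path) for path, _ in clips])
y = np.array([label for _, label in clips])

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # held-out classification accuracy
```

Averaging the spectral features over time is what makes this a timbre measure: it throws away the moment-to-moment pitch and rhythm contours and keeps only the overall spectral “color” of the voice.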
The most surprising finding was that this consistent timbre shift occurred across all 10 languages the participants spoke. In addition to English, they analyzed Spanish, Russian, Polish, Hungarian, German, French, Hebrew, Mandarin, and Cantonese. “Our classification algorithm, when trained on English data alone, could immediately distinguish adult-directed from infant-directed speech in a test set of non-English recordings (and vice versa when trained on non-English data),” added Elise. She predicts that the finding would generalize well to fathers and to non-parent adults speaking to infants. It may even be universal.
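The cross-language test Elise describes amounts to fitting the classifier on one language group and scoring it on the other. A minimal sketch, using random placeholder arrays in place of real fingerprints (which would come from the recordings themselves, as in the sketch above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder 13-dim fingerprints standing in for real MFCC vectors,
# split by the speaker's language; labels 1/0 = infant-/adult-directed.
X_en, y_en = rng.normal(size=(40, 13)), rng.integers(0, 2, size=40)
X_other, y_other = rng.normal(size=(40, 13)), rng.integers(0, 2, size=40)

clf = LogisticRegression(max_iter=1000).fit(X_en, y_en)  # train on English only
print("cross-language accuracy:", clf.score(X_other, y_other))  # test on the rest
```

On real data, above-chance accuracy here is what supports the claim that the timbre shift is not specific to any one language.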
Identifying the characteristics of baby talk across multiple languages could yield rich information about the amount and type of language babies and children are exposed to in different cultural environments. It could also help researchers and educators improve educational outcomes, such as vocabulary growth and later success in school. “Our framework could also lead to new research avenues on how speakers adjust their timbre during various forms of ‘code switching’ to accommodate different audiences, such as friends, bosses, political constituents, and romantic partners,” adds Elise.