We Taught AI Our Language — Now It’s Teaching Us Back
There was a time when artificial intelligence was just a quiet observer—learning from us, studying how we speak, write, and express emotion. But somewhere along the way, the roles reversed. The student became the teacher. And now, for the first time, scientists have found hard evidence that AI is influencing how we communicate—not just online, but in how we actually speak.
A new study from the Max Planck Institute for Human Development has uncovered measurable linguistic shifts in real human conversations following the release of ChatGPT. Researchers analyzed over 740,000 hours of spoken communication—from YouTube academic talks and podcasts spanning science, education, business, and beyond—and found that after November 2022, the frequency of certain words preferred by ChatGPT began to surge in human speech.
Words like delve, comprehend, boast, swift, and meticulous, once relatively uncommon, spiked sharply after ChatGPT entered the public sphere. This is no coincidence: the study used rigorous causal inference techniques to show that the shift was statistically linked to ChatGPT's release. In some cases, use of these "AI-favored" words rose by 25% to 50% per year.
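The core measurement is a before-and-after comparison: count how often a target word appears per million words in transcripts dated before versus after ChatGPT's November 2022 release. Here is a toy sketch of that idea, not the study's actual pipeline; the transcript texts, dates, and numbers below are all invented for illustration.

```python
from collections import Counter

# Toy sketch: per-million-word frequency of a target word in
# transcripts dated before vs. after a cutoff month.
TARGET = "delve"
CUTOFF = "2022-11"  # ChatGPT's public release month

# Invented example transcripts as (date, text) pairs.
transcripts = [
    ("2021-03", "today we explore the data and explain the results"),
    ("2022-01", "we look closely at the figures and explain trends"),
    ("2023-02", "let us delve into the data and delve deeper still"),
    ("2023-09", "we delve into these meticulous findings today"),
]

def per_million(docs):
    """Frequency of TARGET per million words across the given docs."""
    words = [w for _, text in docs for w in text.split()]
    if not words:
        return 0.0
    return 1e6 * Counter(words)[TARGET] / len(words)

before = per_million([d for d in transcripts if d[0] < CUTOFF])
after = per_million([d for d in transcripts if d[0] >= CUTOFF])
print(f"before: {before:.0f} per million, after: {after:.0f} per million")
```

The real study adds what this sketch omits: hundreds of thousands of hours of transcripts and causal inference to rule out ordinary drift, rather than a raw before/after difference.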
In other words, the machines we built to mimic our language are now subtly reshaping it.
It’s a remarkable twist in the story of communication. For centuries, every major medium—writing, printing, television, the internet—has transformed how humans share ideas. But generative AI represents something new: a feedback loop. These models are trained on human text, internalize our linguistic and cultural habits, and then feed their own learned patterns back into our conversations. The researchers call it a “closed cultural feedback loop”—a cycle where machines absorb human culture, then release it back into the world with their own stylistic imprint, which we in turn adopt.
And the influence isn’t confined to scripted or academic speech. The same study found traces of this linguistic contagion in spontaneous human dialogue—like podcasts and interviews, where people speak freely. Even in casual conversation, AI’s preferred turns of phrase are spreading. In science and technology podcasts, for instance, the word delve began appearing significantly more often after 2022, even though the speakers were not necessarily reading AI-generated text.
This suggests something deeper than imitation. We’re not just copying machine language—we’re internalizing it. The researchers note that this could stem from “cognitive ease”: we find AI-like phrasing familiar, fluent, and efficient, and so we adopt it unconsciously. Over time, these linguistic habits might even shape the way we think—a possibility linguists have long debated.
It’s easy to dismiss this as harmless—a few fancy words making their way into conversation—but the implications run deeper. Language doesn’t just reflect culture; it creates it. When certain patterns become dominant, others fade. The Max Planck team warns that as AI-driven language circulates globally, it could accelerate a kind of cultural homogenization, where diverse voices and regional styles are quietly smoothed into one algorithmic “standard.”
Future AI models, trained on data increasingly influenced by earlier AI systems, may amplify this effect—a recursive echo that risks flattening the richness of human expression. The study even raises the possibility of “model collapse,” where AI trained on AI-shaped data loses diversity and creativity, mirroring the linguistic sameness of the society that shaped it.
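The diversity-loss dynamic behind "model collapse" can be seen in a toy simulation; this is an illustrative sketch with invented numbers, not the study's method. Treat a "model" as a word-frequency distribution, and let each generation train on a finite sample drawn from the previous generation's output: rare words that miss the sample vanish for good, so the vocabulary can only shrink.

```python
import random
from collections import Counter

random.seed(0)

# Generation 0: a uniform distribution over 200 invented words.
vocab = [f"word{i}" for i in range(200)]
dist = {w: 1 / len(vocab) for w in vocab}

sizes = []  # vocabulary size at each generation
for generation in range(10):
    sizes.append(len(dist))
    # "Train" the next model on a finite corpus sampled from this one.
    words, weights = zip(*dist.items())
    sample = random.choices(words, weights=weights, k=100)
    counts = Counter(sample)
    # Refit: words absent from the sample are lost forever.
    dist = {w: c / len(sample) for w, c in counts.items()}

print(sizes)
```

Because each refit distribution is supported only on the words that were actually sampled, the vocabulary count is non-increasing across generations, which is the flattening effect the passage describes.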
But it’s not all bleak. The authors also point out that this feedback loop isn’t purely destructive—it could also be a new form of co-evolution between humans and machines. Just as past technologies expanded our modes of thought, AI might enrich our language in ways we don’t yet understand. The challenge is to stay aware of how it’s happening.
Because every time we accept an autocomplete, tap a “smart reply,” or approve a suggested edit, we’re not just saving time—we’re giving the algorithm a small vote of confidence. And in doing so, we allow it to nudge our collective voice a little further in its direction.
We taught machines how to speak. Now they’re teaching us how to sound. The question isn’t whether that’s good or bad—it’s how consciously we participate in it. Language has always evolved through the tools we use. But this time, the tool listens back.
If we want to keep our voices human, we’ll have to remember to write—and speak—like it still matters.
