Written by Divya | Published: Mar 13, 2026, 07:57 PM (IST)
Study Finds AI Chatbots Could Be Standardizing Human Writing and Ideas
Almost everyone online now knows about AI, thanks to popular tools like ChatGPT, Grok, and Gemini. These tools have become part of day-to-day life and work: many people use them to draft emails, refine articles, or simply make their writing sound better. But researchers now say this growing dependence on AI may also be changing how humans express themselves.
A new research paper published in the Cell Press journal Trends in Cognitive Sciences suggests that the increasing use of large language models (LLMs) could slowly make the way people write and think more similar. In simple terms, if millions of people rely on the same AI systems for help, their words, ideas, and reasoning styles may start to look alike.
The study comes from researchers at the University of Southern California. The team reviewed more than 130 previous studies from fields such as psychology, linguistics, and computer science to understand how AI tools influence human behaviour and creativity.
Their findings suggest that while AI models are trained on huge amounts of human-created data, the responses they generate are often less diverse than real human thinking.
That’s because these systems are built to recognise patterns in data and produce answers that are statistically most likely. As a result, they often repeat familiar structures and common viewpoints rather than exploring unusual or unexpected ideas.
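The pattern-repeating behaviour described above can be illustrated with a toy simulation. This is only a sketch: the word list and probabilities are invented for illustration, and real LLM decoding is far more complex. The point is that always choosing the statistically most likely continuation (greedy decoding) collapses output to the same familiar answer, while sampling preserves some variety.

```python
import random
from collections import Counter

# Invented toy distribution over possible next words; in a real model
# a few common continuations tend to carry most of the probability mass.
NEXT_WORD_PROBS = {
    "delve": 0.40,
    "explore": 0.30,
    "examine": 0.15,
    "interrogate": 0.10,
    "excavate": 0.05,
}

def greedy_pick(probs):
    """Return the single most likely word: the 'statistically most
    likely' answer the article describes."""
    return max(probs, key=probs.get)

def sample_pick(probs, rng):
    """Sample a word in proportion to its probability."""
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
greedy_outputs = Counter(greedy_pick(NEXT_WORD_PROBS) for _ in range(1000))
sampled_outputs = Counter(sample_pick(NEXT_WORD_PROBS, rng) for _ in range(1000))

# Greedy decoding produces one word 1000 times; sampling spreads
# the 1000 picks across several distinct words.
print(len(greedy_outputs))
print(len(sampled_outputs))
```

If millions of writers accept the same "most likely" suggestion, the effect compounds: everyone converges on the one word a greedy picker would choose.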
Another reason, researchers say, is the type of data used to train many AI systems. Large language models often rely heavily on data from dominant languages and cultures, especially from Western countries.
Because of this, their outputs may reflect a limited range of perspectives. Computer scientist Zhivar Sourati, one of the authors of the paper, explained that when people use the same AI tools to polish their writing, their personal style can slowly disappear.
For example, someone might rewrite a sentence simply because the AI suggests a “better” version. Over time, those suggestions can make different people’s writing sound more alike.
The researchers say the concern goes beyond writing style. If people start relying heavily on AI suggestions, it could slowly shape what society considers to be the “right” way to express ideas or reason through problems.
Over time, that could reduce what experts call "cognitive diversity": the natural differences in how people think, communicate, and solve problems.
To prevent that, the researchers say AI developers should train models on more diverse languages, cultures, and viewpoints. That way, AI tools can help people think better without making everyone sound the same.