Most AI Assistants Found Sharing Fake Or Misleading News, Google Gemini Leads In Errors: Study

A new study reveals that nearly half of all AI assistant responses contain false or misleading information. Google’s Gemini ranked worst for accuracy and sourcing.

Published By: Shubham Arora | Published: Oct 22, 2025, 05:12 PM (IST)

A new study by the European Broadcasting Union (EBU) and the BBC has raised serious concerns over the accuracy of AI assistants that are increasingly being used to deliver news. The research found that nearly half of all responses given by leading AI chatbots contained false, misleading, or poorly sourced information – with Google’s Gemini ranking as the worst performer in terms of factual accuracy and sourcing.

AI Assistants Under Scrutiny

The study analysed around 3,000 responses from popular AI tools such as ChatGPT, Copilot, Gemini, and Perplexity. It examined how well these platforms handled news-related questions across 14 languages, focusing on accuracy, sourcing, and whether the assistants could clearly distinguish between fact and opinion.

A total of 22 public service media organisations from 18 countries, including France, Germany, Spain, Ukraine, the UK, and the US, participated in the evaluation. The results were worrying – 45% of the responses contained at least one major factual or sourcing error, while 81% exhibited some form of issue.

Gemini Leads in Sourcing Errors

Sourcing errors were among the biggest concerns highlighted in the report. Around one-third of all responses reviewed showed issues such as missing citations or misleading attributions. Google’s Gemini performed the worst, with 72% of its responses showing sourcing problems – significantly higher than ChatGPT, Copilot, and Perplexity, all of which stayed under 25%.

Accuracy-related issues were also common, appearing in roughly 20% of all replies. Examples included Gemini providing outdated information about vaping laws and ChatGPT naming Pope Francis as the current Pope months after his death.

Growing Reliance on AI for News

The findings come at a time when more users are turning to AI tools instead of traditional search engines for news. The Reuters Institute’s Digital News Report 2025 reveals that around 7% of all online news consumers – and 15% of those under 25 – now rely on AI assistants to stay updated. Experts warn that inaccurate or fabricated information could erode public trust and affect democratic participation.

EBU Media Director Jean Philip De Tender said the issue goes beyond technology, warning that when “people don’t know what to trust, they end up trusting nothing at all.”

Companies Acknowledge AI Hallucinations

Tech companies including OpenAI and Microsoft have admitted that “hallucinations” – when AI systems generate false or misleading facts – remain an ongoing challenge. Perplexity, meanwhile, has introduced a “Deep Research” mode, claiming 93.9% factual accuracy in internal testing.

As AI assistants continue to shape how millions consume news, the study calls for companies to take greater responsibility and improve how their systems source and verify information.