
OpenAI Admits GPT-5 Hallucinates: Why Even Advanced AI Can Give Confidently Wrong Answers

OpenAI admits GPT-5 can hallucinate, producing confidently wrong answers. Learn why even advanced AI models make mistakes and how future updates aim to improve accuracy.

Published By: Deepti Ratnam

Published: Sep 08, 2025, 12:35 PM IST | Updated: Sep 08, 2025, 04:08 PM IST


Artificial intelligence continues to transform how people work, with advanced language models taking on increasingly complex tasks. Nevertheless, even the most advanced systems are not infallible. In a recent post, OpenAI admits that GPT-5, its latest and most advanced language model, still experiences ‘hallucinations.’

Here’s What Hallucinations Mean in the GPT-5 Context

By ‘hallucinations,’ OpenAI means that GPT-5 can produce statements that sound plausible but are factually incorrect. The company published a blog post detailing why the issue persists and how it affects the model’s reliability.

GPT-5 Still Hallucinates

The AI tech giant says that hallucinations occur when a language model generates information that appears credible but is false. The company says:

“ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations, especially when reasoning⁠, but they still occur. Hallucinations remain a fundamental challenge for all large language models, but we are working hard to further reduce them.”

OpenAI explained that even simple questions can trigger these errors. For example, when asked about an author’s dissertation title or birth date, earlier models sometimes offered multiple, conflicting answers. This demonstrates that AI can confidently present incorrect information, leading to potential misunderstandings.

Why GPT-5 Is Still Hallucinating

OpenAI says one of the main reasons GPT-5 still hallucinates lies in how AI models are trained and evaluated. Current benchmarks reward a model for giving an answer even when it is unsure whether that answer is correct. Rather than acknowledging uncertainty, the model simply answers.

“Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say ‘I don’t know’,” says OpenAI.

The tech giant further notes that this creates an incentive for models to guess, which sometimes results in confidently wrong outputs.
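To see the incentive OpenAI describes in concrete terms, here is a minimal sketch (not OpenAI’s evaluation code) of a hypothetical four-option multiple-choice benchmark scored only on accuracy. The numbers, including the assumed 60 per cent share of questions the model genuinely “knows,” are purely illustrative.

```python
# Toy illustration (not OpenAI's evaluation code): expected score on a
# four-option multiple-choice benchmark graded only on accuracy.

def expected_score(p_known: float, guess_when_unsure: bool, n_options: int = 4) -> float:
    """Expected accuracy if the model genuinely knows a fraction p_known of the answers."""
    score = p_known                          # known answers are marked correct
    if guess_when_unsure:
        score += (1 - p_known) / n_options   # a blind guess is sometimes lucky
    # saying "I don't know" on the rest adds nothing under accuracy-only grading
    return score

p = 0.6  # hypothetical share of questions the model actually knows
print(f"Always guess:      {expected_score(p, True):.2f}")   # 0.70
print(f"Admit uncertainty: {expected_score(p, False):.2f}")  # 0.60
```

Under accuracy-only grading, the guessing strategy always scores at least as well, which is exactly the incentive OpenAI says pushes models toward confidently wrong answers instead of admitting uncertainty.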

The deeper reason AI hallucinates lies in how it is trained. Language models learn by predicting the next word across huge amounts of text, without any check on whether the information is true. They pick up grammar and spelling well, but low-frequency facts such as birthdays or specific dissertation titles are much harder to get right. This is why even GPT-5 can sometimes give wrong answers with complete confidence.
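As a rough illustration of that training setup, the toy predictor below (a simple word-frequency model, nothing like GPT-5’s actual architecture) continues text with whichever word it has seen most often, with no notion of whether the resulting statement is true. The tiny corpus and birth years are made up.

```python
# Toy next-word predictor (nothing like GPT-5): it continues text with the word
# it has seen most often, with no check on whether the statement is true.
from collections import Counter, defaultdict

corpus = [
    "the author was born in 1975",
    "the author was born in 1975",
    "the author was born in 1980",  # suppose this is the true year, seen less often
]

counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent continuation, regardless of facts."""
    return counts[word].most_common(1)[0][0]

print(predict_next("in"))  # prints "1975" purely because it appeared more often
```

Common patterns such as grammar dominate the statistics, while rare one-off facts are exactly where this kind of frequency-driven prediction goes wrong.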
