Why goblins and gremlins suddenly showed up in ChatGPT replies

ChatGPT users spotted repeated goblin references in replies. OpenAI later explained it was linked to a personality setting and training patterns.

Published By: Shubham Arora | Published: May 02, 2026, 09:33 AM (IST)

If you've been using ChatGPT lately, you might have noticed something a bit strange. Out of nowhere, it started mentioning goblins, gremlins and similar creatures in its replies. Not every time, but often enough that you'd pause and take note.

At first, it just felt like something was off, like a random glitch or a weird reply here and there. But once more users started pointing it out, it was clear this wasn't happening at random. Even OpenAI acknowledged the pattern and looked into what was causing it.

When people started noticing it

This started becoming noticeable after a model update, when responses began to feel a bit more playful than usual. Along with that, certain words kept showing up more often than expected.

According to OpenAI's own findings, mentions of "goblin" went up by around 175 percent, while "gremlin" also saw a noticeable jump. These weren't huge numbers in absolute terms, but the repetition made the pattern hard to miss.

What made it stand out was where these words were showing up. Sometimes it made sense, but other times it didn't really fit the context at all.

It wasn't a bug

The first assumption was that something had gone wrong technically. But the explanation turned out to be different.

The behaviour was linked to a personality setting called "Nerdy". The idea behind this mode was to make responses feel more playful and less serious, so the system encouraged creative language and metaphors.

During training, responses that included quirky elements like these creatures ended up getting slightly better feedback. Over time, the model picked up on that pattern and leaned into it.

So instead of being a mistake, it was something the system learned on its own.

How it spread everywhere

What's interesting is that this didn't stay limited to that one personality mode. Even outside of it, similar responses started appearing.

This happens because of how these models are trained. Once a model picks up a certain way of responding during training, it can start repeating it elsewhere too, even in contexts where it doesn't really fit.

OpenAI found that most of these "goblin" references were coming from the Nerdy setting, even though that mode wasn't used all the time. The behaviour slowly carried over into general responses.

What was changed

Once the pattern became clear, OpenAI made a few changes. The Nerdy personality was removed, and the training signals that encouraged this kind of language were adjusted.

Similar patterns were filtered out of the training data so these references wouldn't appear unnecessarily. Even so, some traces continued to show up in later versions, mainly because the training process had already moved ahead before the issue was fully understood.

Why this stood out

This wasn't a major problem, but it did highlight how small changes during training can affect how a model responds. Something as simple as encouraging a slightly playful tone ended up becoming a repeated pattern. And once the model got into that habit, the habit didn't go away immediately.

It also explains why sometimes responses feel slightly different after updates, even if nothing obvious has changed.
