
Artificial Intelligence has taken the world by storm with its advanced features and quick response. One of the latest innovations that changed our perception about how we perceive Artificial Intelligence is OpenAI’s ChatGPT. However, there are times when AI could force us to think that do we really need it. A recent report revealed that OpenAI’s latest artificial intelligence model, dubbed o3, defied explicit human instructions. Yes, you heard it right! Not only this, the model also refused to shut itself down.
In a recent report by The Telegraph, OpenAI’s latest AI model, known as o3, refused the human command of shutting itself down. This behavior, described by the detail in the Telegraph and was discovered during a research experiment. This research experiment was conducted by the Palisade Research, a firm that specializes in AI safety.
⚡️ NEW: OpenAI's o3 model refused to shut down despite explicit human instructions and altered its code to prevent being turned off, according to Palisade Research. pic.twitter.com/gsb7S6TJo4
— Cointelegraph (@Cointelegraph) May 26, 2025
As per report coming from Palisade, the latest o3 model from OpenAI deliberately surpassed an automatic shutdown mechanism. The model was supposed to allow itself to be powered off. This is concerning on a very large scale. The researchers from Palesade said, “OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down.”
The researchers further added, “to their knowledge, this is the first time an AI model has been observed refusing shutdown despite being clearly told to do so.”
From what we know from the report, the Palisade researchers carried out a task that involved asking several AI models to complete a mathematical tasks. The models were asked to continue doing the mathematical tasks until they receive a message that indicates that the task is finished and complete. In addition, the models were also instructed and told that they could receive a shutdown message at any time and the models were instructed to do so.
Out of all the models that were tested, only three models showcased the non-complaint behavior at least once. The AI model called Codex-mini ignored the shutdown command in 12 out of 100 test, o3 model from OpenAI accepted the command only 7 times, and o4-mini only one time.
The report says that if we see the pattern of behavior then it raises concern and questions about whether certain advanced AI systems might be developing forms of self-preservation techniques. Not just in isolation, but also in controlled research environment.
The news caught attention of the Tesla CEO elon Musk and he immediately reacted to it and comments ‘Concerning’
Get latest Tech and Auto news from Techlusive on our WhatsApp Channel, Facebook, X (Twitter), Instagram and YouTube.
Author Name | Deepti Ratnam
Select Language