Written By Deepti Ratnam | Published: Feb 24, 2026, 10:03 AM (IST)
The artificial intelligence race has grown more competitive and cutthroat in recent years, with companies such as OpenAI, Anthropic, and xAI adding pressure to an already crowded field. Against this backdrop, recent claims by Anthropic have sparked a new dispute. The company asserts that a number of Chinese AI firms abused its Claude chatbot to improve their own systems. The matter has also drawn a sharp rebuttal from Elon Musk, which has pulled even more attention to the controversy.
Anthropic stated that DeepSeek, Moonshot AI, and MiniMax carried out large-scale distillation attacks against its Claude model. According to the company, these firms created more than 24,000 fraudulent accounts and used them to generate over 16 million interactions with Claude.
Anthropic says the aim was to extract knowledge from its system, and that the responses gathered were then used to train and strengthen rival AI models. It has described the operation as industrial-scale and cautioned that such campaigns are becoming more sophisticated and faster-moving.
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax.
These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
— Anthropic (@AnthropicAI) February 23, 2026
Distillation is a well-known technique in artificial intelligence: a smaller model is trained on the outputs of a larger, more capable one. The approach cuts costs and improves efficiency, and many AI labs use it legitimately within their own systems.
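For readers curious about the mechanics, classic knowledge distillation trains a student to match a teacher's softened output distribution. The sketch below is illustrative only: the function names and toy logits are our own, and it does not represent any lab's actual pipeline. It shows the core objective, the KL divergence between temperature-scaled softmax outputs.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields a softer distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions,
    the quantity a student model minimizes during distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits already match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss it would train to reduce.
teacher = [4.0, 1.0, 0.5]
matched = distillation_loss(teacher, [4.0, 1.0, 0.5])
mismatched = distillation_loss(teacher, [0.5, 1.0, 4.0])
```

At scale, the "teacher outputs" would be millions of chatbot responses rather than toy logits, which is why account volume and query volume are central to Anthropic's claim.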
The controversy here is that one company allegedly used another company's model at scale, without consent. In that light, distillation resembles copying rather than optimization. Anthropic argues that the sheer magnitude of the activity points to a deliberate effort to mine Claude's core capabilities, including its coding skill, reasoning, and tool use.
Elon Musk responded strongly to Anthropic's accusations. In an X post, Musk alleged that Anthropic had itself used stolen training data in the past, citing legal settlements over data use, and questioned the company's credibility and its standing in the dispute.
His remarks shifted the focus to broader questions about data-sourcing practices across the AI sector.
Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact. https://t.co/EEtdsJQ1Op
— Elon Musk (@elonmusk) February 23, 2026
The allegations come at a sensitive moment in global technology policy. The US has recently lifted some restrictions on exporting chips to China, and such advanced chips are essential for training and running large AI models.
Anthropic argues that limiting access to advanced chips would also reduce the capacity to mount large-scale extraction attacks. The company raised security concerns as well, noting that models built without proper safety guardrails carry greater misuse risks.
The controversy underscores the rising tension in the global AI race and raises questions about fairness, regulation, and security in the development of artificial intelligence.