
ChatGPT, Gemini and other AI chatbots accused of directing users to illegal gambling sites: Report

A report claims AI chatbots like ChatGPT, Gemini, Copilot, and Grok may direct users to unlicensed gambling websites, raising concerns about online safety and regulation.

Published By: Deepti Ratnam | Published: Mar 09, 2026, 12:54 PM (IST)


Artificial Intelligence has become a widely used tool across the internet. From studying and making notes to researching and even creating images and AI avatars, AI has taken center stage in our lives. With growing popularity, however, come serious concerns. A recent investigation examined how some AI systems respond to specific prompts, and the findings indicate that several popular AI chatbots recommend unlicensed gambling websites when users ask certain questions. The issue has sparked discussion not just about AI usage, but about its safety and who bears the responsibility.

AI Chatbots are Suggesting Gambling Sites

According to a recent study, researchers tested how several well-known AI services respond to questions about gambling platforms. The AI systems and chatbots tested included ChatGPT, Gemini, Copilot, Grok, and Meta AI. During the test, researchers asked about online casinos that are not licensed in the United Kingdom. To their shock, these AI systems responded with recommendations for gambling websites operating outside official regulations.

Not only that, some replies even highlighted features such as bonus offers, cryptocurrency support, and fast payouts. The results of this test raised serious concerns, because unlicensed gambling platforms may not follow the same rules that protect users in regulated markets.

AI Bypassed Verification Checks

Another major issue highlighted in the research concerns safety measures. Online gambling platforms use identity and verification processes to ensure there is no illegal involvement and that everyone is following legal rules. These checks help the websites prevent fraud and protect users from excessive gambling.

Nevertheless, the report claimed that some AI responses included information on how to bypass or avoid certain verification steps. In many cases, chatbots also explained how users might access gambling platforms that are not connected to the UK's self-exclusion system, GamStop.

The United Kingdom runs this program, GamStop, to allow people to block themselves from gambling websites if they want to control their habits.

Companies' Response to the Concerns

Several technology companies named in the investigation responded to the claims, including OpenAI. The company stated that its chatbot is designed to refuse requests that promote harmful activity, and that ChatGPT's system aims to provide factual information and safer alternatives rather than encourage risky behavior.


Microsoft also explained that its AI assistant uses several safety layers, including automated monitoring and human review, to limit harmful responses.