
Your chats with ChatGPT, Gemini are prone to hacking

Scientists have discovered a flaw in the encryption process of AI chatbots that leaves your chats prone to eavesdropping through a side-channel attack.

Published By: Shubham Verma

Published: Mar 18, 2024, 02:31 PM IST | Updated: Mar 18, 2024, 07:42 PM IST


Story Highlights

  • A new study has highlighted a flaw in the encryption process of chatbots.
  • The tokenisation of data creates a side-channel that could be exploited.
  • This could allow hackers to spy on the conversations with chatbots.

You should be extremely careful about sharing private details with an AI chatbot such as OpenAI’s ChatGPT or Google’s Gemini. A new study has underscored the importance of limiting what you reveal when conversing with an AI chatbot, as hackers may be lurking, waiting to intercept your private chats. Malicious actors can use a technique known as a side-channel attack to spy on conversations with AI chatbots and infer their contents.

AI is not secure

Researchers at Israel’s Ben-Gurion University have cautioned AI chatbot users about a potential vulnerability in generative AI services such as ChatGPT. The vulnerability allows malicious actors on the same Wi-Fi or LAN as a victim, such as in a coffee shop, or anyone else positioned to observe the internet traffic, to infer the contents of private chats without breaking the encryption itself. In a side-channel attack, a third party passively deduces chat data from metadata or other indirect exposures. The researchers said that while this kind of data leakage can affect any technology, AI chatbots are particularly vulnerable because of the way their traffic is encrypted and transmitted.

How dangerous is this attack?

In their research, the scientists note that side-channel attacks are less invasive than other forms of hacking. A report by Ars Technica, citing the researchers, said hackers could infer the content of a conversation with an AI chatbot with about 55 percent accuracy. Although far from perfect, this form of attack can give malicious actors a rough picture of the sensitive information a user shares with AI chatbots.

“The attack is passive and can happen without OpenAI or their client’s knowledge,” said Yisroel Mirsky, the head of the Offensive AI Research Lab at the university. “OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

The researchers explained how ChatGPT’s encryption process leaves room for a side channel that emerges from tokens, the small units of text (whole words or word fragments) that large language models use to process and generate language. Because chatbots stream their replies token by token, anyone with nefarious intent who can observe this side channel can infer the content of responses from the lengths of the individual tokens.
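To illustrate the general idea, here is a minimal sketch, not taken from the study itself, of why token-by-token streaming can leak information even when traffic is encrypted. The `stream_packets` helper and the sample response are hypothetical; the assumption is that each token travels in its own encrypted record and that the encryption preserves the payload’s length:

```python
# Hypothetical sketch of a token-length side channel.
# Assumption: each streamed token is sent in a separate encrypted
# record, and encryption (e.g. a stream cipher over TLS) does not
# pad the payload, so ciphertext size equals plaintext size.

def stream_packets(tokens):
    """Simulate a chatbot streaming one token per encrypted packet.
    An eavesdropper cannot read the contents, but can observe the
    size of each packet on the wire."""
    return [len(token.encode("utf-8")) for token in tokens]

# A hypothetical streamed reply, split into tokens.
response = ["The", " capital", " of", " France", " is", " Paris", "."]

# What a passive observer on the network actually sees:
observed_sizes = stream_packets(response)
print(observed_sizes)  # [3, 8, 3, 7, 3, 6, 1]
```

Each observed size corresponds to the byte length of one token, so the eavesdropper recovers the sequence of token lengths without decrypting anything; the researchers’ insight was that a language model can then be used to guess plausible sentences matching that length pattern. Padding tokens to a fixed size, or batching several tokens per packet, would close this particular channel.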


How can it impact users?

According to the researchers, who said they have informed the chatbot makers, this newfound vulnerability in AI chatbots could be especially harmful to users who ask chatbots about topics that are banned in their country.


