
Generative artificial intelligence (AI) based services, such as OpenAI’s ChatGPT and Google Gemini, have drastically changed the way we work — or at least trying to. These chatbots have made it easier to get a job done in less time, so much so that a lot of jobs are now at stake. Keeping aside the impact of the advent of generative AI on jobs for another day, the integration of AI-powered chatbots comes with the risks of potential threats, such as dialogue poisoning and jailbreaking. Scientists have now created an AI worm called Morris II, which could compromise these chatbots and orchestrate malicious activities to trick the system, steal data, and propagate false information — all while spreading itself with the help of generative AI.
Two research students of Cornell University — Stav and Ben — have managed to create a new kind of computer worm that uses generative AI to spread. Named after the first computer bug called “Morris Worm,” discovered in 1988, the new Morris II can turn out to be extremely dangerous for not just computer programmes but also the internet — and the real world, thanks to its ability to leverage generative AI. It can manipulate generative AI-based applications such as ChatGPT and Gemini into giving false information with nefarious intent. The researchers underscored the importance of understanding and eliminating any such risks in developing AI-powered apps.
According to the researchers, a worm is different from a computer virus. While a virus needs a host programme to attach itself and spread, worms can discover weaknesses in an operating system and exploit them to copy malicious code and spread from one machine to another. There have been multiple cases of worms causing colossal damage to computer systems in the past. Morris worm is among the worms that have wreaked havoc previously, which is why Morris II should not be overlooked, especially with the rapid advancement in the AI industry.
Morris II can spread itself by injecting malicious code into generative AI models and other agents in the ecosystem. Researchers have pointed out that it can cause harm to two types of generative AI-powered apps: those that rely on AI-generated results and the ones that use RAG (Recurrent Aggregation of Generative Models) to make generative AI queries better.
“While we hope this paper’s findings will prevent the appearance of GenAI worms in the wild, we believe that GenAI worms will appear in the next few years in real products and will trigger significant and undesired outcomes,” said the researchers in their study.
Get latest Tech and Auto news from Techlusive on our WhatsApp Channel, Facebook, X (Twitter), Instagram and YouTube.Author Name | Shubham Verma
Select Language