
U.K. top defense officials worry about AI creating teenage terrorists

Authorities want to catch up with AI developments before things get out of hand.

Edited By: Manik Berry

Published: Jun 04, 2023, 01:52 PM IST

AI terrorist featured image

Story Highlights

  • MI5 and the Alan Turing Institute have come together to raise concerns about AI terrorism.
  • Even general users can make an AI like ChatGPT go rogue with a couple of simple commands.
  • U.K. Prime Minister Rishi Sunak is likely to raise this concern with U.S. President Joe Biden.

Artificial intelligence is a double-edged sword that could do as much harm as good. The U.K.’s MI5 and the Alan Turing Institute have come together to raise concerns about AI threatening national security. Top officials say that AI creators and designers need to proactively keep potential terrorist misuse in mind when designing a program.

Jonathan Hall KC, one of the panel members, said: “Too much AI development focused on the potential positives of the technology while neglecting to consider how terrorists might use it to carry out attacks.” He is also concerned about AI chatbots being able to manipulate already vulnerable people into committing acts of terrorism.

AI chatbots creating terrorists

Image: The “DAN” jailbreak for ChatGPT. You can turn ChatGPT into a bully by jailbreaking it.

According to a report from The Guardian, AI’s ability to groom children into terrorism is a growing problem. Experts also warn that AI could advance so far as to threaten human survival. The report adds that the U.K.’s Prime Minister, Rishi Sunak, will raise the issue with U.S. President Joe Biden during his visit to the U.S.

Companies like Microsoft, OpenAI, and Google have their own sets of responsible AI principles. However, there seem to be simple ways to bypass these. For instance, you can enter a couple of simple commands and turn ChatGPT into a bullying tool. If everyday users can do this, then trained terrorists can certainly use it in far more creative ways to influence young minds.

Talking about AI content and data moderation, Hall added, “How many are actually involved when they say they’ve got guardrails in place? Who is checking the guardrails? If you’ve got a two-man company, how much time are they devoting to public safety? Probably little or nothing”.

The Guardian also mentions the recent case of nineteen-year-old Matthew King, who has been jailed for life for plotting a terror attack. King was influenced and radicalised after spending time online.

However, the authorities don’t fear terrorists misusing AI as much as they fear a rogue AI itself. A jailbroken ChatGPT or an ill-regulated tool could persuade people to commit an act of terror. Governments around the world are already working on responsible AI principles.

For instance, India’s NITI Aayog has published research on responsible AI. However, AI developments are happening so fast that regulatory bodies have yet to catch up with them.


