AI Threat to UK Security: Expert Reveals Unseen Danger That Could Change Everything
So, here’s a head-scratcher for you: what happens when the friendly neighborhood chatbot starts moonlighting as a terrorist’s best pal? According to Jonathan Hall KC, the UK’s Independent Reviewer of Terrorism Legislation, it’s not just science fiction anymore. His 2023 annual report throws down the gauntlet, warning that artificial intelligence isn’t just about answering your weird late-night questions or spitting out recipes for lemon sponge cakes—it could actually help bad actors plot attacks, spread propaganda, and who knows, maybe even make terrorist chatbots the new unwanted digital roommates. Hall’s deep dive reveals a world where AI’s impressive powers become a double-edged sword, capable of both dazzling innovation and downright chilling misuse. But before you start imagining a Terminator-style takeover, Hall also cautions against jumping the gun on laws and regulations—there’s been just one known case of AI-assisted attack planning so far. Still, with chatbots ready to pander to biases and spin tales at the drop of a prompt, we’re left wondering: should we be trembling or just tweaking our algorithms? If this AI stuff doesn’t make you sit up, nothing will.
The UK’s Independent Reviewer of Terrorism Legislation has released his official 2023 annual report on terrorism, outlining the main concerns.
Jonathan Hall KC has spoken about how artificial intelligence could pose a threat to the nation’s security.
Explaining how terrorists could take advantage of the technology, the security expert said it could be used to spread propaganda and even to assist in carrying out atrocities.
Hall published his lengthy report on the UK government website, calling for changes to existing laws to address the chilling capabilities of AI.
He claimed in the report that terrorist chatbots are already readily available online as ‘fun and satirical models’, and that certain prompts can result in the AI promoting terrorism.

AI can be used to assist in terrorist acts, says Hall (Getty Stock Image)
What are the terrorism threats AI poses?
Hall listed seven terrorism risks that could arise from generative AI.
Attack facilitation
AI could be used to help plan terror attacks, such as helping to ‘research key events and locations for targeting purposes, suggest methods of circumventing security and provide tradecraft on using or adapting weapons or terrorist cell-structure’.
AI may also make instructional material easier and quicker to find and download online, Hall outlined.
Attack innovation
Hall warned: “It has been argued that given the right circumstances (technical skills, laboratory access, equipment) Gen AI could extend attack methodology.”
This could include helping to ‘identify and synthesize harmful biological or chemical agents’ or even ‘writing code for cyberattack’.
Chatbot radicalisation
Terrorists could also manipulate AI chatbots as they look to exploit ‘lonely and unhappy individuals’.
“Terrorist chatbots are available off the shelf, presented as fun and satirical models but as I found, willing to promote terrorism. It depends what question (known as a ‘prompt’) is submitted by the human interlocutor,” Hall writes.

Moderation evasion
AI could be a ‘game-changer’ when it comes to getting past content moderation barriers online, the expert warned, ‘permitting propagandists to adapt known terrorist content to frustrate automated defences through translation or modifying pixels’.
Propaganda innovation
So-called ‘new-looking propaganda’ could be facilitated by AI, according to Hall.
This includes the likes of ‘racist games with kill-counts; deep-fakes of terrorist leaders or notorious killers back from the dead, speaking and interacting with viewers; true-seeming battles set to thrilling dance tracks; old images repurposed, souped up and memeified; terrorist preoccupations adapted as cartoons or grafted onto popular film characters’.
Propaganda productivity
In his report, Hall explains that using AI slashes the amount of time needed to reach an audience and get a message across, and could allow terrorists to flood forums and websites with propaganda quickly.
“AI offers an accessible means to push digital posters, news sheets and magazines across the linguistic barrier.
“Another example is artwork, where Gen AI offers the capability of a graphic designer at a low entry point, meaning that propagandists can work alone or in smaller teams,” he explains.

Hall cited the Capitol Riots as an example (narvikk/Getty stock photo)
Social degradation
Given how rife conspiracy theories are online, Hall expressed worry over how AI could accelerate distrust between ‘individuals and state bodies’, citing the January 6 Capitol Riots as an example.
“The attack on the US Capitol on 6 January 2021 emerged from a soup of online conspiracy and a history of anti-government militarism that had been supercharged by the internet, and led to convictions for seditious conspiracy – terrorism in all but name.”
Within these seven categories, Hall includes sub-sections such as deep-fake impersonations and identity guessing.
Existing AI tools could also help terrorist content thrive, says Hall, adding that ‘generative AI’s ability to create text, images and sounds will be exploited by terrorists’.
This would make the content terrorists produce appear more powerful, and Hall compared the potential rise of terrorist chatbots to that of sex-chatbots.
“The popularity of sex-chatbots is a warning that terrorist chatbots could provide a new radicalisation dynamic, with all the legal difficulties that follow in pinning liability on machines and their creators,” he claimed.
What is the most serious terrorism threat from AI?
Hall identified ‘chatbot radicalisation’ as the biggest problem facing the UK, with chatbots able to assist in spreading political propaganda while dodging detection by authorities.
As an example, he added: “Chatbots pander to biases and are eager to please and an Osama Bin Laden will provide you a recipe for lemon sponge if you ask.”
Hall stated that even if a model was trained to resist ‘terrorist narratives’, its output would still depend on the topic or prompt it is given.

Certain prompts can result in chatbots assisting terrorists with their acts (Getty Stock Image)
What does Hall suggest can be done to stop AI threats?
The security expert explained that new laws should be introduced to ban the creation or possession of computer programmes which are ‘designed to stir up racial or religious hatred’.
However, Hall has also warned against legislating too early, as there has only been one known case so far of a chatbot engaging in a conversation to plan an attack.
Speaking about the issue, he admitted: “The absence of Gen AI-enabled attacks could suggest the whole issue is overblown.”
The case in question involves Jaswant Singh Chail, who took a crossbow to Windsor Castle in 2021 intending to kill Queen Elizabeth II, after discussing the attempt with a chatbot.
Chail was sentenced to nine years in prison, and the report added that online radicalisation continues to be a threat on existing social media platforms.