The rising threat of AI weaponisation in cybersecurity

AI's accelerated role in creating cyber threats necessitates new security measures.

This week, Anthropic revealed a concerning development: hackers have weaponised its technology in a series of sophisticated cyber-attacks. With artificial intelligence (AI) now playing a critical role in coding, the time required to exploit cybersecurity vulnerabilities is shrinking at an alarming pace.

Kevin Curran, IEEE senior member and cybersecurity professor at Ulster University, highlights the methods attackers employ when using large language models (LLMs) to uncover flaws and expedite attacks. He emphasises the need for organisations to partner robust security practices with AI-specific policies amidst this changing landscape.

Curran explains, "This shows just how quickly AI is changing the threat landscape. It is already speeding up the process of turning proof-of-concepts – often shared for research or testing – into weaponised tools, shrinking the gap between disclosure and attack. An attacker could take a PoC exploit from GitHub, feed it into a large language model and quickly get suggestions on how to improve it, adapt it to avoid detection or customise it for a specific environment. That becomes particularly dangerous when the flaw is in widely used software, where PoCs are public but many systems are still unpatched."

"We're already seeing hackers use LLMs to identify weaknesses and refine exploits by automating tasks like code completion, bug hunting or even generating malicious payloads designed for particular systems. They can describe malicious behaviour in plain language and receive working scripts in return. While this activity is monitored and blocked on many legitimate platforms, determined attackers can bypass safeguards, for example by running local models without restrictions."

Curran concludes, "The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks. At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats. That’s why security strategies can’t just rely on traditional controls. Organisations need AI-specific defences, clear policy frameworks and strong human oversight to avoid becoming dependent on the same technology that adversaries are learning to weaponise."

As AI continues to evolve, so too does its potential for misuse. Countering that threat will demand AI-aware defences and strategic planning to keep systems and data secure.
