The rising threat of AI weaponisation in cybersecurity

AI's accelerated role in creating cyber threats necessitates new security measures.

This week, Anthropic revealed a concerning development: hackers have weaponised its technology for a series of sophisticated cyber-attacks. With artificial intelligence (AI) now playing a critical role in coding, the time needed to exploit cybersecurity vulnerabilities is shrinking at an alarming pace.

Kevin Curran, IEEE senior member and cybersecurity professor at Ulster University, highlights the methods attackers employ when using large language models (LLMs) to uncover flaws and expedite attacks. He emphasises the need for organisations to pair robust security practices with AI-specific policies amid this changing landscape.

Curran explains, "This shows just how quickly AI is changing the threat landscape. It is already speeding up the process of turning proof-of-concepts – often shared for research or testing – into weaponised tools, shrinking the gap between disclosure and attack. An attacker could take a PoC exploit from GitHub, feed it into a large language model and quickly get suggestions on how to improve it, adapt it to avoid detection or customise it for a specific environment. That becomes particularly dangerous when the flaw is in widely used software, where PoCs are public but many systems are still unpatched."

He continues, "We're already seeing hackers use LLMs to identify weaknesses and refine exploits by automating tasks like code completion, bug hunting or even generating malicious payloads designed for particular systems. They can describe malicious behaviour in plain language and receive working scripts in return. While this activity is monitored and blocked on many legitimate platforms, determined attackers can bypass safeguards, for example by running local models without restrictions."

Curran concludes, "The bigger issue is accessibility. Innovation has made it easier than ever to create and adapt software, which means even relatively low-skilled actors can now launch sophisticated attacks. At the same time, we might see nation-states using generative AI for disinformation, information warfare and advanced persistent threats. That’s why security strategies can’t just rely on traditional controls. Organisations need AI-specific defences, clear policy frameworks and strong human oversight to avoid becoming dependent on the same technology that adversaries are learning to weaponise."
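
As a purely illustrative sketch of what pairing an AI-specific policy with human oversight might look like, the hypothetical Python snippet below screens prompts bound for an organisation's internal LLM gateway against a blocklist and escalates suspicious requests to a human reviewer rather than the model. The gateway, the example patterns and the function names are assumptions for illustration, not part of any specific product mentioned in this article.

```python
import re

# Hypothetical illustration only: a minimal pre-submission screen that an
# organisation might place in front of an internal LLM gateway, pairing an
# AI-specific policy (blocked request patterns) with human oversight
# (flagged prompts are routed to a reviewer instead of the model).

BLOCKED_PATTERNS = [
    r"\bbypass (?:edr|antivirus|detection)\b",
    r"\bweaponi[sz]e\b",
    r"\bmalicious payload\b",
]

def screen_prompt(prompt: str) -> dict:
    """Return a routing decision for a prompt bound for an internal LLM."""
    hits = [p for p in BLOCKED_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    if hits:
        # Policy violation: do not forward to the model; escalate to a human.
        return {"action": "escalate_to_reviewer", "matched": hits}
    return {"action": "forward_to_model", "matched": []}

if __name__ == "__main__":
    print(screen_prompt("Summarise this week's patch notes"))
    print(screen_prompt("Adapt this script to bypass detection on Windows hosts"))
```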

As AI continues to evolve, so too does its potential for misuse in cyber-attacks. Countering that threat will require innovative defences and strategic thinking to keep digital systems secure.
