How will the AI arms race impact cybersecurity?

By Danny Lopez, CEO of Glasswall.

With the EU AI Act now in force for several months, organisations will soon start to feel the weight of regulatory pressure around the development and deployment of AI. Despite Brexit, the Act will still apply to UK businesses with customers in the EU, and it forms part of a wider wave of legislation designed to ensure the technology can be controlled.

Closer to home, the previous UK government set out a ‘pro-innovation’ approach to regulating AI in a 2023 White Paper, proposing a sector-specific framework rather than a single overarching law. Given the recent change in administration, it remains to be seen how far this strategy will shift over the next few years.

Whichever way you look at it, AI regulation represents an enormous challenge. The list of exploits with the potential to impact AI design, development, deployment and maintenance is already long, and as more models come to market, it is certain to grow considerably. When incidents occur – as they inevitably will – organisations will need to work extremely hard to mitigate their impact. For some, this will prove a damaging and expensive process.

Growing risks

What will remain outside of regulatory control, however, is the scope AI gives threat actors to systematically increase the volume and sophistication of their attacks. For instance, AI is being used to design and deliver advanced phishing campaigns that mimic legitimate communication more convincingly than ever. AI tools also allow attackers to automate the exploitation of vulnerabilities, from password cracking to zero-day attacks, with minimal effort compared to pre-AI methods.

Looking beyond core cybersecurity issues, AI’s ability to generate highly convincing text, images and videos poses significant risks, and is routinely used to manipulate public opinion and carry out election interference. These risks are becoming increasingly clear, with one industry study finding that 80% of security stakeholders had either already logged AI-generated email attacks or strongly suspected their organisations had been targeted.

According to analysis published by the NCSC, the short-term cybersecurity threat posed by AI “comes from evolution and enhancement of existing tactics, techniques and procedures (TTPs).” Looking further ahead, the commoditisation of AI-enabled capabilities “will almost certainly make improved capability available to cyber crime and state actors.” This includes lowering the barrier to entry for non-technical individuals or groups, driving further growth in ransomware.

To give this some wider context, cyber attacks are already taking place on a mind-boggling scale, with the CEO of JPMorgan Chase, for example, quoted earlier this year saying, “There are people trying to hack into JPMorgan Chase 45 billion times a day.” Adding AI to the mix promises to take the problem to a new level.

Fighting back

On the other side of the technology arms race, however, AI is being implemented across the cybersecurity industry to significantly improve protection. For example, machine learning models trained on vast datasets can identify patterns and anomalies that indicate new attack vectors, such as polymorphic malware or advanced ransomware, which often evade signature-based technologies.
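
To make this concrete, here is a minimal sketch of the kind of unsupervised anomaly detection described above, using scikit-learn’s IsolationForest. The telemetry features, values and threshold are illustrative assumptions rather than a reference to any particular product.

```python
# A minimal sketch, assuming scikit-learn is available.
# Feature names and values are illustrative, not from any specific product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Stand-in telemetry per host-hour: [bytes_out, connection_count, failed_logins].
baseline = rng.normal(loc=[500.0, 20.0, 1.0], scale=[100.0, 5.0, 1.0], size=(1000, 3))
unusual = rng.normal(loc=[5000.0, 200.0, 30.0], scale=[500.0, 20.0, 5.0], size=(10, 3))

# Train only on normal traffic so the model learns what "typical" looks like.
detector = IsolationForest(contamination=0.01, random_state=7).fit(baseline)

# By scikit-learn convention, negative decision_function scores indicate anomalies.
flags = detector.decision_function(unusual) < 0
print(f"flagged {flags.sum()} of {len(unusual)} unusual samples")
```

The same pattern scales from toy arrays to real network or email telemetry; the hard part in production is feature engineering and keeping the false-positive rate low enough for analysts to trust the alerts.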

As well as detection, AI is helping to improve incident response by automatically triggering protocols such as system isolation and real-time attack neutralisation to minimise damage. AI is also democratising cybersecurity knowledge through virtual assistants that offer personalised advice, making expertise more easily accessible to non-technical users. Crucially, AI complements human oversight by providing security professionals with advanced tools and real-time insights, helping to create a more resilient cybersecurity framework that adapts to evolving threats.
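
By way of illustration, the sketch below shows the general shape of such an automated response rule. The isolate_host() helper, the Alert fields and the severity threshold are all hypothetical stand-ins for a real EDR or SOAR integration.

```python
# A hypothetical sketch of an automated containment rule; every name here
# is illustrative, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    severity: float  # model-assigned confidence score in [0, 1]
    category: str

ISOLATION_THRESHOLD = 0.9  # illustrative cut-off for automatic containment

def isolate_host(host: str) -> None:
    # Placeholder: a real deployment would call an EDR or NAC API here.
    print(f"[action] network-isolating {host}")

def respond(alert: Alert) -> None:
    """Auto-isolate only high-confidence detections; queue the rest for humans."""
    if alert.severity >= ISOLATION_THRESHOLD and alert.category == "ransomware":
        isolate_host(alert.host)
    else:
        print(f"[queue] {alert.host} referred for analyst review")

respond(Alert(host="srv-042", severity=0.97, category="ransomware"))
respond(Alert(host="ws-118", severity=0.55, category="phishing"))
```

The design choice worth noting is the fallback: automation handles only the clear-cut cases, while lower-confidence alerts are routed to analysts, which is how automated response preserves the human oversight described above.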

So, how is this playing out in practice? A recent PwC report revealed that over two-thirds of organisations plan to use AI for cyber defence in the next year, while half already use the technology for cyber risk detection and mitigation. In fact, the cybersecurity industry and its customers are ploughing investment into advanced AI-powered technologies designed to stay ahead of the risks. According to one estimate, the global market for AI in cybersecurity will grow from $22 billion last year to more than $60 billion by 2028.

Over the longer term, a joined-up approach and deeper cooperation at government level and across the security ecosystem will be vital if cybercrime and nation-state attacks are to be kept in check. What is certain is that the industry will continue to innovate to give organisations the tools they need to approach the future with confidence.
