A wave of recent high-profile cyber incidents has underscored the UK’s persistent vulnerability to increasingly sophisticated digital threats. This risk is accelerating as artificial intelligence becomes more tightly woven into the fabric of everyday business operations. From enabling innovation to sharpening decision-making, AI now plays a pivotal role in how organisations generate value and stay ahead. However, the advantages it offers are accompanied by emerging risks that many organisations are still unprepared to tackle.
New findings from CyberArk reveal that AI represents a multifaceted “triple threat”. It is being weaponised by attackers, deployed as a tool for defence, and - most concerningly - introducing critical new weaknesses in security. As this complex risk environment continues to evolve, identity security must form the foundation of any AI approach, serving as a crucial pillar of organisational resilience in the years ahead.
How AI is evolving known threats

AI has raised the bar for traditional attack methods. Phishing, which remains the most common entry point for identity breaches, has evolved beyond poorly worded emails to sophisticated scams that use AI-generated deepfakes, cloned voices and authentic-looking messages. Nearly 70% of UK organisations fell victim to successful phishing attacks last year, with more than a third reporting multiple incidents. This shows that even robust training and technical safeguards can be circumvented when attackers use AI to mimic trusted contacts and exploit human psychology.
It is no longer enough to assume that conventional perimeter defences can stop such threats. Organisations must adapt by layering in stronger identity verification processes and building a culture where suspicious activity is flagged and investigated without hesitation.
The defensive potential of AI

While AI is strengthening attackers’ capabilities, it is also transforming how defenders operate. Nearly nine in ten UK organisations now use AI and large language models to monitor network behaviour, identify emerging threats and automate repetitive tasks that previously consumed hours of manual effort. In many security operations centres, AI has become an essential force multiplier that allows small teams to handle a vast and growing workload.
Almost half of organisations expect AI to be the biggest driver of cybersecurity spending in the coming year. This reflects a growing recognition that human analysts alone cannot keep up with the scale and speed of modern attacks. However, AI-powered defence must be deployed responsibly. Over-reliance without sufficient human oversight can lead to blind spots and false confidence. Security teams must ensure AI tools are trained on high-quality data, tested rigorously, and reviewed regularly to avoid drift or unexpected bias.
AI is opening new doors for attackers

The third element of the triple threat is the rapid growth in machine identities and AI agents. As employees embrace new AI tools to boost productivity, the number of non-human accounts accessing critical data has surged, now outnumbering human users by a ratio of 100 to one. Many of these machine identities have elevated privileges but operate with minimal governance. Weak credentials, shared secrets and inconsistent lifecycle management create opportunities for attackers to compromise systems with little resistance.
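To make the lifecycle gap concrete, the kind of check a governance team might run over an inventory of machine identities can be sketched in a few lines of Python. This is an illustrative sketch only: the field names, the 90-day rotation policy and the inventory shape are assumptions, not drawn from any particular product.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rotation policy: secrets older than 90 days are overdue.
ROTATION_MAX_AGE = timedelta(days=90)

def needs_rotation(last_rotated: datetime, now: datetime) -> bool:
    """Flag a credential whose age exceeds the rotation policy."""
    return now - last_rotated > ROTATION_MAX_AGE

def stale_identities(inventory: dict[str, datetime], now: datetime) -> list[str]:
    """Return the machine identities whose secrets are overdue for rotation."""
    return [name for name, rotated in inventory.items()
            if needs_rotation(rotated, now)]
```

Even a simple report like this makes ungoverned, long-lived secrets visible, which is the first step toward consistent lifecycle management.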
Shadow AI is compounding this challenge. Research indicates that over a third of employees admit to using unauthorised AI applications, often to automate tasks or generate content quickly. While the productivity gains are real, the security consequences are significant. Unapproved tools can process confidential data without proper safeguards, leaving organisations exposed to data leaks, regulatory non-compliance and reputational damage.
Addressing this risk requires more than technical controls alone. Organisations should establish clear policies on acceptable AI use, educate staff on the risks of bypassing security, and provide approved, secure alternatives that meet business needs without creating hidden vulnerabilities.
Building AI strategies around identity security

Securing AI-driven organisations requires that identity security be integrated into every layer of the digital strategy. This involves gaining real-time visibility over all identities - whether human, machine, or AI agent - enforcing least privilege principles consistently, and continuously monitoring for any unusual access activity that might indicate a security breach.
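The two controls described above - least privilege and monitoring for unusual access - reduce to simple questions in code: is this request within the identity's granted scopes, and has the identity touched anything outside its normal baseline? The sketch below illustrates both checks; the scope strings and baseline model are hypothetical, not a specific vendor's API.

```python
def is_authorized(granted_scopes: set[str], requested: str) -> bool:
    """Least privilege: permit only what was explicitly granted."""
    return requested in granted_scopes

def flag_unusual(access_log: list[str], baseline: set[str]) -> list[str]:
    """Surface resources accessed outside the identity's normal baseline."""
    return sorted(set(access_log) - baseline)
```

In practice the baseline would be learned from historical behaviour rather than hand-coded, but the principle is the same: deviations are flagged for investigation rather than silently allowed.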
Leading organisations are already evolving their identity and access management frameworks to address the specific challenges posed by AI. This includes implementing just-in-time access controls for machine identities, tracking privilege escalation carefully, and ensuring AI agents are subject to the same stringent oversight as human users.
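Just-in-time access for machine identities, mentioned above, replaces standing privileges with short-lived grants that expire automatically. A minimal sketch of the idea follows; the grant structure, the 15-minute default TTL and all names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class JITGrant:
    identity: str
    scope: str
    expires_at: datetime

def issue_grant(identity: str, scope: str, now: datetime,
                ttl_minutes: int = 15) -> JITGrant:
    """Issue a short-lived grant instead of a standing privilege."""
    return JITGrant(identity, scope, now + timedelta(minutes=ttl_minutes))

def is_valid(grant: JITGrant, now: datetime) -> bool:
    """A grant is usable only until it expires; no revocation step needed."""
    return now < grant.expires_at
```

Because the privilege evaporates on its own, a stolen credential is worth minutes rather than months - the same reasoning that motivates applying this control to AI agents as well as human users.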
AI holds significant promise for organisations prepared to adopt it responsibly, but without strong identity security measures, that promise can quickly turn into a liability. Success will favour those that recognise resilience as fundamental, not optional, to sustained growth and innovation.
In an era where adversaries have equal access to AI, one principle remains clear: securing AI starts and finishes with securing identity.