AI impersonation is now the hardest cyberattack vector to defend against

Advanced forms of social engineering are on the rise, though obvious gaps like weak passwords are becoming easier to plug.


AI impersonation is now the hardest vector for cybersecurity professionals to protect companies against, according to Teleport’s 2024 State of Infrastructure Access Security Report. The study, which surveyed 250 senior US and UK decision-makers, shows that social engineering remains one of the top tactics cybercriminals use to install malware and steal sensitive data, with advances in AI and deepfake technology further fueling the effectiveness of phishing scams.

When asked to rank the difficulty of defending against each attack vector, respondents put AI impersonation at the top, with 52% citing it as a difficult vector. The findings reflect a changing social engineering landscape, with threat actors employing more advanced phishing tactics that target credentials. Criminals have recently built infamous tools such as WormGPT, a hacking chatbot sold as a service, to craft more convincing phishing campaigns and deepfake impersonations.

“The reason AI impersonation from cyber criminals is so difficult to defend against is that it is getting better and better at mimicking legitimate user behavior with high accuracy, making it challenging to distinguish between genuine and malicious access attempts,” says Ev Kontsevoy, CEO at Teleport.

“AI and deepfake tools aren’t just creating phishing campaigns, but significantly lowering the time and cost to launch the campaigns as well to near-zero. They’re so easy today that a kid in Nebraska could sit in his Mom’s basement and launch hundreds of these per day.”

To fight this wave of AI impersonation and other security threats, a broad majority of respondents (68%) are already using AI-powered tools to make safeguards more accurate and effective, though debate continues in the industry over the effectiveness of fighting AI with AI.

“The findings here suggest a risk of overconfidence in AI’s ability to protect organizations against social engineering. Using AI to combat this threat is like suggesting that an adversary targets the missiles on a fighter jet, rather than the fighter jet itself,” says Kontsevoy. “The right conversation is, ‘how do we stop employees and enterprises from making their credentials a threat vector?’ As it stands, credentials are pretty much littered across the many disparate layers of the technology stack – Kubernetes, servers, cloud APIs, specialised dashboards and databases, and more.”

The rise in identity-based attacks was cited by 87% of respondents as an ‘important’ factor making infrastructure access security more challenging over time. Almost 40% of companies, however, still haven’t implemented cryptographically authenticated identities for users. These factors combined likely help explain why social engineering attacks like phishing and smishing still represent the second hardest attack vector to defend against (48%), with compromised privileged credentials and secrets (47%) in third place.
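For readers unfamiliar with the term, the sketch below illustrates roughly what a cryptographically authenticated identity looks like in practice: the user proves possession of a private key by signing a random challenge, and no reusable secret such as a password ever crosses the wire. The names and flow are illustrative only and are not drawn from the report or from Teleport’s product.

```python
# Minimal sketch of public-key (challenge-response) authentication,
# assuming an Ed25519 key pair. Hypothetical flow for illustration.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Enrollment: the user generates a key pair and registers only the public key.
user_key = ed25519.Ed25519PrivateKey.generate()
registered_public_key = user_key.public_key()

# Login: the server sends a random challenge; the user signs it locally.
challenge = os.urandom(32)
signature = user_key.sign(challenge)

# The server checks the signature against the registered public key.
# There is no password to phish, guess, or replay.
try:
    registered_public_key.verify(signature, challenge)
    print("access granted")
except InvalidSignature:
    print("access denied")
```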

On the opposite end of the difficulty curve, ‘weak’ passwords are now the easiest attack vector to defend against according to respondents: only 36% struggle to protect against this vector, and 45% even say it’s ‘easy’ to do so.

“I think what this shows is that the cybersecurity industry is becoming better at plugging the most obvious gaps and weak points. There has certainly been some regulation, such as in the UK, to clamp down on weak passwords. But passwords are just one of many vectors, and credentials exist in many forms like API keys and browser cookies. There are also standing privileges to worry about, which can lead to breach-and-pivot attacks,” adds Kontsevoy.

“Regardless of whether social engineering attacks use AI or not, the solution is always going to be the same: eliminate human error. That means the modern-day security paradigm has to be first and foremost about eliminating secrets and enforcing cryptographic identity, least privileged access, and robust policy and identity governance. Human behaviour exposing infrastructure to data leaks is what we need to learn to defend against, and enforcing these steps are the key to preventing criminals from wreaking more havoc.”
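As a rough illustration of the secretless, least-privilege approach Kontsevoy describes, the hypothetical sketch below issues a signed grant that names one resource, permits one action, and expires after a few minutes, leaving no standing credential for an attacker to steal. The format and function names are invented for illustration and are not Teleport’s.

```python
# Minimal sketch of a short-lived, least-privilege access grant,
# assuming an Ed25519 signing authority. Hypothetical format.
import json
import time
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

authority_key = ed25519.Ed25519PrivateKey.generate()

def issue_grant(user, resource, action, ttl_seconds=300):
    """Issue a narrowly scoped grant that expires after ttl_seconds."""
    payload = json.dumps({
        "user": user,
        "resource": resource,   # one database, not "all databases"
        "action": action,       # one verb, not admin rights
        "expires": int(time.time()) + ttl_seconds,
    }).encode()
    return payload, authority_key.sign(payload)

def check_grant(payload, signature, resource, action):
    """Verify the signature, the expiry, and the scope of the grant."""
    try:
        authority_key.public_key().verify(signature, payload)
    except InvalidSignature:
        return False
    claims = json.loads(payload)
    return (claims["expires"] > time.time()
            and claims["resource"] == resource
            and claims["action"] == action)

payload, sig = issue_grant("alice", "orders-db", "read")
print(check_grant(payload, sig, "orders-db", "read"))   # True
print(check_grant(payload, sig, "orders-db", "drop"))   # False: out of scope
```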
