AI: Good or Bad for the Cyber Threat Landscape?

By Tom Huckle, Director of Information Security and Compliance at BlueVoyant.


In recent months, the buzz surrounding AI technology has grown rapidly, due in large part to the release – and subsequent zeitgeist moment – of ChatGPT. A chatbot fuelled by language-modelling AI that is free to the public, ChatGPT has been the subject of seemingly endless discourse about its implications since its launch last November.

This type of AI technology is convincing and, well… intelligent. It’s almost like a contemporary take on the search engine: you type in a prompt, and within moments you receive a well-articulated, seemingly accurate response drawing on sources from all over the web.

“AI has a significant impact on cyber security and is both a valuable tool for defenders and a potential threat in the hands of attackers.” That’s the response to a prompt I posed to ChatGPT. More or less, it summed up some of my thoughts on the matter – that AI is neither intrinsically good nor bad for the security space; it simply is.

We’ve been hearing some murmurs about potentially nefarious applications of AI, and so it seemed time to set the record straight. AI has revolutionised various aspects of our lives, and the world of cyber security is no exception. So how, exactly, does AI impact cyber security? Can attackers use it to launch cyberattacks that endlessly improve upon themselves, rendering even the most advanced security technology powerless? Are security vendors making use of it to enhance their platforms and defend against increasingly sophisticated attacks? The intersection of AI and the cyber threat landscape presents both challenges and opportunities, highlighting the need for proactive measures to stay ahead of emerging threats.

Business as usual… except for the unusual parts

Let’s make one thing clear right off the bat: AI is not new. Technology companies have been using AI and machine learning (ML) to augment parts of their platforms for years. Everyday commodities like navigation apps and autocorrect functions use AI; BlueVoyant uses AI to optimise every facet of its platform, as do countless other software vendors across all sorts of industries.

In speaking to some of the brilliant people on my team who help drive product strategy and development, I learned that our AI is built on the backs of our human intelligence. Our expert cyber threat analysts pool their knowledge of threat actor habits, activity, watering holes, and behaviour to create the framework of our harvesting systems, enabling us to automatically monitor for threats emerging across the open, deep, and dark web. The nuance of human experiences and intelligence paired with the power of machine learning allows cyber security experts to scale seamlessly, detecting threats to any number of organisations and their subsidiaries.
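
To make that pattern concrete, here is a minimal, hypothetical sketch of the idea described above: analyst-curated rules (the human intelligence) drive an automated scanning loop (the machine scale). The rule names, patterns, and sample data are all invented for illustration and bear no relation to BlueVoyant’s actual systems.

```python
import re
from dataclasses import dataclass

@dataclass
class AnalystRule:
    name: str
    pattern: re.Pattern  # chatter or behaviour an analyst has flagged as notable

# Analyst knowledge encoded as simple rules; these examples are invented.
RULES = [
    AnalystRule("credential-dump-chatter", re.compile(r"\bcombo ?list\b|\bfullz\b", re.I)),
    AnalystRule("brand-mention", re.compile(r"\bexample-corp\b", re.I)),
]

def harvest(posts: list[str]) -> list[tuple[str, str]]:
    """Scan scraped forum or marketplace posts against the analyst rule set."""
    hits = []
    for post in posts:
        for rule in RULES:
            if rule.pattern.search(post):
                hits.append((rule.name, post[:80]))  # truncate for the alert queue
    return hits

if __name__ == "__main__":
    sample = ["selling fresh combolist, 2M lines", "lovely weather today"]
    print(harvest(sample))  # [('credential-dump-chatter', 'selling fresh combolist, 2M lines')]
```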

And that seems like a good segue to the flip side of this coin – how AI is helping the baddies win. Earlier this year, an experiment with OpenAI’s Auto-GPT led to the creation of ‘ChaosGPT’ and showcased how easily these platforms can be used for nefarious purposes. When prompted to “destroy humanity,” “establish global dominance,” and “attain immortality”, the AI project recruited a network of AI agents to help it research nuclear weapons and even attempted a social media influence campaign before it was stopped.

Phenomenal cosmic power - and chaos potential

The proliferation of AI-based tools has given rise to a new breed of cyber attack that can be launched far faster and more efficiently than before. Adversaries can now harness AI algorithms to automate and scale their attacks, and even that only scratches the surface of the technology’s potential.

Bad actors can now use AI to analyse the possible attack vectors within a target and then execute the most promising option whilst evading detection; and because today’s voracious AI algorithms learn continuously, these tactics can adapt and evolve in real time. Similarly, AI-powered malware can adapt to mimic legitimate software behaviours, thereby evading traditional security measures.

Within phishing campaigns, AI enables a highly targeted approach, complete with convincing messages that are difficult to distinguish from genuine communications. The biggest risk today lies in AI’s ability to increase the volume of attacks by putting the deployment of a phishing kit on autopilot. The problem itself doesn’t change, but its scope becomes greatly magnified. Tools like 10Web, which let users clone and produce websites en masse, will drive significant increases in the sheer number of phishing websites leveraging spoofed domains.

What does it mean for security teams?

The good news? While AI has generated fresh buzz over the past year, security providers have been investing in AI and machine learning since day zero. Backed by significant research and development into phishing infrastructure and evasion mechanisms, machine learning algorithms can already detect lookalike domains, lookalike logos and graphics, proprietary HTML and IP infringement, fake social media profiles, and more.
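
As a toy illustration of the first of those capabilities, here is a sketch of lookalike-domain detection, assuming a simple homoglyph-normalisation step followed by an edit-distance comparison. The character map, brand name, and threshold are illustrative only; production systems rely on far richer signals.

```python
# Illustrative homoglyph map; real attackers use a much larger substitution set.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s", "@": "a"})

def normalise(label: str) -> str:
    # fold common character substitutions back to their plain forms
    return label.lower().replace("vv", "w").translate(HOMOGLYPHS)

def levenshtein(a: str, b: str) -> int:
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def is_lookalike(domain: str, brand: str = "bluevoyant", max_distance: int = 2) -> bool:
    raw = domain.split(".")[0].lower()
    if raw == brand:
        return False  # the genuine label itself is not a lookalike
    return levenshtein(normalise(raw), brand) <= max_distance

print(is_lookalike("blu3voyant.com"))  # True: '3' folds to 'e', an exact match
print(is_lookalike("bluevoyant.com"))  # False: the legitimate domain
print(is_lookalike("unrelated.com"))   # False: edit distance is too large
```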

For defence and detection teams, AI can analyse vast amounts of data, identifying patterns and detecting anomalies in real time. Machine learning algorithms allow teams to quickly identify and respond to suspicious activities, and AI-powered systems bolster network security by autonomously adapting to new threats with proactive defence mechanisms. Microsoft Security Copilot was one of the first security products released with this in mind, combining an advanced large language model (LLM) with Microsoft’s own security-specific model.
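
To give a flavour of the anomaly-detection idea, the following toy example flags hourly event counts that deviate sharply from a rolling baseline. Real systems use far richer features and models; the 24-hour window and three-sigma threshold here are arbitrary choices for illustration.

```python
from statistics import mean, stdev

def anomalous(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of hourly event counts more than `threshold` standard
    deviations above the baseline of the preceding 24-hour window."""
    flagged = []
    for i in range(24, len(counts)):  # need a day of history as baseline
        window = counts[i - 24:i]
        mu, sigma = mean(window), stdev(window)
        if sigma and (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A quiet baseline of failed-login counts, then a sudden burst
series = [12, 10, 11, 13, 9, 12, 10, 11, 14, 12, 10, 13,
          11, 12, 9, 10, 13, 12, 11, 10, 12, 13, 11, 12,
          10, 11, 12, 250, 11, 10]
print(anomalous(series))  # [27] -> the burst stands out against the baseline
```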

In the aftermath of a cyber attack, AI can play a vital role in incident response and forensic investigations. AI-powered tools can sift through massive volumes of log data, network traffic, and system activity to identify the root cause of a breach, noticing details and behaviours that a human may miss. These tools can also reconstruct attack timelines, tracing the attacker’s path and identifying compromised systems. Because AI learns quickly from historical incidents, it will also help organisations build more robust incident response plans for future attacks.
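
A simplified sketch of that timeline-reconstruction step might look like the following, which merges events from several invented log sources into one chronological view. The log format, field names, and addresses are all illustrative.

```python
from datetime import datetime

# Events as they might arrive from disparate sources, out of order
RAW_EVENTS = [
    ("firewall",  "2024-05-01T09:17:03", "allowed inbound 203.0.113.7 -> 10.0.0.5:443"),
    ("webserver", "2024-05-01T09:17:04", "POST /login from 203.0.113.7 (200)"),
    ("endpoint",  "2024-05-01T09:21:40", "new scheduled task created on 10.0.0.5"),
    ("webserver", "2024-05-01T09:16:58", "GET /login from 203.0.113.7 (200)"),
]

def build_timeline(events):
    """Normalise timestamps and return events in chronological order."""
    parsed = [(datetime.fromisoformat(ts), source, msg) for source, ts, msg in events]
    return sorted(parsed)

for when, source, msg in build_timeline(RAW_EVENTS):
    print(f"{when:%H:%M:%S}  [{source:9}] {msg}")
```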

As an industry, we can amplify the power of AI in combatting cyber threats through collaborative efforts. Information sharing between organisations and security vendors will enable AI systems to learn from diverse datasets, resulting in more comprehensive threat intelligence. Moreover, international cooperation and collective defence will facilitate the development of AI-driven security frameworks, making it harder for attackers to exploit vulnerabilities across different industries and geographies.
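
In practice, that kind of sharing often relies on structured formats such as STIX. Below is a minimal, hand-rolled indicator in the spirit of STIX 2.1; a real exchange would use a conformant library and a transport such as TAXII, and the UUID, timestamps, and domain value here are illustrative.

```python
import json

# A minimal STIX 2.1-style indicator object; values are invented for illustration
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--4f2a9b1e-0000-0000-0000-000000000000",  # illustrative UUID
    "created": "2024-05-01T09:30:00.000Z",
    "modified": "2024-05-01T09:30:00.000Z",
    "name": "Phishing domain spoofing a retail brand",
    "pattern": "[domain-name:value = 'examp1e-shop.com']",
    "pattern_type": "stix",
    "valid_from": "2024-05-01T09:30:00.000Z",
}

print(json.dumps(indicator, indent=2))
```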
