Assessing AI deployment risks and security challenges

Organisations are pressing ahead with AI deployment despite security concerns, underscoring governance gaps and rising risks.

TrendAI has released research examining AI deployment alongside the security and compliance challenges that accompany it. The study looks at how organisations are adopting AI despite known risks.

The research surveyed 3,700 business and IT decision-makers. It found that 67% reported feeling pressure to approve AI projects despite security concerns, with one in seven describing those concerns as “extreme” but proceeding in response to competitive and internal demands.

The findings indicate that AI adoption is, in some cases, occurring ahead of governance measures, with systems being introduced without fully established security controls. Security teams are often responding to AI deployment decisions after the fact, which can contribute to the use of unsanctioned or “shadow” AI tools.

Additional findings show that cybercriminals are using AI to support activities such as reconnaissance and phishing, increasing the speed and scale of attacks.

The study also highlights a gap between AI adoption and oversight: 57% of respondents say AI is advancing faster than they can secure it, and 55% report only moderate confidence in their understanding of the legal frameworks governing AI. Around 38% of organisations have comprehensive AI policies in place, while the rest are still developing them.

Confidence in autonomous AI systems remains limited: the report states that only 44% of respondents believe such systems will significantly improve cybersecurity in the short term, while concerns persist around data access, misuse, and oversight.

Respondents identified several key risks, including AI agents accessing sensitive data (42%), malicious prompts (36%), and an expanded attack surface (33%). The same proportion (33%) highlighted risks related to misuse of trusted AI systems and autonomous code deployment.

The report also notes that 31% of organisations report limited observability or auditability of AI systems, raising questions about monitoring and intervention after deployment.
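The report does not describe how such observability might be achieved in practice. As a rough illustration only, and assuming a Python-based deployment with a hypothetical model call, one common pattern is an audit wrapper that records every AI action to an append-only log that security teams can review after the fact:

```python
import json
import logging
import time
from typing import Callable

# Illustrative sketch (not from the report): wrap every AI call so that
# inputs, outputs, errors, and timing land in an append-only audit log.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def audited(action: str, fn: Callable[..., str]) -> Callable[..., str]:
    """Return a wrapped AI call that logs one JSON record per invocation."""
    def wrapper(*args, **kwargs) -> str:
        record = {
            "ts": time.time(),
            "action": action,
            "inputs": {"args": [str(a) for a in args],
                       "kwargs": {k: str(v) for k, v in kwargs.items()}},
        }
        try:
            result = fn(*args, **kwargs)
            record["output"] = str(result)
            return result
        except Exception as exc:
            record["error"] = repr(exc)
            raise
        finally:
            logging.info(json.dumps(record))  # append-only: one JSON line per AI action
    return wrapper

# Hypothetical model call used only to make the example runnable.
def fake_model(prompt: str) -> str:
    return f"response to: {prompt}"

generate = audited("generate_text", fake_model)
print(generate("Summarise this quarter's incident reports"))
```

The point of the sketch is simply that auditability has to be designed in before deployment; logs written by the AI system itself, or added after an incident, do not give the independent record of actions that 31% of organisations say they currently lack.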

Around 40% of respondents support the introduction of AI “kill switch” mechanisms to shut down systems in cases of failure or misuse, while nearly half remain uncertain.
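The report does not define what a “kill switch” would look like technically. One minimal interpretation, sketched below under the assumption of a Python agent and an operator-controlled environment variable (both hypothetical, chosen only for illustration), is a guard checked before every model or tool action so that an operator can halt the system immediately:

```python
import os

# Illustrative sketch only: a simple operator-controlled kill switch.
# In practice the flag might live in a feature-flag service or a signed
# control file rather than an environment variable.
class KillSwitchEngaged(RuntimeError):
    pass

class GuardedAgent:
    def __init__(self, flag_name: str = "AI_KILL_SWITCH"):
        self.flag_name = flag_name

    def _check_switch(self) -> None:
        # Refuse to act once the operator has set the flag.
        if os.environ.get(self.flag_name) == "1":
            raise KillSwitchEngaged("operator halted AI actions")

    def act(self, instruction: str) -> str:
        self._check_switch()
        return f"executing: {instruction}"  # placeholder for a real model or tool call

agent = GuardedAgent()
print(agent.act("draft customer email"))

os.environ["AI_KILL_SWITCH"] = "1"  # operator engages the kill switch
try:
    agent.act("draft customer email")
except KillSwitchEngaged as exc:
    print(f"blocked: {exc}")
```

Even in this simplified form, the mechanism only works if every action path checks the switch, which is one reason respondents' uncertainty about such controls is understandable.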

The findings indicate that organisations are continuing to deploy AI systems while governance, visibility, and control measures are still developing.