CISOs confident about data privacy and security risks of generative AI

Over half of CISOs believe generative AI is a force for good and a security enabler, whereas only 25% think it presents a risk to their organisational security.


New data from the latest members’ survey of the ClubCISO community, in collaboration with Telstra Purple, highlights CISOs’ confidence in generative AI in their organisations. Just over half of those surveyed (51%), the largest contingent, believe these tools are a force for good and act as security enablers. In comparison, only 25% saw generative AI tools as a risk to their organisational security.

The study's findings underscore the proactive stance of CISOs in comprehending the risks linked to generative AI tools and their active support in implementing these tools across their respective organisations.

45% of respondents said they now allow generative AI tools for specific applications, with the CISO office making the final decision on their use. Almost a quarter (23%) also have region-specific or function-specific rules to govern generative AI use. The findings mark a clear shift from when generative AI applications first landed following the launch of ChatGPT, when data privacy and security concerns were top-of-mind risks for organisations.

Despite ongoing concerns around the data privacy of specific applications, 54% of CISOs are confident they know how AI tools will use or share the data fed to them, and 41% have a policy covering AI and its usage. Only a minority (9%) of CISOs say they have no policy governing the use of AI tools and have not set out a direction either way.

Inspiring further confidence, 57% of CISOs also believe that their staff are aware and mindful of the data protection and intellectual property implications of using AI tools.

Commenting on the findings, Rob Robinson, Head of Telstra Purple EMEA, sponsor of the ClubCISO community, said, “While we do still hear examples of proprietary data being fed to AI tools and then that same data being resurfaced outside of an organisation’s boundaries, what our members are telling us is that this is a known risk, not just in their teams, but across the employee population too.”

He continued, “Generative AI is rightly being seen for the opportunity it will unlock for organisations. Its disruptive force is being unleashed across sectors and functions, and rather than slowing the pace of adoption, our survey highlights that CISOs have taken the time to understand and educate their organisations about the risks associated with using such tools. It marks a break away from the traditional views of security acting as a blocker for innovation.”
