The risk of accidental data exposure through generative AI (GenAI) is a growing concern as the number of employees using these tools continues to increase, often with limited visibility into the information being shared on these platforms. At the same time, many businesses are trying to strike a balance between fostering innovation and minimising security risks.
“GenAI tools like ChatGPT and Bard are extremely popular and offer significant growth opportunities for companies; however, unchecked usage poses serious risks for organisations,” says James Moore, Founder and CEO of CultureAI. “Without visibility of how employees are using AI tools, organisations cannot implement the real-time coaching required to help employees harness the power of these tools safely and effectively.”
CultureAI is the first human risk management provider to offer real-time visibility into the accidental disclosure of sensitive data or misuse of GenAI tools, along with tailored coaching in response. To minimise friction for employees, the solution only flags a risk when sensitive data such as personally identifiable information (PII) is copied into a GenAI application. The solution can also track whether employees are logging into GenAI apps with corporate credentials.
The solution detects character patterns, as well as specific words or phrases, that indicate confidential information has been posted to a GenAI platform. Organisations can define and monitor their own patterns and terms, or use out-of-the-box patterns created by CultureAI, such as tax codes or National Insurance numbers. These are regularly reviewed for accuracy, and organisations can also weight them by level of concern: high, medium, or low.
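As a rough illustration of how this kind of pattern matching can work, the sketch below scans a piece of text against a few regular-expression patterns and tags each match with a severity. The pattern names, regexes, and severity weights here are simplified assumptions for the example, not CultureAI's actual detection rules.

```typescript
// Minimal sketch of pattern-based sensitive-data detection.
// Pattern names, regexes, and severities are illustrative assumptions only.

type Severity = "high" | "medium" | "low";

interface SensitivePattern {
  name: string;
  regex: RegExp;
  severity: Severity;
}

// Example "out-of-the-box" style patterns plus an organisation-defined term.
const patterns: SensitivePattern[] = [
  // Simplified UK National Insurance number, e.g. "QQ 12 34 56 C"
  { name: "National Insurance number", regex: /\b[A-CEGHJ-PR-TW-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b/i, severity: "high" },
  // Simplified UK tax code, e.g. "1257L"
  { name: "Tax code", regex: /\b\d{3,4}[LMNTK]\b/i, severity: "medium" },
  // Hypothetical organisation-defined term
  { name: "Project codename", regex: /\bproject\s+aurora\b/i, severity: "low" },
];

interface Finding {
  pattern: string;
  severity: Severity;
  match: string;
}

// Scan text (e.g. content pasted into a GenAI prompt) for sensitive patterns.
function scanForSensitiveData(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const p of patterns) {
    const match = text.match(p.regex);
    if (match) {
      findings.push({ pattern: p.name, severity: p.severity, match: match[0] });
    }
  }
  return findings;
}

// Only raise a risk when something sensitive is actually detected.
const findings = scanForSensitiveData("My NI number is QQ 12 34 56 C");
if (findings.length > 0) {
  console.log("Open risk raised:", findings);
}
```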
For reporting, the Generative AI solution gives security teams immediate visibility of when and where an employee submits PII or other confidential data to an AI tool. An open risk then appears on the CultureAI Human Risk Dashboard, where it can be triaged.
CultureAI’s Generative AI solution helps organisations gain visibility of these risks and orchestrate appropriate coaching and interventions, providing several key benefits:
· Real-time employee education: Just-in-time education delivers targeted guidance or training to employees precisely when they need it, enhancing their ability to safely utilise AI tools.
· Risk reduction: Targeted coaching significantly reduces the probability of accidental disclosure of sensitive information over time, lowering the risk of security breaches.
· Comprehensive reporting: Track and analyse behaviour changes over time through digestible, shareable analytics and reporting tools.
· Compliance: The solution aids in upholding compliance with data protection regulations and standards in the workplace.
Organisations can monitor GenAI applications such as ChatGPT, Bard, and Bing through Microsoft Edge and/or Google Chrome extensions. Customers already using these extensions with CultureAI can enable the Generative AI solution at the click of a button.
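To give a sense of how a browser extension can observe what employees paste into a GenAI site, the sketch below shows a content script that checks paste events and reports findings to the extension's background worker. The domains, message names, and the reuse of the scanner from the earlier sketch are assumptions for illustration, not a description of CultureAI's actual extension.

```typescript
// Illustrative content script for a Chrome/Edge extension, injected on GenAI
// pages (e.g. chat.openai.com, bard.google.com) via the manifest's
// content_scripts matches. Not CultureAI's implementation.

document.addEventListener("paste", (event: ClipboardEvent) => {
  const pasted = event.clipboardData?.getData("text") ?? "";
  if (!pasted) return;

  // Reuse the hypothetical scanner from the earlier sketch to check the paste.
  const findings = scanForSensitiveData(pasted);
  if (findings.length > 0) {
    // Report the detection to the extension's background worker, which could
    // forward it to a risk dashboard and trigger just-in-time coaching.
    chrome.runtime.sendMessage({
      type: "sensitive-paste-detected",
      url: window.location.hostname,
      findings,
    });
  }
});
```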