Domino Data Lab has released its 2024 REVelate survey report, which uncovers a troubling disconnect between AI ambitions and the resources required to execute responsible AI governance. The survey, which included responses from 117 AI leaders, reveals that while 97% of organizations have set goals for responsible AI, nearly half (48%) are under-resourced to implement the necessary governance frameworks.
The findings highlight a growing readiness gap in the enterprise AI landscape, where responsible AI is increasingly seen as essential for innovation and compliance, but where resource constraints prevent full execution.
“Despite the growing recognition of responsible AI’s value, many enterprises are ill-equipped to enforce governance, risking financial penalties, reputational damage, and stunted innovation,” said Dr. Kjell Carlsson, head of AI Strategy at Domino Data Lab. “Combine the desire to scale AI to all parts of the business, with increasing regulation at an international, state and even municipal level, and it becomes more important than ever for organizations to govern the development and deployment of AI effectively.”
AI Governance Emerges as a Strategic Priority, but Resources Lag Behind
The survey illustrates that responsible AI is now a top business priority, with 43% of leaders rating it as “extremely critical” to driving business outcomes, outpacing traditional priorities like business intelligence. Nevertheless, resource shortages remain a major obstacle: nearly half of respondents (48%) cite resource constraints as the biggest barrier to implementing effective AI governance, alongside insufficient technology to govern AI (38%).
High Stakes: The Costs of Inadequate AI Governance
The risks of failing to properly govern AI are substantial. The survey found that regulatory violations are the top concern for 49% of respondents, with potential fines under regulations like the EU AI Act reaching as high as 7% of global annual revenue. Beyond regulatory concerns, 46% of respondents fear reputational damage and stalled innovation if governance issues are not addressed.
Financial costs also weigh heavily on organizations, with 34% of respondents reporting increased operational expenses due to errors in poorly governed AI systems.
Balancing Innovation and Regulation
AI regulation enjoys broad support, with 71% of AI leaders believing that regulations will ensure the safe use of AI. Yet there is also concern that overly strict governance could slow innovation: 44% of respondents worry that regulations could hamper AI adoption.
The survey also reveals a divide in opinions on the current state of AI governance: 51% of respondents doubt that existing regulatory frameworks are enforceable in their current form, highlighting the ongoing need for better-defined and implementable standards.
The Path Forward: Implementing AI Governance Frameworks
To address these governance challenges, many organizations are prioritizing frameworks that translate responsible AI principles into practice. The survey found that 47% of companies are focused on defining responsible AI principles, while 44% are deploying governance platforms to ensure policies are applied consistently across the AI lifecycle. Forming AI ethics boards ranks significantly lower, with only 29% citing it as a top approach to implementing responsible AI. Additionally, logging and auditability (74%), reproducibility (68%), and monitoring (61%) emerged as the most critical capabilities needed to support responsible AI.