Best Practices for Making AI Deployments More Ethical and Sustainable

By Jan Stappers, Director of Regulatory Solutions at NAVEX.


Artificial Intelligence, commonly called AI, encompasses a wide range of capabilities and has gone through many cycles of hype and technological advancement over the years. Now, AI can perform many of the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem-solving, and exercising creativity. 

Many people have interacted with AI regularly over the years through technologies such as voice assistants or pop-up website customer service chatbots. However, its applications extend far beyond the conveniences of the internet and are frequently used within private systems to address diverse needs. In fact, recent research conducted by NAVEX and Forrester Consulting found that 98% of survey respondents expect AI to play a crucial role in the governance, risk, and compliance (GRC) programs of the future, enhancing performance and enabling operational improvements.

Today, AI is developing rapidly, especially with the advent of easily accessible Natural Language Processing (NLP), popularized for consumer use by ChatGPT. But even in 2018, which feels like a lifetime ago from a technology perspective, the European Commission's Joint Research Centre anticipated the global focus on artificial intelligence: "There is strong global competition on AI among the US, China, and Europe. The US leads now, but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding the way of embracing the opportunities offered by AI in a way that is human-centered, ethical, secure, and true to our core values."

Modern businesses are jumping at the chance to incorporate artificial intelligence to speed up workflows and solve real-world workplace problems. Using AI can improve efficiency by operationalizing repeatable tasks, enabling more accurate data analysis, and facilitating predictive analytics, resulting in time and cost savings. Many companies are starting to use AI in their GRC data analysis and reporting.

Governance and Legislation

When deploying AI, companies should ensure they meet all applicable regulations and encourage its sustainable and ethical use. At present, there are many global initiatives, governance frameworks, and legislative policies that organizations should align with, depending on the relevant jurisdiction. Proposed and existing AI governance instruments include:

European Union: AI Act

United States: Blueprint for an AI Bill of Rights

Australia: AI Ethics Framework

Brazil: Rules for Artificial Intelligence 

Japan: Governance Guidelines for AI Principles in Practice

IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

OECD: Tools for Trustworthy AI

UNESCO: Recommendation on the Ethics of Artificial Intelligence

The European Union AI Act

Artificial Intelligence holds the potential to usher in numerous benefits, including improved healthcare, safer and more sustainable transportation, enhanced manufacturing processes, and cost-effective energy solutions – and with great capability comes great responsibility. As such, the European Union embarked on a bold initiative to regulate AI as a critical component of its digital strategy to foster its responsible and innovative development. In April 2021, the European Commission introduced the world's inaugural regulatory framework for AI, designed around a risk-based approach. This evaluates AI systems used across various applications and categorizes them according to the risks they may pose to users. The extent of regulation varies with the risk level, ensuring that more advanced and potentially risky AI applications receive heightened scrutiny. These comprehensive rules require AI systems to be overseen by humans and to follow transparent, traceable, non-discriminatory, and environmentally friendly practices. Notably, the regulations impose bans on practices such as biometric surveillance, emotion recognition, and predictive policing AI systems.
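To make the risk-based approach concrete, the short Python sketch below shows one way an organization might map its own AI use cases onto the Act's broad risk tiers. The tier names follow the Act's general structure, but the use cases, controls, and function names are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers broadly mirroring the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. predictive policing)
    HIGH = "high"                  # heightened scrutiny and human oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of internal use cases to tiers; a real classification
# must follow the Act's annexes and qualified legal advice.
USE_CASE_TIERS = {
    "emotion-recognition-screening": RiskTier.UNACCEPTABLE,
    "cv-screening-for-hiring": RiskTier.HIGH,
    "customer-service-chatbot": RiskTier.LIMITED,
    "spam-filtering": RiskTier.MINIMAL,
}

def required_controls(use_case: str) -> list[str]:
    """Return illustrative compliance controls for a registered use case."""
    tier = USE_CASE_TIERS[use_case]
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy"]
    controls = ["transparency notice to users"]
    if tier is RiskTier.HIGH:
        controls += ["human oversight", "traceability logging", "bias evaluation"]
    return controls

print(required_controls("cv-screening-for-hiring"))
# -> ['transparency notice to users', 'human oversight',
#     'traceability logging', 'bias evaluation']
```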

Moreover, the EU recognizes the need for customized regulations for general-purpose AI and NLP models, acknowledging the diverse nature of AI applications. To facilitate innovation, the EU establishes regulatory sandboxes – real-world environments set up by public authorities where AI systems can be tested before deployment. Further, the regulations empower individuals with the right to voice complaints about AI systems, ensuring their concerns are addressed and that AI operates in a manner aligned with societal values and norms. This groundbreaking AI Act sets a global precedent for the responsible and secure adoption of AI technologies.

While many of these frameworks' recommendations are not yet binding, they will help organizations orient around a principles-based approach to AI risk management, leaving them well-positioned for the future of AI regulation. Drawing on the organization's experience managing regulatory change, compliance leaders should stay abreast of this quickly evolving landscape to ensure the business stays on the right side of any new rules or laws. Given the potential of the technology and the promised regulatory attention, starting early is to your benefit.

Understanding the Risks

Your organization is likely already using AI in some unofficial capacity, also known as "shadow AI," either directly or through third-party vendors. While these applications may not pose immediate risks, it is crucial to gain a comprehensive understanding of their various uses and ensure they align with your company's ethical and security standards. Additionally, AI is already safely integrated into many of the products and software you regularly employ; these, too, should be incorporated into the overall assessment of AI usage within your organization.
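A practical first step in that assessment is a simple inventory of where AI is in use, who owns each use, and whether it has been sanctioned. The Python sketch below illustrates the idea; the record fields, system names, and sensitivity categories are hypothetical assumptions rather than a prescribed tool.

```python
from dataclasses import dataclass

@dataclass
class AIUsageRecord:
    """One row in a hypothetical inventory of AI use across the organization."""
    system: str                 # product or vendor providing the AI capability
    owner: str                  # accountable team or individual
    data_categories: list[str]  # kinds of data the system touches
    sanctioned: bool = False    # approved by the governance committee?

inventory = [
    AIUsageRecord("vendor-chatbot", "customer-support",
                  ["customer queries"], sanctioned=True),
    AIUsageRecord("public-llm-browser-plugin", "marketing",
                  ["draft copy", "campaign data"]),  # shadow AI: unsanctioned
]

# Flag unsanctioned usage that touches sensitive data categories.
SENSITIVE = {"customer queries", "campaign data", "employee records"}
for record in inventory:
    if not record.sanctioned and SENSITIVE.intersection(record.data_categories):
        print(f"Review needed: {record.system} (owner: {record.owner})")
```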

The Chief Compliance Officer (CCO) should collaborate closely with their IT and cybersecurity counterparts, with AI governance as a pivotal driver of this alignment. This collaborative approach helps dismantle internal risk silos, ensuring that AI-related risks are comprehensively assessed across the entire enterprise. The NAVEX 2023 State of GRC Report highlights that right now, "Four in ten respondents (43%) said they are challenged by data silos." Given the far-reaching impact of AI on the business, the first step in establishing AI governance is forming a steering committee. The committee should remain adaptable, as AI technology is continuously evolving, as is its use within the organization. Involving stakeholders such as the Chief Information Security Officer (CISO), Chief Information Officer (CIO), General Counsel, and others is essential to holistically address bias-related concerns, privacy, security, legal compliance, and customer satisfaction.

AI governance is a long-term strategy that will continue to evolve and must involve the active participation of multiple departments.

Best Practices

To ensure AI is deployed sustainably and ethically, businesses should follow these best practices:

Define company uses and technology requirements for AI

Conduct thorough third-party risk assessments of any chosen AI provider

Have the CCO lead the business's evaluation of vendor risks

Establish and enforce appropriate company and country-specific policies and usage governance

Communicate with relevant users and employees about AI tool usage

Provide mandatory AI training during new-hire onboarding, and extend it to all other appropriate employees and users

Ensure a "private system" is in place that allows AI to integrate with software for search, analysis, translation, and more – private systems allow companies to use AI without exposing confidential information (see the illustrative sketch below)
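As one illustration of what a private-system boundary might involve, the minimal Python sketch below redacts likely confidential tokens from a prompt before it would leave the company for an external AI service. The patterns and placeholder labels are simplified assumptions; a production system would need far more robust detection.

```python
import re

# Hypothetical redaction step a "private system" might apply before text
# leaves the company boundary. Patterns are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely confidential tokens with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = ("Summarize this complaint from jane.doe@example.com "
          "about card 4111 1111 1111 1111.")
safe_prompt = redact(prompt)  # this, not the raw prompt, goes to the AI service
print(safe_prompt)
# -> Summarize this complaint from [EMAIL] about card [CARD].
```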

Overall, the world of AI is rapidly evolving, with immense potential to transform industries and improve workplace efficiency. Businesses are increasingly recognizing its value in boosting productivity, enabling data-driven decision-making, and achieving critical business goals.

However, to ensure ethical and sustainable AI deployment, best practices must include defining technology requirements, conducting thorough risk assessments, and providing training to employees. Embracing private systems that allow AI integration while protecting confidential information is another valuable strategy.

As AI adoption continues to grow, organizations must commit to responsible and secure adoption. By embracing ethical AI use and staying ahead of regulatory changes, companies can ensure they are well-positioned to harness the potential of artificial intelligence while mitigating its associated risks.

