Enterprise AI isn’t just a trend or something to be leveraged for one-off projects and use cases. The ability to become a true AI enterprise, by successfully scaling robust data projects and processes at all levels of the company, is an organisational asset pivotal to the success of businesses of the future, regardless of industry.
Many businesses struggle to get started on their Enterprise AI journey, but some are thriving. The companies that succeed are those that go beyond leveraging Enterprise AI for one particular project or use case and instead focus on scaling it to a level that will sustain the business in the future. Scaled Enterprise AI companies manage to build a foundation for data science, machine learning, and AI at an organisational level. Here’s a quick look at how some of them are fully embedding AI in the enterprise.
Leverage Horizontal (Team-Wide) and Vertical (Cross-Team) Collaboration
When we think about Enterprise AI, a key trait of companies that are achieving success is a proven capacity to fully embed AI as a regular capability inside new processes and ways of working. This means bringing everyone together around AI efforts, from business people to analysts and data scientists. Traditionally, this has been challenging for companies because it demands a full transformation and a reorganisation of the entire value chain, as well as a mass effort to upskill people.
A key part of getting everyone on the same page is creating spaces where data is accessible. These companies are opening sandboxes where people can learn and experiment, and putting governance in place that creates a positive incentive for testing. They are also putting the right checks in place before machine learning-based models are industrialised or put into production.
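One lightweight way to make such a check concrete is a promotion gate: a model only moves toward production if it clears an agreed validation bar. The sketch below is a hypothetical, minimal version in plain Python; the function names, the accuracy metric, and the 0.80 threshold are all illustrative assumptions, and a real governance process would check much more (bias, drift, documentation, sign-off).

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def ready_for_production(predict, X_val, y_val, min_accuracy=0.80):
    """Hypothetical promotion gate: a model passes only if its accuracy
    on a held-out validation set clears an agreed threshold."""
    return accuracy(y_val, predict(X_val)) >= min_accuracy
```

The point of automating the gate is that testing becomes a routine, low-friction step rather than a hurdle, which is part of how governance can act as a positive incentive.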
It all goes back to the virtues of collaboration and of sharing and democratising access to data. Business people, who may not understand everything in terms of data science, become better able to iterate in the right manner with data scientists, while data scientists, working in the same environment, get closer to the business bottom line and really understand the needs of the business.
We may come to know this as ‘Inclusive AI’: the idea that the more people are involved in AI processes, the better the outcome (both internally and externally), thanks to a diversification of skills, points of view, and use cases.
Being able to iterate rapidly on a spectrum of data applications — whether that means building out a self-serve analytics platform or fully operationalised AI integrated with business processes — is key to fully embedding Enterprise AI within an organisation.
The reality around most data projects is that they don’t bring real value to the business until they’re in a production environment. If this process isn’t happening quickly enough, both in terms of overall start-to-finish time-to-insight and in the ability to rapidly iterate once something is in production, efforts will fall flat. Proper tools that allow for the quick, painless incorporation of machine learning models into production are the key to a scalable process.
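At its simplest, "painless incorporation into production" means separating training from serving via a deployable artifact. The sketch below is a hedged illustration of that pattern only: `ThresholdModel` is a toy stand-in for a real model, and pickling is just one packaging choice among many. Real tooling layers versioning, monitoring, and rollback on top of this basic loop.

```python
import pickle

class ThresholdModel:
    """Toy model: predicts 1 when a value exceeds the mean seen in training."""
    def fit(self, xs):
        self.threshold = sum(xs) / len(xs)
        return self

    def predict(self, x):
        return 1 if x > self.threshold else 0

# "Training" environment: fit the model and package it as a deployable artifact.
artifact = pickle.dumps(ThresholdModel().fit([1.0, 2.0, 3.0]))

# "Production" environment: load the artifact and serve predictions from it.
model = pickle.loads(artifact)
print(model.predict(5.0))  # above the learned threshold of 2.0, so prints 1
```

Because the production side only loads an artifact, a retrained model can be swapped in without touching the serving code, which is what makes rapid iteration after deployment feasible.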
Speed also matters because feedback from models in production should deliver timely results to those who need them. For example, if the data team is working with the marketing team to operationalise churn prediction and prevention emails, the marketing team should have immediate insight into whether the emails sent to predicted churners are actually working, or whether they should re-evaluate the message or the targeted audience.
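The feedback the marketing team needs boils down to a simple comparison: among predicted churners, did those who received the email churn less than a held-out control group that did not? The sketch below is a minimal, hypothetical version of that uplift check; the record layout and field names are illustrative assumptions.

```python
def churn_rate(customers):
    """Fraction of a group that actually churned."""
    churned = sum(1 for c in customers if c["churned"])
    return churned / len(customers)

def email_uplift(treated, control):
    """Positive uplift means the prevention emails are reducing churn
    relative to the control group; near zero or negative means it is
    time to re-evaluate the message or the audience."""
    return churn_rate(control) - churn_rate(treated)

# Illustrative outcomes for predicted churners with and without the email.
treated = [{"churned": False}, {"churned": False}, {"churned": True}, {"churned": False}]
control = [{"churned": True}, {"churned": True}, {"churned": False}, {"churned": True}]
print(round(email_uplift(treated, control), 2))  # prints 0.5: emails appear to help
```

Wiring a check like this into a dashboard is what turns a deployed model into a feedback loop the business team can act on immediately.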
Prioritise Data Governance
Data governance is certainly not a new concept — as long as data has been collected, companies have needed some level of policy and oversight for its management. Yet until recently, it has largely remained in the background, as businesses weren’t using data at a scale that required data governance to be top of mind. In the last few years, and certainly in the face of 2020’s tumultuous turn of events, data governance has shot to the forefront of discussions both in the media and in the boardroom as businesses take their first steps towards Enterprise AI.
Recent increased government involvement in data privacy (e.g. GDPR and CCPA) has no doubt played a part, as has a magnified focus on AI risks and model maintenance in the face of the rapid development of machine learning. Companies are starting to realise that data governance has never been established in a way that can handle the massive shift toward democratised machine learning required in the age of AI, and that AI brings new governance requirements with it.
Data governance needs to be a collaboration between IT and business stakeholders. A traditional data governance program oversees a range of activities, including data security, reference and master data management, data quality, data architecture, and metadata management.
Those responsible for data governance will have expertise in data architecture, privacy, integration, and modeling. However, those on the information governance side should be business experts, understanding what the data is, where it comes from, how and why the data is valuable to the business, how the data can be used in different contexts, and how data should ultimately be used for optimised business benefit.
Embedding AI in the enterprise successfully is no small task. However, if businesses can work to understand their data at a more detailed level, including its constraints, they are well on their way to understanding what they can do with that data to develop new insights and transform the way they work.