AI governance adoption is leveling off – what it means for enterprises

Despite the need to maintain the integrity and security of data in enterprise artificial intelligence (AI) systems, an alarming number of organizations lack proper AI governance policies and tools to protect themselves from potential legal issues, O’Reilly Media researchers report.

Among respondents with AI products in production, the share whose organizations had a governance plan in place to oversee how projects are created, measured and observed (49%) was roughly the same as the share that didn’t (51%). Among respondents who were evaluating AI, relatively few (22%) had a governance plan.

‘Disturbing’ AI governance trend

“The large number of organizations lacking AI governance is disturbing,” Mike Loukides, VP of content strategy at O’Reilly and the report’s author, told VentureBeat. “While it’s easy to assume that AI governance isn’t necessary if you’re only doing some experiments and proof-of-concept projects, that’s dangerous. At some point, your proof-of-concept is likely to turn into an actual product, and then your governance efforts will be playing catch-up. 

“It’s even more dangerous when you’re relying on AI applications in production,” Loukides said. “Without formalizing some kind of AI governance, you’re less likely to know when models are becoming stale, when results are biased or when data has been collected improperly.”
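The report doesn’t prescribe any particular tooling, but the kind of staleness check Loukides describes is often implemented with a drift metric such as the population stability index (PSI), which compares a model input’s current distribution against the distribution it was trained on. The function and the 0.2 alert threshold below are illustrative assumptions, not the report’s method:

```python
import math
from typing import Sequence

def population_stability_index(expected: Sequence[float],
                               actual: Sequence[float],
                               bins: int = 10) -> float:
    """Compare a baseline distribution against recent data.

    A PSI near 0 means the distributions match; values above
    roughly 0.2 are commonly treated as significant drift and a
    cue to retrain or investigate (an illustrative convention).
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bucket_fractions(values: Sequence[float]) -> list:
        counts = [0] * bins
        for v in values:
            # Clamp so the maximum value lands in the last bucket.
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Floor each fraction to avoid log(0) for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    e = bucket_fractions(expected)
    a = bucket_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A governance job might run this nightly against each monitored feature and raise an alert when the index crosses the chosen threshold, giving teams the early warning about stale models and improperly collected data that informal, ad hoc review tends to miss.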

This year’s survey results showed that the percentage of organizations with revenue-bearing AI products in production has remained constant over the last two years, at a modest 26%, indicating that AI has passed to the next stage of the hype cycle, Loukides said.

“For years, AI has been the focus of the technology world,” Loukides said. “Now that the hype has died down, it’s time for AI to prove that it can deliver real value, whether that’s cost savings, increased productivity for businesses or building applications that can generate real value to human lives. This will no doubt require practitioners to develop better ways to collaborate between AI systems and humans, and more sophisticated methods for training AI models that can get around the biases and stereotypes that plague human decision-making.”

As for evaluating risks, unexpected outcomes (68%) remained the biggest focus for mature organizations, followed closely by model interpretability and model degradation (both 61%). Privacy (54%), fairness (51%) and security (42%) — issues that may have a direct impact on individuals — were among the risks least cited by organizations. While there may be AI applications where privacy and fairness aren’t issues, companies with AI practices need to place a higher priority on the human impact of AI, Loukides said.

“While AI adoption is slowing, it is certainly not stalling,” O’Reilly president Laura Baldwin said in a media advisory. “There are significant venture capital investments being made in the AI space, with 20% of all funds going to AI companies. What this likely means is that AI growth is experiencing a short-term plateau, but these investments will pay off later in the decade. 

“In the meantime, businesses must not lose sight of the purpose of AI: to make people’s lives better. The AI community must take the steps needed to create applications that generate real human value, or we risk heading into a period of reduced funding in artificial intelligence.”

Other findings include the following: 

  • Among respondents with mature practices, TensorFlow and scikit-learn (both 63%) are the most used AI tools, followed by PyTorch (50%), Keras (40%) and AWS SageMaker (26%).
  • Significantly more organizations with mature practices are using AutoML to automatically generate models. Sixty-seven percent of organizations are using AutoML tools, compared with 49% of organizations the prior year, representing a 37% increase.
  • Among mature practices, there was also a 20% increase in the use of automated tools for deployment and monitoring. The most popular tools in use are MLflow (26%), Kubeflow (21%), and TensorFlow Extended (TFX, 15%). 
  • Similar to the results of the previous two years, the biggest bottlenecks to AI adoption are a lack of skilled people and a lack of data or data quality issues (both at 20%). However, organizations with mature practices were more likely to see issues with data, a hallmark of experience.
  • Both organizations with mature practices and those currently evaluating AI were in agreement on the lack of skilled people being a significant barrier to AI adoption, though only 7% of the respondents in each group listed this as the most important bottleneck. 
  • Organizations with mature practices saw the most significant skills gaps in these areas: ML modeling and data science (45%), data engineering (43%) and maintaining a set of business use cases (40%). 
  • The retail and financial services industries have the highest percentage of mature practices (37% and 35%, respectively). Education and government (both 9%) have the lowest percentage but the highest number of respondents who are considering AI (46% and 50%, respectively).

The complete report is now available for download here.
