Major security, privacy and ethical blind spots in AI development

The majority of organisations neglect due diligence during the artificial intelligence development phase, struggling with data issues, skill shortages and cultural resistance.

  • Tuesday, 15th October 2019, by Phil Alsop

O’Reilly, the premier source for insight-driven learning on technology and business, has revealed the results of its 2019 ‘AI Adoption in the Enterprise’ survey. The report shows that security, privacy and ethics are low-priority issues for developers when modelling their machine learning (ML) solutions.
 
Security is the most serious blind spot. Nearly three-quarters (73 per cent) of respondents indicated they don’t check for security vulnerabilities during model building. More than half (59 per cent) of organisations also don’t consider fairness, bias or ethical issues during ML development. Privacy is similarly neglected, with only 35 per cent checking for issues during model building and deployment.
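To illustrate the kind of fairness check the survey says most teams skip, here is a minimal sketch that compares a model's positive-prediction rate across a sensitive attribute (a basic demographic-parity test). All data, group labels and function names below are hypothetical, for illustration only; real audits would use richer metrics and tooling.

```python
# Hypothetical demographic-parity check: does the model approve one
# group far more often than another? (Illustrative data only.)

def positive_rate(predictions, groups, group):
    """Share of positive predictions within one group."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: 1 = approved, 0 = denied, two groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A gap near zero suggests similar treatment of the groups; a large gap, as in this toy example, is the sort of red flag that goes unnoticed when fairness is not checked during model building.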
 
Instead, the majority of developmental resources are focused on ensuring artificial intelligence (AI) projects are accurate and successful. The majority (55 per cent) of developers mitigate against unexpected outcomes or predictions, but this still leaves a large number who don’t. Furthermore, 16 per cent of respondents don’t check for any risks at all during development.
 
This lack of due diligence likely stems from numerous internal challenges, the greatest of which is cultural resistance, cited by 23 per cent of respondents.
 
The research also shows 19 per cent of organisations struggle to adopt AI due to a lack of data and data quality issues, as well as the absence of necessary skills for development. The most chronic skills shortages by far were centred around ML modelling and data science (57 per cent). To make progress in the areas of security, privacy and ethics, organisations urgently need to address these talent shortages.
 
“AI maturity and usage has grown exponentially in the last year. However, considerable hurdles remain that keep it from reaching critical mass,” said Ben Lorica, chief data scientist, O’Reilly. 
 
“As AI and ML become increasingly automated, it’s paramount organisations invest the necessary time and resources to get security and ethics right. To do this, enterprises need the right talent and the best data. Closing the skills gap and taking another look at data quality should be their top priorities in the coming year.”
 
Other key findings include:
  • The overwhelming majority of organisations (81 per cent) have started down the route of AI adoption. Most are in the evaluation or proof of concept stage (54 per cent), while 27 per cent have revenue-bearing AI projects in production. 
  • A significant minority (19 per cent) of companies have not started any AI projects.
  • Machine learning has emerged as the most popular form of AI used by enterprises. Nearly two-thirds (63 per cent) use supervised learning solutions while 55 per cent are using deep learning technology. Model-based methods are used by almost half (48 per cent) of respondents.
  • AI is most likely to be used in research and development (R&D) departments (50 per cent), customer service (34 per cent) and IT (33 per cent). Legal functions have seen the least innovation, with only 5 per cent making use of AI technologies.
  • TensorFlow (55 per cent) and scikit-learn (48 per cent) are the most popular AI tools in use today. 
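The supervised-learning workflow the survey found most common can be sketched in a few lines with scikit-learn, one of the two most popular tools reported. This is a generic illustration using a standard bundled dataset, not any respondent's actual pipeline.

```python
# Minimal supervised-learning sketch with scikit-learn:
# split data, fit a classifier, evaluate held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {accuracy:.2f}")
```

Note that nothing in this standard loop checks for security vulnerabilities, privacy leakage or bias; those checks are the separate steps the report finds most organisations omit.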
