Worldwide, AI adoption has increased exponentially during the COVID-19 pandemic. Yet several challenges must be overcome before AI becomes properly ‘mainstream’.
Image: The Rate of AI adoption; Source: KPMG
The Immortal Dictator?
That the adoption of Artificial Intelligence has skyrocketed during the pandemic will not come as a surprise: most government decision-makers and business leaders concur that AI is already ‘at least moderately to fully functional’ in most of their organisational processes. In fact, a recent survey by consulting giant KPMG, covering 950 business/IT decision-makers (at organisations with over $1 billion in revenue), found that AI technologies are moderately or fully employed by 93% of firms in industrial manufacturing, over 80% each in financial services, technology and retail, 77% in life sciences, 67% in healthcare and 61% in the government sector.
Yet another finding from the very same survey is intriguing: around 40% of executives at major corporations are now concerned about AI adoption ‘moving too fast’, citing an urgent need for improved (and increased) AI regulation. According to tech research outlet VentureBeat, “An overwhelming percentage of respondents also agreed that governments should be involved in regulating AI technology in the industrial manufacturing (94%), retail (87%), financial services (86%), life sciences (86%), technology (86%), health care (84%), and government (82%) sectors.”
According to KPMG, the fact that most business leaders — generally averse to oversight — are calling for greater regulation may be a sign that they are looking for greater clarity when it comes to AI processes. Although most are familiar with the massive theoretical potential of AI, few want to risk being held accountable for its business outcomes by later regulations. This caution makes sense, especially given grim premonitions from technology leaders such as Elon Musk that AI could eventually become ‘an immortal dictator from which we would never escape’.
Generally speaking, although most organisations around the world are becoming more adept at successfully deploying AI, there is massive room for improvement. Data science teams across the world are still trying to find their best machine learning operations (MLOps) practices, aiming not only to train models faster but also to deploy inference engines with greater consistency.
Additionally, with newer data sources coming to the fore every other day and business conditions in a constant state of flux, retraining large AI models has become a major challenge for businesses. This is because most businesses have not yet achieved the level of AI maturity needed to retrain and deploy models at scale — most still need to integrate model updates into their application development and deployment processes, which remain widely built on existing DevOps principles.
Issues such as these, coupled with several others, often lead to sub-optimal usage of AI technologies. For example, most organisations don’t yet have adequate governance mechanisms in place to detect algorithmic bias or the drift it creates. According to VentureBeat, “Most organizations are also still wrestling with AI explainability. A set of machine learning algorithms will generate slightly different results on different days as they become more familiar with a set of data. But it’s hard to distinguish between algorithms that are simply learning and what might be signs of drift caused by outlier data.”
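To make the drift problem above concrete, one common monitoring technique is to compare the distribution of a model input (or score) in production against the distribution it was trained on. The sketch below is a minimal, illustrative implementation of the Population Stability Index (PSI) in plain Python — the function name, thresholds and sample data are assumptions for illustration, not part of any specific vendor's tooling.

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two numeric samples.

    Common rules of thumb: PSI < 0.1 suggests no significant drift,
    0.1-0.25 moderate drift, and > 0.25 significant drift.
    """
    lo, hi = min(reference), max(reference)
    # Bin edges derived from the reference (training-time) distribution.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            # Index of the bin x falls into (values outside the
            # reference range land in the first or last bin).
            idx = sum(1 for e in edges if x > e)
            counts[idx] += 1
        # Floor empty bins at one observation to avoid log(0).
        return [max(c, 1) / len(sample) for c in counts]

    expected = proportions(reference)
    actual = proportions(current)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

random.seed(0)
train_scores = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training data
live_same = [random.gauss(0.0, 1.0) for _ in range(5000)]      # stable traffic
live_shifted = [random.gauss(1.0, 1.0) for _ in range(5000)]   # drifted traffic

print(round(psi(train_scores, live_same), 3))     # low PSI: no drift flagged
print(round(psi(train_scores, live_shifted), 3))  # high PSI: drift flagged
```

A governance process would run a check like this on a schedule and alert (or trigger retraining) when the index crosses an agreed threshold — which is precisely the kind of mechanism the survey suggests most organisations still lack.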
Hence, AI is currently being treated no differently from any other business risk that executives must manage. Regardless, given its very high stakes and enormous upside potential, one can rest assured that AI adoption will only rise. Furthermore, as issues such as model retraining at scale and algorithmic bias are tackled head-on, AI’s appeal is only set to grow. The call for improved regulation, however, is not one that will quieten anytime soon.