Data First, Code Second

Commercial AI adoption beyond the tech industry is still lagging behind expected levels. There are quite a few challenges, but none of them is insurmountable.

Artificial Intelligence, in spite of its vast potential, hasn’t really caught on with a majority of global firms. Of course, global tech giants such as Amazon, Google and Baidu – rich in both data and resources – have, in myriad ways, been making use of novel AI-based technologies. But for the $13-trillion annual projection for the global AI market to come true, several other sectors, including healthcare, manufacturing and agriculture, will need to catch up – and fast.

Why, then, has the rate of AI adoption been so slow? The Harvard Business Review opines:

“The playbook that these consumer internet companies use to build their AI systems — where a single one-size-fits-all AI system can serve massive numbers of users — won’t work for these other industries.

Instead, these legacy industries will need a large number of bespoke solutions that are adapted to their many diverse use cases. This doesn’t mean that AI won’t work for these industries, however. It just means they need to take a different approach.”


It’s this ‘different approach’ that has been the central problem. Although a large percentage of surveyed executives across sectors have shown great interest in adopting AI technologies, they are not sure how to proceed – or, more specifically, how exactly AI will be able to have an immediate and lasting impact on their businesses.

Challenges to AI

While the advantages of using AI may seem rather obvious for technology firms, they are much less so for traditionally non-tech-heavy firms. To bridge this gap, experts suggest, industries need to adopt a new data-centric approach, with careful attention to how (and what kind of) data can be useful to them, and can then be used to train their personalised AI models. The key is to start the programming process with the data itself – not just the code. The challenges to overcome, however, are a fair few:

  • Lack of adequate data: Unlike most internet-based consumer firms, where there are millions of data points for AI programs to learn from, data sets are usually much smaller in other industries. Esteemed technologist Andrew Ng notes: “For example, can you build an AI system that learns to detect a defective automotive component after seeing only 50 examples? Or to detect a rare disease after learning from just 100 diagnoses? Techniques built for 50 million data points don’t work when you have only 50 data points.”

Image courtesy: Gartner

  • Customisation costs: A major issue here stems from the shortage of AI talent, and hence the higher cost of hiring it. It is also directly related to the kind of revenues being generated: although the aggregate value of all AI projects in a non-tech enterprise may be rather high, each project is bespoke, so the marginal costs involved – in both capital and manpower – are much higher as well.
  • Gap between conception and production: There is often a major gap between the conceptualisation of an AI model in a lab and its deployment in a real-world scenario. As Andrew Ng writes, teams often celebrate a successful ‘proof of concept’, whereas deployment may still require over a year of further work.
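The small-data problem described above is commonly attacked not with bigger models but with label-preserving data augmentation (alongside transfer learning), which multiplies a handful of examples into a larger training set. The sketch below is purely illustrative – the data, function name and parameters are hypothetical, not drawn from the article – and shows how 50 labelled “component” images might be expanded before training a defect detector:

```python
import numpy as np

def augment_images(images, labels, noise_std=0.05, copies=4, seed=0):
    """Expand a tiny labelled image set with simple label-preserving transforms.

    images: array of shape (n, height, width), pixel values in [0, 1]
    Returns (images, labels) including the originals plus `copies` variants each.
    """
    rng = np.random.default_rng(seed)
    out_x, out_y = [images], [labels]
    for _ in range(copies):
        batch = images.copy()
        # Horizontally flip a random half of the batch
        mask = rng.random(len(batch)) < 0.5
        batch[mask] = batch[mask, :, ::-1]
        # Small Gaussian pixel noise varies the input but keeps a defect visible
        batch = np.clip(batch + rng.normal(0.0, noise_std, batch.shape), 0.0, 1.0)
        out_x.append(batch)
        out_y.append(labels)
    return np.concatenate(out_x), np.concatenate(out_y)

# 50 synthetic 32x32 "component" images with binary defect labels
x = np.random.default_rng(1).random((50, 32, 32))
y = np.tile([0, 1], 25)
x_aug, y_aug = augment_images(x, y)
print(x_aug.shape)  # (250, 32, 32): 50 originals + 4 augmented copies each
```

Augmentation of this kind is cheap, but it only stretches the information already present in the 50 examples; in practice it is paired with a model pre-trained on a larger, related data set.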

Hence, for AI systems to fully realise their potential, a systematic approach to solving these problems across industries is going to be crucial. A data-centric approach to AI, ably supported by tools for building, deploying and maintaining AI applications – a well-built machine learning operations (MLOps) platform, for example – may go a long way for fast-moving companies that want a head start over their competitors.

© 2024 Praxis. All rights reserved.