A Decade of AI: Part 2

If one technology has impacted more domains than any other in the last 10 years, it is AI. We continue our whirlwind tour of the captivating timeline of AI innovations.

In our previous episode, we saw how routine data crunching gradually evolved into Data Science, and how Machine Learning (ML) and Artificial Intelligence (AI) contributed to a total facelift of the domain. We also covered the formative years of AI between 1980 and 2010 and how various innovations in the second decade of the new century pushed the frontiers of AI research. 

Let’s continue with further AI innovations through the remaining years of the decade.  

2010 – 2020: The fascinating decade (continued)

2013

  • Google Glass Beta: Google introduced a beta test version of Google Glass – an eyewear display device with AR/AI capabilities, like facial recognition and translation. Though originally marketed as a consumer novelty, over time it has proved to be an excellent industrial tool for immersive hands-on training in simulated environments. 
  • Never Ending Image Learner (NEIL): An ambitious computer program from Carnegie Mellon University, NEIL continuously gathers information about images it finds on the internet. The objective was to teach a machine the common-sense relationships – which come naturally to humans – by analysing real-life images. 
  • Atlas: This humanoid robot gradually evolved to carry out several human activities, like opening and closing doors, climbing a ladder, driving, and operating a fire hose. Boston Dynamics developed it to perform search and rescue operations in hazardous environments.

2014

  • Generative adversarial networks (GAN): Ian Goodfellow and his colleagues introduced the GAN – a machine learning setup in which two neural networks compete against each other to produce better solutions. It is a powerful AI tool which opened the fabulous possibility of machines with creativity and imagination that could actually generate original content.
  • Google acquires DeepMind: Google purchased the UK-based AI research company DeepMind for an astronomical sum of US$500 million, making it clear for the first time that big tech companies considered AI development a serious investment. 
  • DeepFace from Facebook: As if in response to the Google purchase, Facebook researchers announced DeepFace – a neural network that could recognise faces with an accuracy rate of more than 97%.
  • Tesla Autopilot cars: Tesla Motors released Model S cars equipped with the AI-based Autopilot system. It could self-steer, brake, adjust speed to the prevailing limits in real time, and park on its own. 
  • Alexa: Amazon released its own virtual assistant. Originally embedded in the Echo series of smart speakers, Alexa responded to voice commands by answering questions. On request, it could also play music available over the internet, make to-do lists, set alarms, stream podcasts, and provide weather forecasts and other real-time news bulletins. Alexa can also be used as a home automation hub to control smart devices.
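The adversarial idea behind GANs can be sketched in a few lines of numpy: a toy generator and discriminator, each a single affine function, compete over a one-dimensional Gaussian. This is a minimal illustration only – the architecture, learning rates, and data here are all invented for the example, not any production GAN:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data comes from N(3, 1). The generator g(z) = w*z + b tries to
# imitate it; the discriminator D(x) = sigmoid(a*x + c) tries to tell
# real samples from generated ones. Each update improves one player
# against the other -- the "adversarial" part.
w, b = 1.0, 0.0            # generator parameters
a, c = 0.1, 0.0            # discriminator parameters
lr_d, lr_g, batch = 0.1, 0.01, 64

for _ in range(3000):
    real = rng.normal(3.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator step: minimise -log D(real) - log(1 - D(fake)).
    grad_s = np.concatenate([sigmoid(a * real + c) - 1.0,
                             sigmoid(a * fake + c)])   # dLoss/d(pre-activation)
    xs = np.concatenate([real, fake])
    a -= lr_d * np.mean(grad_s * xs)
    c -= lr_d * np.mean(grad_s)

    # Generator step: minimise -log D(fake), against the updated discriminator.
    grad_s = (sigmoid(a * fake + c) - 1.0) * a         # chain rule through D
    w -= lr_g * np.mean(grad_s * z)
    b -= lr_g * np.mean(grad_s)

print(f"generated samples now centre near b = {b:.2f}")  # b should drift toward 3
```

The discriminator is deliberately trained with a larger learning rate than the generator: letting the critic stay ahead keeps this toy example stable, a trick that echoes how full-scale GANs are often trained.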

2015

  • OpenAI: Tesla CEO Elon Musk co-founded the non-profit organisation OpenAI for advanced artificial intelligence research, especially in the deep reinforcement learning domain.
  • Google driverless car: Google got into auto-driving mode by demonstrating its self-driving car project – later spun off as the company Waymo – on public roads.
  • Machines beat humans at image recognition: Both Microsoft's and Google's AI systems proved more accurate at image recognition than humans at the sixth ImageNet Large Scale Visual Recognition Challenge. Deep learning algorithms – built on artificial neural networks loosely modelled on the human brain – enabled these machines to identify objects across more than 1,000 categories. AI systems could now first recognise something and then decide on an appropriate action based on that recognition, just as we do.
  • TensorFlow: Google open-sourced its deep learning framework TensorFlow – an end-to-end open-source platform for machine learning. With a comprehensive, flexible ecosystem of tools, libraries, and community resources, TensorFlow enabled frontier research in ML. 

2016

  • AlphaGo: Google’s DeepMind-driven AI gaming system “AlphaGo” defeated Go world champion Lee Sedol four games to one. Go is an extremely complicated game of strategy, many times more complex than chess, which only human intelligence was expected to master. This was the first time a machine had beaten a top-ranked professional Go player.
  • Deepfakes: The Face2Face software created the world’s first deepfake videos. Leveraging AI/ML techniques to create and doctor audio or video footage, deepfakes have been at the centre of controversy ever since.
  • Sophia: Hong Kong-based company Hanson Robotics developed Sophia – a social humanoid robot. It could imitate human gestures and expressions, answer selected questions, and conduct simple conversations on pre-defined topics.
  • Tensor Processing Units (TPU): Created by Google specifically for neural network machine learning, tensor processing units are AI accelerator application-specific integrated circuits (ASICs). They enabled radical innovations at Google.
  • Google Assistant: Google harnessed its natural language processing algorithm to develop an AI-powered virtual assistant capable of two-way conversation. A competitor to Alexa and Siri, Google Assistant could search the Internet, schedule events, set alarms, change hardware settings on the user’s device, and furnish Google account information.

2017

  • Transformer ML model: The deep machine learning model known as the Transformer was developed. Now a key tool for natural language processing (NLP), the Transformer is designed to handle ordered sequences of data – like recurrent neural networks (RNNs), but using an attention mechanism instead of recurrence. It is useful for tasks like machine translation and text summarisation.
  • Open Neural Network Exchange (ONNX): Facebook and Microsoft collaborated with AWS, Nvidia, Qualcomm, Intel, and Huawei to develop the Open Neural Network Exchange (ONNX). It is an open format for representing deep learning models and allows models to be trained in one framework and transferred to another.
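The sequence-handling mechanism at the heart of the Transformer is scaled dot-product attention: every query position computes a weighted mix of all value vectors, with weights derived from query–key similarity. A minimal numpy sketch (the function and variable names here are illustrative, not from any library):

```python
import numpy as np

def softmax(x):
    # Stable softmax along the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query attends to all keys; its weights sum to 1."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (n_q, n_k) similarity matrix
    weights = softmax(scores)                # attention distribution per query
    return weights @ V, weights              # weighted mix of value vectors

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 4))   # 2 queries of dimension 4
K = rng.normal(size=(3, 4))   # 3 keys of dimension 4
V = rng.normal(size=(3, 4))   # 3 value vectors
out, wts = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one output vector per query
```

Because every position looks at every other position in one matrix multiplication, the sequence can be processed in parallel – the property that lets Transformers scale where RNNs, which must step through the sequence in order, cannot.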

2018

  • Painting by AI sold at Christie’s: An AI-created portrait generated with generative adversarial network (GAN) technology sold at a Christie’s auction for over US$400,000. It was produced by a Paris-based art collective using a two-part algorithm that learnt its painting style by analysing 15,000 portraits painted between the 14th and 20th centuries. 
  • BERT: Developed by Google, this was the first bidirectional, unsupervised language representation, and it can be applied to a variety of natural language tasks using transfer learning.
  • AlphaFold: This DeepMind algorithm uses a wide range of existing genomic data records to predict protein structure. The 3D models of proteins generated by AlphaFold are far more accurate than those of any previous attempt.

2019

  • Explainable AI: As AI was being employed to make crucial decisions, the black-box algorithm approach gave way to systems whose decision-making flow is transparent – so that humans can quickly intervene if there is a malfunction at any step. This is gradually becoming the new standard among companies developing machine learning models.
  • Lung cancer diagnosis by AI: In a Google initiative in collaboration with Northwestern Medicine, an AI system powered by deep learning analysed computed tomography (CT) scans and correctly diagnosed lung cancer, proving more accurate than human radiologists. 
  • Robotic hand solves Rubik’s Cube: Dactyl, a robotic hand, was trained by OpenAI to solve the Rubik’s Cube. Although trained in a simulated environment, it successfully transferred that knowledge to the physical world. OpenAI used a technique called automatic domain randomisation to improve its problem-solving capabilities.
  • TensorFlow 2.0 released: This upgraded version of TensorFlow brought updates such as eager execution by default, intuitive higher-level APIs, dynamic graphs, and flexible model building across platforms. 
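One simple, model-agnostic way to make a model's decisions inspectable – in the spirit of explainable AI – is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. A toy sketch, with invented data and a linear model standing in for the black box:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: the target depends strongly on feature 0 and not at all on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=500)

# "Model": an ordinary least-squares fit, standing in for any black-box model.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline = np.mean((X @ coef - y) ** 2)

# Permutation importance: breaking a feature the model relies on
# should sharply increase its prediction error.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((Xp @ coef - y) ** 2) - baseline)

print(importance)  # feature 0 matters far more than feature 1
```

The technique needs no access to the model's internals – only predictions – which is why it works equally well on a linear fit or a deep network, and why it became a common first step toward the transparency described above.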

2020

  • Fighting the virus: AI/ML-based tools played a prominent role in predicting the spread of the COVID-19 pandemic, as well as in performing accelerated simulations for virtual drug screening that helped identify potential vaccine candidates.
  • Automation for social distancing: Robotics played a huge role in maintaining social distancing as more and more tasks and services were automated – revolutionising domains as diverse as food delivery, housekeeping, healthcare, supply chain, manufacturing, travel & hospitality, and retail. 
  • Regulatory and ethical awareness: As AI/ML keeps changing the fabric of society, ethics in AI, bias in AI, and the democratisation of AI received serious consideration. There has been a global push to develop effective regulatory frameworks for AI – and people in general are becoming more aware of these issues.

What next?

In their quest to make devices more self-sufficient, data scientists keep refining machine learning algorithms to produce more intelligent and autonomous AI. Experts predict that AI will ultimately be able to understand human feelings and interact with people seamlessly. We are set to witness an age of extensive automation, which will revolutionise industries like healthcare, finance, education, transportation, and defence.

That brave new world can be made possible only through Data Science.


Image by Gerd Altmann from Pixabay
