Less-than-One-Shot no more a Long Shot

Ground-breaking new AI technique set to revolutionise the way models are trained

It is rare in the current landscape to find a viable advance in artificial intelligence that is not based, at least in part, on machine learning. However, a recent research paper from the University of Waterloo in Ontario, Canada may change how AI learns, and could prove a decisive development in how models are built and trained.

How machine learning ‘learns’ is essentially simple. Fed large volumes of training data, a system processes thousands of labelled examples and gradually ‘learns’ the patterns within them. The trained model is then applied to other tasks, such as imputing missing data, forecasting and prediction. This is, however, an inarguably tedious process, given the quantity of training data required in the first place.
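For context, here is a minimal sketch of that conventional, data-hungry workflow in Python. The dataset and model (scikit-learn's bundled digits set and a logistic regression) are illustrative choices, not drawn from the Waterloo paper:

```python
# A minimal example of the conventional, data-hungry approach:
# the model performs well only because it sees well over a
# thousand labelled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # ~1,800 labelled digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```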

The human brain, however, is different. Imagine an object o being shown to a person once. Once they have registered this piece of information, it is highly likely that from that point onwards, whenever the person sees the same object again, they can recognise it as the same object, o. Machine learning research is now trying to mimic just this, through a process called LO-shot learning (less-than-one-shot learning).

LO-shot, High Return

Make no mistake, LO-shot learning could prove to be an absolute game-changer. It could allow machines to learn far more rapidly than they do currently, and in much the same manner as humans. This is useful for a wide variety of reasons, not least in scenarios where large datasets simply do not exist for training.

According to Ilia Sucholutsky, a researcher involved in the project, LO-shot learning “theoretically explores the smallest possible number of samples that are needed to train machine learning models”, which makes it an extremely promising, potentially transformative line of work. As Sucholutsky explains:

“We found that models can actually learn to recognize more classes than the number of training examples they are given. We initially noticed this result empirically when working on our previous paper on soft-label dataset distillation, a method for generating tiny synthetic datasets that train models to the same performance as if they were trained on the original dataset. We found that we could train neural nets to recognize all 10 digits — zero to nine — after being trained on just five synthetic examples, less than one per digit. … We were really surprised by this, and it’s what led to us working on this LO-shot learning paper to try and theoretically understand what was going on.”
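To make the idea of soft labels concrete, below is a hypothetical PyTorch sketch of training a small network on five synthetic examples whose labels are probability distributions over all ten digit classes. The random inputs and label distributions are placeholders; the paper's distilled examples are carefully optimised rather than random:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_examples, n_classes, dim = 5, 10, 64

# Placeholder "synthetic" inputs; the paper's distilled examples are
# optimised, not random.
X = torch.randn(n_examples, dim)

# Each example carries a SOFT label: a probability distribution over
# all ten classes, rather than a single hard class index.
soft_labels = torch.softmax(torch.randn(n_examples, n_classes), dim=1)

model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    log_probs = torch.log_softmax(model(X), dim=1)
    # Cross-entropy against soft targets: -sum_k p_k * log q_k
    loss = -(soft_labels * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final soft-label loss: {loss.item():.4f}")
```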

Although the project is still at a nascent stage, it is seen as one with massive upside across several diverse sectors, such as medical imaging, volcanology and cybersecurity, all of which have recently come to rely heavily on artificial intelligence to carry out their tasks. The next step is to develop and optimise algorithms that can perform LO-shot learning in practice.
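One such algorithm analysed by the Waterloo researchers is a soft-label variant of k-nearest neighbours, in which a handful of prototype points carrying distributional labels can separate more classes than there are prototypes. The following simplified, distance-weighted numpy sketch illustrates the idea; the prototype positions and label values are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Two prototypes on a line, each carrying a label DISTRIBUTION over
# three classes (values chosen purely for illustration).
prototypes = np.array([[0.0, 0.0],
                       [1.0, 0.0]])
soft_labels = np.array([[0.55, 0.05, 0.40],
                        [0.05, 0.55, 0.40]])

def predict(x):
    # Distance-weighted soft vote: closer prototypes contribute more of
    # their label distribution; the class with the largest mass wins.
    dists = np.linalg.norm(prototypes - x, axis=1) + 1e-9
    weights = 1.0 / dists
    scores = (weights[:, None] * soft_labels).sum(axis=0)
    return int(scores.argmax())

for point in [(-0.5, 0.0), (1.5, 0.0), (0.5, 0.0)]:
    print(point, "-> class", predict(np.array(point)))
```

With these two examples, the line splits into three decision regions: points near the left prototype fall to class 0, points near the right to class 1, and the midpoint, where the two label distributions blend evenly, to class 2. That is three classes learned from two training examples, the essence of less-than-one-shot learning.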
