Breaking Down Barriers to Growth

Israeli start-up DeepCube is efficiently tapping the full potential of Deep Learning through software

Are AI-based models really sustainable in terms of cost? Estimates from recent research are telling: (1) the University of Washington’s Grover fake-news detection model cost US$25,000 to train over about two weeks; (2) OpenAI reportedly spent a whopping US$12 million to train its GPT-3 language model; (3) Google spent an estimated US$6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

No doubt it is exhilarating to see AI researchers pushing the performance of cutting-edge models to new heights. But the costs of doing so are rising at a dizzying rate – and in many cases models are approaching the limits of available computational capacity. This is a genuine cause for concern. Perhaps auxiliary software-based solutions can provide an answer.

The end of the last AI winter

Historically, the hype around AI has almost always travelled in cycles of boom and bust. The highs of the 1970s were followed by prolonged stagnation in research, which fed marked pessimism in the media and, in turn, a considerable reduction in funding throughout the 1980s. This was followed by a meteoric rise during the dot-com bubble and a subsequent fall when the bubble burst in the early 2000s.

As of 2020, although the hype around artificial intelligence and its prospects may well be peaking, several research outcomes (including very recent work from MIT) suggest that its capabilities will be constricted in the near future, constrained by the size and speed of algorithms and the need for costly hardware.

It goes without saying that today’s deep learning models have set new performance benchmarks across a wide range of tasks. Their prodigious appetite for computing power, however, imposes a limit of its own: how far can deep learning improve in its current form before it runs up against computational constraints? While it is true that the explosion of computing power over the last two decades has almost certainly ended the cyclical recurrence of AI winters, research shows that this growth trajectory will soon be arrested – especially in an era when improvements in hardware performance are slowing as well.

The likeliest impact of these computational limits will be to either (1) force deep learning algorithms towards less computationally intensive methods of improvement, or (2) push machine learning towards techniques more efficient than deep learning.

Enter the DeepCube

Israel-based start-up DeepCube is set to change the computational landscape with a first-of-its-kind “software-based inference accelerator”, which it claims drastically improves deep learning performance on existing hardware. This makes it more efficient to deploy deep learning models on intelligent edge devices – indeed, DeepCube claims to be the ‘only’ efficient technology on that front. The software is designed to run on any type of hardware, including GPUs, processors and AI accelerators, claiming a 10x improvement in speed along with a substantial reduction in memory.

DeepCube works by producing considerably more lightweight models, irrespective of whether the underlying model uses a convolutional neural network (CNN) or a recurrent neural network (RNN). This is achieved through proprietary automated techniques designed by DeepCube that are highly optimised for “running sparse deep learning models for inference”. As a result, there is “dramatic speedup and memory reduction on any existing hardware.”
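DeepCube’s techniques are proprietary, but the general idea behind sparse inference can be illustrated with a standard technique: magnitude pruning, which zeroes out a model’s smallest weights so the resulting sparse matrices take less memory and compute to run. The sketch below (with NumPy, using a hypothetical `magnitude_prune` helper; it is not DeepCube’s method) shows how a dense weight matrix can be made 90% sparse:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the matrix becomes zero."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Find the k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# A dense layer's 256x256 weight matrix, pruned to ~90% sparsity:
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
w_sparse = magnitude_prune(w, 0.9)
print(f"fraction of zero weights: {np.mean(w_sparse == 0):.3f}")
```

In practice the zeros are then stored in a compressed sparse format (e.g. CSR), which is where the memory reduction comes from; the speedup comes from skipping multiplications by zero during inference.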

Usability: Data centres, Semiconductors and the Edge

Most global data centres training large deep learning models require large amounts of memory and dedicated hardware (e.g. CPUs, GPUs, edge chips). Consequently, most deep learning deployment has been limited to the Cloud, where the attached costs and computational requirements remain massive. To enable decentralised processing away from the Cloud while maintaining efficiency, DeepCube provides a solution that allows deep learning to be deployed efficiently on edge devices.

With most data centres also opting to replace CPUs with GPUs for processing, CPU providers can stay competitive simply by offering a software update on current hardware. The resulting performance is directly comparable to that of GPUs – and at a fraction of the original runtime.

Leading consultancy McKinsey & Company sums up the situation succinctly in a recent report:

“The AI and (deep learning) revolution gives the semiconductor industry the greatest opportunity to generate the value that it has had in decades. Hardware can be the differentiator that determines whether leading-edge applications reach the market and grab attention. As AI advances, hardware requirements will shift for computing, memory, storage, and networking—and that will translate into different demand patterns. The best semiconductor companies will understand these trends and pursue innovations that help take AI hardware to a new level.”

With DeepCube’s applications covering the entire AI deployment market – including healthcare, retail, finance and government, among others – software-based accelerators look well placed to break down the barriers to growth in the deep learning industry. DeepCube is a big step in the right direction.
