…And let there be Light

The age of Photonic Tensors is now

As Artificial Intelligence and machine learning technologies advance worldwide, there is a growing need to strengthen computational hardware to handle the increased complexity of high-level neural intelligence. The underlying principle is simple: the smarter the task, the more complex the data; and the more complex the data, the more powerful the processor needs to be. The ongoing trend suggests a move towards optical data transmission in place of the current electricity-based paradigms, which suffer from slow data transfer between processor and memory.

The Electronic Tensor-Core Processing Unit

Over the last few years, as powerful parallel computation became essential to data science and deep learning, tensor cores grew drastically in popularity for workload management. This was primarily because they were found to be almost three times more effective than traditional Graphics Processing Unit (GPU) cores, aided by stronger signals and increased energy efficiency. They essentially allow big-data problems to be analysed more quickly, and deep learning systems and neural networks to process data more efficiently.

Tensor cores are "programmable matrix-multiply-and-accumulate units" that operate on multi-dimensional data structures (tensors, whose dimensionality is given by their 'tensor rank') and deliver faster training and inference for neural networks. What sets them apart from the preceding CUDA cores is that they "enable AI programmers to use mixed-precision to achieve higher throughput without sacrificing accuracy."
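The mixed-precision idea above can be sketched in a few lines of Python: inputs are stored at reduced precision, while the running sum is accumulated at full precision. The function names (`quantize`, `mixed_precision_dot`) are illustrative only, not a real tensor-core API.

```python
# Minimal sketch of mixed-precision multiply-and-accumulate:
# inputs are rounded to low precision, the accumulator stays in full float.

def quantize(x, bits):
    """Round x to a fixed number of fractional bits (low-precision storage)."""
    scale = 2 ** bits
    return round(x * scale) / scale

def mixed_precision_dot(a, b, bits=8):
    """Multiply low-precision inputs, accumulate in full precision."""
    acc = 0.0  # high-precision accumulator preserves accuracy
    for x, y in zip(a, b):
        acc += quantize(x, bits) * quantize(y, bits)
    return acc

a = [0.1234567, 0.7654321, 0.5555555]
b = [0.9, 0.8, 0.7]
print(mixed_precision_dot(a, b))          # mixed-precision result
print(sum(x * y for x, y in zip(a, b)))   # full-precision reference
```

Despite the coarse 8-bit inputs, the full-precision accumulator keeps the result close to the reference dot product, which is the essence of the throughput-without-accuracy-loss trade-off.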

The first Tensor Processing Unit (TPU) for machine learning was developed by Google and launched in 2016 as an AI-accelerator application-specific integrated circuit (ASIC). It was designed to work with Google's TensorFlow software, an end-to-end platform for building and deploying machine learning models. TensorFlow is, in effect, a do-it-yourself software package, for both beginners and experts, for creating machine learning models for desktop, mobile, web, and cloud. The tensor-core revolution, however, accelerated especially after Nvidia launched its Volta architecture and marketed it as the AI engine of the future.

TPUs were famously used in DeepMind's AlphaGo, "the first computer program to defeat a professional human Go player – a landmark achievement that experts believe was a decade ahead of its time". Beyond this, and AI chess engines like AlphaZero, Google has also used TPUs elsewhere, such as in Street View text processing and Google Photos image processing. Most notably, they are crucial in executing RankBrain, the machine-learning component of Google's search ranking system.

Based on recent research at George Washington University (GWU), however, it seems the future of tensor processing units is set for a radical change.

The Birth of the Photonic Tensor Core for Machine Learning

Research from GWU reveals "several synergistic physical properties" that could improve functionality across the architectures required for deep learning. While quite a few photonic neural network designs were explored, the integrated 4-bit photonics-based tensor core was found to outperform an electrical tensor core by roughly one order of magnitude when processing electrical data, and by two to three orders of magnitude when processing optical data, for comparable chip areas.
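To give a feel for what "4-bit" means in this context, here is an illustrative Python sketch (not taken from the paper): a 4-bit weight can take only 2**4 = 16 distinct levels, so every value must be snapped to the nearest representable level. The `quantize_4bit` name and the [-1, 1] range are assumptions for the example.

```python
# Sketch of 4-bit weight resolution: each value in [lo, hi] is mapped
# onto one of 16 evenly spaced levels, then clamped to the valid range.

def quantize_4bit(x, lo=-1.0, hi=1.0):
    """Snap x to the nearest of the 16 levels a 4-bit weight can encode."""
    levels = 2 ** 4
    step = (hi - lo) / (levels - 1)
    idx = round((x - lo) / step)
    idx = max(0, min(levels - 1, idx))  # clamp out-of-range inputs
    return lo + idx * step

weights = [-0.83, -0.2, 0.05, 0.41, 0.99]
quantized = [quantize_4bit(w) for w in weights]
errors = [abs(w - q) for w, q in zip(weights, quantized)]
print(quantized)
print(max(errors))  # bounded by half a step, (hi - lo) / 30
```

The rounding error is bounded by half the level spacing, which is why 4-bit precision can still be serviceable for inference while enabling the large speed and efficiency gains the researchers report.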

Image source: 'Photonic tensor cores for machine learning', by Mario Miscuglio and Volker J. Sorger, Applied Physics Reviews 7, 031404 (2020).

Given the development of new phase-change materials (PCMs) and of new photonic memory chips, engines based on 'all-optical' photonic tensor units are not far away. These would significantly speed up intelligent tasks in AI and machine learning without requiring conventional electro-optic conversions or external memory access. Since photonic TPUs can process data two to three orders of magnitude faster than traditional GPUs (by filtering and regulating the data traffic headed for cloud systems and data centres), they would accelerate optical data processing considerably.

Following this development, researchers are now looking to use photonic TPUs to emulate brain-like functionality in a variety of AI applications. The future for photonics-specialised processors looks especially bright, as they have the potential to augment both current electronic systems and upcoming network-edge technologies in the 5G spectrum – and beyond.
