“We see what we want to see” – or so the saying goes. It is not entirely untrue. Out of the vast display of visual information in front of us at every moment, our eyes are trained to pick out only what is relevant to the situation at hand and pass it on to the brain. Such pre-processing cuts down on information overload, making visual processing faster – almost instantaneous.
This is where computer vision has always proved inadequate. Although essential for AI applications such as smart sensors, industrial robots and driverless cars, current image recognition technology – the backbone of computer vision – captures all the visual data within its range and analyses everything to determine what it sees. Naturally, this means crunching a huge amount of data, which demands considerable computing power and slows things down in the process. Such an artificial eye could never match the speed of an animal eye, which works in real time.
Scientists at the Institute of Photonics in Vienna, Austria, may have cracked the problem. By merging electronic light sensors with a neural network on a tiny chip, they have designed a unique artificial eye that can process visual inputs within nanoseconds, far faster than any existing image sensor. The enhanced speed allows images to be captured and processed simultaneously, delivering real-time image recognition while consuming much less power.
Made of a sheet of tungsten diselenide – an inorganic compound that is a very stable semiconductor – the chip is barely a few atoms thick. It is etched with light-sensing diodes that are wired up to form a neural network. Exploiting the photosensitive nature of the diodes, the network can be trained to classify visual information quickly and in real time. The set-up is still rudimentary, but the chip can already handle several standard machine-learning tasks.
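The core idea – that the light sensor itself acts as a trainable neural network, with each diode's response strength serving as a weight – can be sketched in software. The toy model below is an illustrative simulation only, not the authors' device physics: it treats a hypothetical 3×3 diode array as a single-layer perceptron and trains it to tell a left-edge light pattern from a right-edge one. All names, pattern shapes and learning-rule details here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3x3 light patterns: class 0 lights the left column,
# class 1 lights the right column, plus a little sensor noise.
def make_pattern(kind, noise=0.1):
    img = np.zeros((3, 3))
    img[:, 0 if kind == 0 else 2] = 1.0
    return (img + noise * rng.standard_normal((3, 3))).ravel()

# In this sketch, each diode's "responsivity" plays the role of a
# trainable weight, so the sensor array itself computes the weighted
# sum -- classification happens as the image is captured.
w = 0.1 * rng.standard_normal(9)
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0

# Simple perceptron training rule: nudge each diode's responsivity
# whenever the array misclassifies a pattern.
for _ in range(200):
    kind = int(rng.integers(2))
    x = make_pattern(kind)
    err = kind - predict(x)
    w += 0.1 * err * x
    b += 0.1 * err

# After training, the simulated "sensor" classifies incoming light
# patterns directly, with no separate processing stage.
correct = sum(predict(make_pattern(k)) == k for k in [0, 1] * 50)
print(f"accuracy: {correct / 100:.2f}")
```

The two classes are linearly separable, so even this single layer learns them reliably; in the real chip, the analogous tuning is done electrically on the diodes rather than in software.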