An Inbuilt Watchman

Adversarial and image generation networks battle it out to create fail-proof machine learning

A self-driving car approaches a stop sign, but instead of slowing down it accelerates into the busy intersection. An accident report later reveals that four small rectangles had been stuck to the face of the sign; these fooled the car’s onboard artificial intelligence (AI) into misreading the word ‘stop’ as ‘speed limit 45’. The scenario is hypothetical, but the threat is not: in March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes simply by placing a few inconspicuous stickers on the road.

The technique exploited glitches in the machine learning (ML) algorithms that power Tesla’s lane-detection technology, causing it to behave erratically. In separate experiments elsewhere, researchers have deceived facial-recognition systems by sticking a printed pattern on glasses or hats, and have tricked speech-recognition systems into hearing phantom phrases by embedding patterns of white noise in audio.

These are known as adversarial attacks: manipulative inputs designed to undermine machine-learning performance, cause models to misbehave, or extract protected information. Machine learning is on its way to becoming ubiquitous in everyday applications – from the facial-recognition lock on mobile phones to Alexa’s voice recognition and the spam filters in our email. But the pervasiveness of machine learning – and of its subset, deep learning – has also given rise to adversarial attacks that manipulate the behaviour of algorithms by feeding them carefully crafted input data. The biggest fear is that these vulnerabilities can be weaponized to attack AI-powered systems.
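
To make the idea of “carefully crafted input data” concrete, here is a minimal sketch of one well-known attack, the Fast Gradient Sign Method. It is purely illustrative and not drawn from any of the experiments described above; the `model`, `image`, `label` and `epsilon` names are assumptions for the example. Each pixel is nudged a tiny, nearly invisible amount in whichever direction most increases the classifier’s error.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Perturb `image` so that the classifier `model` is more likely to mislabel it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in a valid range
```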

According to an MIT report, the majority of adversarial research targets image-recognition systems, but deep-learning-based image reconstruction systems can be equally susceptible. Such attacks could be especially perilous in healthcare, where deep-learning-based image technology helps radiographers reconstruct diagnostic images from the raw data produced by CT or MRI scanners. The raw measurements can be meaningfully interpreted only through such algorithms, so a targeted adversarial attack could distort the reconstruction – potentially leading the diagnostic system to report findings that are simply artefacts of a corrupted image. That is dangerous, because decisions based on such reports can be a matter of life and death.
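
As a purely hypothetical illustration of such a targeted attack, the sketch below perturbs raw scan measurements so that a reconstruction network’s output drifts toward an image of the attacker’s choosing. The `reconstructor`, `raw_measurements` and `target_image` names, and all the constants, are assumptions made for the example – not part of any real diagnostic system.

```python
import torch
import torch.nn.functional as F

def targeted_reconstruction_attack(reconstructor, raw_measurements,
                                   target_image, epsilon=0.01, steps=40):
    # Start from a zero perturbation and refine it step by step.
    delta = torch.zeros_like(raw_measurements, requires_grad=True)
    for _ in range(steps):
        output = reconstructor(raw_measurements + delta)
        # Pull the reconstruction toward the attacker's chosen target image.
        loss = F.mse_loss(output, target_image)
        loss.backward()
        with torch.no_grad():
            delta -= epsilon * delta.grad.sign()  # descend the loss w.r.t. the perturbation
            delta.clamp_(-0.05, 0.05)             # keep the perturbation imperceptibly small
        delta.grad.zero_()
    return (raw_measurements + delta).detach()
```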

Bo Li and her colleagues have been spearheading research against adversarial attacks. Li is an Assistant Professor in the Computer Science Department at the University of Illinois at Urbana-Champaign, specialising in machine learning, data security, privacy, and game theory, and was recognised this year as one of “MIT Technology Review Innovators Under 35”. Her research team at the university has put forward a novel approach to safeguarding deep-learning systems. To make them more reliable and secure for safety-critical applications, Li and her team have reworked the training strategy for deep-learning systems: as the MIT report puts it, they pit the neural network responsible for image reconstruction against another neural network responsible for generating adversarial examples, in the style of generative adversarial network (GAN) algorithms.

This means that, over repeated rounds, the adversarial network tries to fool the reconstruction network into producing artefacts that are not part of the original data or the real findings. As a countermeasure, the reconstruction network keeps adapting and updating itself in a continuous bid to avoid being misled by the attacks. It is like having an inbuilt watchman who keeps prodding his master to stay awake and alert.
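
The sketch below shows, in heavily simplified and hypothetical form, what such a GAN-style training loop can look like. It illustrates the general technique rather than the team’s actual code: every network, optimiser and constant here is a placeholder assumed for the example.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(reconstructor, attacker, raw_scan, true_image,
                              opt_rec, opt_att):
    # 1) The attacker network proposes a small perturbation of the raw scan
    #    that makes the reconstruction as wrong as possible.
    delta = attacker(raw_scan)
    attacked = raw_scan + 0.05 * torch.tanh(delta)        # bound the perturbation
    rec_loss = F.mse_loss(reconstructor(attacked), true_image)
    opt_att.zero_grad()
    (-rec_loss).backward()   # the attacker maximises what the reconstructor minimises
    opt_att.step()

    # 2) The reconstruction network adapts so that even the attacked input
    #    is reconstructed close to the ground truth.
    delta = attacker(raw_scan).detach()
    attacked = raw_scan + 0.05 * torch.tanh(delta)
    rec_loss = F.mse_loss(reconstructor(attacked), true_image)
    opt_rec.zero_grad()
    rec_loss.backward()
    opt_rec.step()
    return rec_loss.item()
```

The key point of the design is the alternation: each side’s improvement becomes the other side’s new training signal, so the reconstruction network is continually exposed to the strongest attacks the adversary can currently produce.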

Preliminary results are encouraging: the team reports that adversarial training helped neural networks reconstruct and interpret images across a range of datasets with greater accuracy than networks trained with other, more traditional, safeguards. The team will present its findings at the International Conference on Machine Learning, scheduled to be held this month, and is confident that, with further enhancements, the method could substantially strengthen the security of neural networks.
