Advanced AI technologies that create near-real deepfakes have emerged as the biggest misinformation threat to society – and tech majors are striking back
With less than two months to go before the 2020 US presidential elections, Silicon Valley tech companies are racing to protect the election’s integrity. According to a new report published by University College London (UCL), deepfakes have emerged as the number one threat to society, governments and organisations. The report examined 20 different methods of using AI to carry out crimes and ranked these methods based on various factors. The ranking methodology considered variables like how easy the crime would be to commit, the potential societal harm the crime could do, the amount of money the criminal could make, and how difficult the crime would be to stop. The results predicted that deepfakes posed the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.
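The ranking approach described above can be sketched in a few lines. The factor ratings below are hypothetical, invented purely for illustration – the UCL study's actual scoring data and weighting scheme are not reproduced here.

```python
# Illustrative sketch of ranking AI-enabled crimes by averaging ratings on
# the factors the report considered: ease to commit, societal harm,
# potential profit, and difficulty to stop. All scores are made up.

def rank_threats(threats):
    """Sort threats by their mean factor score, highest risk first."""
    return sorted(threats,
                  key=lambda t: sum(t["scores"]) / len(t["scores"]),
                  reverse=True)

# Hypothetical 1-5 ratings: (ease, harm, profit, difficulty to stop)
threats = [
    {"name": "deepfakes",      "scores": (4, 5, 4, 5)},
    {"name": "crime robots",   "scores": (2, 5, 1, 3)},
    {"name": "spear phishing", "scores": (4, 3, 4, 3)},
]

ranking = rank_threats(threats)
print([t["name"] for t in ranking])  # -> ['deepfakes', 'spear phishing', 'crime robots']
```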
This year, nine tech companies – Facebook, Google, Microsoft, Twitter, Pinterest, Reddit, Verizon, LinkedIn, and Wikimedia – signed a joint statement with government agencies, including the Cybersecurity and Infrastructure Security Agency (CISA), the FBI and the Department of Justice’s National Security Division, to protect the integrity of the national conventions of the two major political parties in the US.
Experts find that in just a few years the quality of deepfake videos has improved dramatically, while the cost of creating them has fallen. Advancements in Generative Adversarial Networks (GANs) are a major reason behind very high-quality deepfakes that are nearly impossible to identify. A GAN is a machine learning configuration in which two distinct ML systems compete to improve at a task. One – the generator – learns to produce some kind of output data, while the other – the discriminator – learns to find flaws in the generator’s output. As each side improves, the other is forced to “raise its game” to compensate, and this is facilitated by the discriminator cueing the generator to the flaws it discovers.
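The adversarial loop can be seen in miniature with a deliberately tiny one-dimensional GAN: a linear generator tries to imitate samples from a target distribution while a logistic discriminator tries to tell real from fake. This is a toy sketch of the dynamic only – real deepfake GANs are deep convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data the generator must imitate: samples from N(4, 1.25).
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(4000):
    z = rng.normal(0.0, 1.0, batch)
    fake, real = a * z + b, real_batch(batch)

    # Discriminator step (gradient ascent): push D(real) -> 1, D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: the discriminator's gradient cues the generator
    # to the flaws it found, i.e. which direction looks "more real".
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(round(b, 2))  # the generator's mean drifts toward the real mean of 4
```

Each side's improvement forces the other to adapt: as the discriminator sharpens its decision boundary, the generator's parameters are pushed until its fakes are statistically hard to separate from the real samples.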
In a parallel development, Microsoft has recently announced the launch of a tool aimed at spotting synthetic media (or deepfakes). The tool, called Video Authenticator, analyses videos and still photos to generate a manipulation score. It provides what Microsoft calls “a percentage chance, or confidence score” that the media has been artificially manipulated. Facebook has also developed a deepfake detector, which performed better than guessing – but only just – on a dataset the researchers had not had prior access to.
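Microsoft has not published Video Authenticator’s internals, but the idea of rolling per-frame manipulation scores up into one video-level percentage can be illustrated with a simple aggregation sketch. The frame scores and the `top_k` weighting below are hypothetical.

```python
# Illustrative only: aggregating a hypothetical detector's per-frame
# manipulation probabilities (0..1) into a video-level percentage.
# Weighting the most suspicious frames reflects that a deepfake needs
# only a few clearly manipulated frames to be fake overall.

def video_confidence(frame_scores, top_k=5):
    """Return a video-level confidence (percent) that the clip is manipulated."""
    worst = sorted(frame_scores, reverse=True)[:top_k]
    return round(100.0 * sum(worst) / len(worst), 1)

# Mostly clean footage with a short manipulated segment.
scores = [0.05] * 40 + [0.92, 0.88, 0.95, 0.90, 0.93]
print(video_confidence(scores))  # -> 91.6
```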
The Microsoft tool works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be “detectable by the human eye.” If a piece of online content looks real but ‘smells’ wrong, chances are it’s a high-tech manipulation trying to pass as real – perhaps with malicious intent to misinform people. The Video Authenticator tool was created using a public dataset from FaceForensics++ and tested on the DeepFake Detection Challenge Dataset, “both leading models for training and testing deepfake detection technologies”, as mentioned in Microsoft’s announcement. The tech giant is partnering with the San Francisco-based AI Foundation to make the tool available to organisations involved in the democratic process this year – including news outlets and political campaigns.
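The blending-boundary idea can be demonstrated on synthetic data: a face swap pastes one image region onto another, leaving a faint seam where greyscale values jump, and scanning for columns of unusually high intensity gradient can reveal it. This toy heuristic stands in for the learned features a real detector like Video Authenticator would use.

```python
import numpy as np

rng = np.random.default_rng(1)

# A "clean" greyscale frame: smooth noise around mid-grey.
frame = 0.5 + 0.02 * rng.standard_normal((64, 64))

# Simulate a blended-in region with a small brightness offset --
# hard to see casually, but present in the pixel statistics.
forged = frame.copy()
forged[20:44, 20:44] += 0.06

def seam_score(img):
    """Maximum mean absolute column-to-column gradient: a crude
    indicator of a vertical blending boundary."""
    grad = np.abs(np.diff(img, axis=1))
    return float(np.mean(grad, axis=0).max())

print(seam_score(forged) > seam_score(frame))  # the seam raises the score
```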
In a cautionary tone, Microsoft’s blog post warns the tech may offer only passing utility in the AI-fuelled disinformation arms race: “The fact that deepfakes are generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology. However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes,” it states.
The tool was developed by Microsoft’s R&D division, Microsoft Research, in coordination with its Responsible AI team and an internal advisory body, the AI, Ethics and Effects in Engineering and Research (AETHER) Committee, as part of the company’s effort to protect democracy from threats posed by disinformation. The company notes that, “in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”
Microsoft is also working on a system that will enable content producers to add digital hashes and certificates to media; these remain in the metadata as the content travels online, providing a tool for authenticity checks. Paired with a reader tool – perhaps in the form of a browser extension – the certificates can be checked and the hashes matched, giving the content consumer what Microsoft calls “a high degree of accuracy”: an assurance that the content hasn’t been altered since it was first created. Additionally, the certificate will also carry details of the content’s producer – a kind of digital watermarking.
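The hash-and-certificate workflow can be sketched with standard-library cryptography: the producer publishes a digest of the content plus a signature over it, and the reader tool recomputes the digest and verifies the signature. The HMAC shared secret below is a stand-in for illustration – a real deployment like Microsoft’s would use public-key certificates, and the producer name here is invented.

```python
import hashlib
import hmac

# Placeholder shared secret; a real system would use PKI certificates,
# not a symmetric key known to both producer and reader.
SECRET = b"producer-signing-key"

def publish(content: bytes, producer: str) -> dict:
    """Producer side: attach provenance metadata (identity, hash, signature)."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SECRET, digest.encode() + producer.encode(),
                   hashlib.sha256).hexdigest()
    return {"producer": producer, "sha256": digest, "signature": tag}

def verify(content: bytes, meta: dict) -> bool:
    """Reader side: confirm the content is unaltered and the signature valid."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != meta["sha256"]:
        return False  # content changed since it was signed
    expected = hmac.new(SECRET, digest.encode() + meta["producer"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta["signature"])

clip = b"original video bytes"
meta = publish(clip, "Example News Network")
print(verify(clip, meta), verify(clip + b" tampered", meta))  # True False
```

Because the hash travels with the content, any alteration along the way – a re-edit, a swapped frame – breaks the match, which is the “high degree of accuracy” assurance the announcement describes.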
While work on technologies to identify deepfakes continues, the blog post also emphasizes the importance of media literacy. It encourages exploring ways to help people “learn about synthetic media, develop critical media literacy skills and gain awareness of the impact of synthetic media on democracy”.