COVID-19 creates deepfake trend in film, fashion and media

It was recently reported that coronavirus lockdowns have created an innovative trend of using deepfake technology to produce realistic video, as it is becoming more difficult, and in some locations dangerous, to carry out actual shoots. COVID-19 has made it hard for companies to film instructional videos, promotional materials, and other similar footage. As a workaround, some companies are looking into deepfake technology to create training videos for their staff. Quite a few companies are using deepfakes to generate modelling photos; in fact, a service was recently launched that puts photographed clothes on realistic-looking models.

Disney is reportedly working on deepfake technology to improve quality on the big screen at megapixel resolution. The technology is intended for cases where “a character needs to be portrayed at a younger age or when an actor is not available or is perhaps even long deceased.” Although the results aren’t perfect, the quality of Disney’s model stands out: where typical deepfake videos look best at 256 x 256 pixels, Disney’s model can produce convincing output at up to 1024 x 1024.

Rapid technological advances and innovative breakthroughs are making it harder to discern real content from fake. In recent years, the emergence of deepfake media has raised concern about the authenticity of published content, since the technology can be used to manipulate people with synthetic or falsified versions of original videos and images. Deepfakes have seen an incredible rise, especially during election periods. With the increasing reach and speed of social media, convincing deepfakes can quickly reach millions of people and have damaging effects on society.

According to deepfake detection firm Deep Trace Labs, the number of deepfake videos identified online doubled in just six months from January 2020. The firm’s report revealed that since July 2019, nearly 95 percent of all the deepfake videos identified targeted the film, fashion, sports and media industries. Deep Trace Labs further found 14,678 deepfake videos online at the time, 96 percent of which were pornographic and featured the faces of famous actresses. More than 90 percent of deepfake YouTube videos featured Western subjects, from actresses and musicians to politicians and corporate figures.

Deepfakes are not only a major concern in Western societies; the trend is expanding rapidly in Asia too. This year, for instance, China saw the first use of deepfakes in a Chinese web TV series. In India, the day before the Delhi election in February, two videos of Bharatiya Janata Party (BJP) Delhi unit president Manoj Tiwari appeared online in which he urged voters to vote for his party. The videos were later reported to be deepfakes.

Among the countries most targeted by deepfakes, the USA ranks first with 50.1 percent, followed by the UK (10.9 percent), South Korea (9.6 percent), India (5 percent), Japan (4 percent) and others (20.4 percent), according to data from Deep Trace Labs.

Deepfakes are typically AI-generated videos or audio of real people doing and saying fictional things. They are often used to create fake news and even to commit cyber fraud. Their impact is most visible in the political sphere, where they aim to influence voters or discredit rival candidates. The proliferation of social media platforms is a key enabler of the spread of deepfakes and fake news. In December last year, social networking giant Facebook removed a network of over 900 fake accounts from its platforms that allegedly used deceptive practices to push right-wing ideology online.

Beyond politics, deepfakes are having a significant impact on the cybersecurity landscape, opening up entirely new attack vectors. The last couple of years saw several cases in which synthetic voice audio and images of non-existent people were used against businesses and governments. Experts and analysts also expect threats to become more challenging in 2020, as a growing number of cybercriminals use AI to scale up their attacks.

However, alongside worries about misuse of the technology, deepfake detection is also improving. Teams at Microsoft Research and Peking University recently proposed Face X-Ray, a tool that recognizes whether a headshot is fake. The tool detects points in a headshot where one image has been blended with another to create a forgery. Though a step in the right direction, the researchers note that their technique cannot detect a wholly synthetic image, which leaves it weak against such adversarial samples.
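
The intuition is that a face swap almost always blends the inserted face into the target photo through a soft mask, and that blending leaves a boundary a detector can learn to expose. The Python snippet below is a minimal, illustrative sketch of that idea only, not the researchers' implementation; the toy rectangular mask, the blur radius and the simple boundary formula are assumptions made for illustration.

    # Minimal sketch (not the Face X-Ray authors' code): a face swap blends a
    # donor face into a target photo through a soft mask M with values in [0, 1].
    # The quantity 4 * M * (1 - M) peaks where M = 0.5, i.e. along the blending
    # boundary, and is zero everywhere for an image that was never blended.
    # The rectangular toy mask and blur radius below are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blending_boundary(mask: np.ndarray) -> np.ndarray:
        """Return the boundary image 4 * M * (1 - M) for a soft mask M."""
        mask = mask.astype(np.float64)
        return 4.0 * mask * (1.0 - mask)

    if __name__ == "__main__":
        # Toy "swapped face" region: a hard rectangle softened with a Gaussian
        # blur to mimic the feathered edge that blending produces.
        hard_mask = np.zeros((256, 256))
        hard_mask[64:192, 80:176] = 1.0
        soft_mask = gaussian_filter(hard_mask, sigma=8)

        print("forged image, boundary peak:", round(blending_boundary(soft_mask).max(), 3))    # ~1.0
        print("pristine image, boundary peak:", blending_boundary(np.zeros((256, 256))).max())  # 0.0

In practice the detector is a neural network trained to predict such a boundary image from the pixels alone, without access to any mask, which is also why an entirely synthetic image, one that involves no blending step at all, falls outside its reach.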

In December 2019, Facebook announced the Deepfake Detection Challenge in conjunction with tech giants such as Microsoft and Amazon. The challenge offered financial rewards for building technology to detect manipulated media. Facebook announced the results in June, after 2,114 participants submitted more than 35,000 detection algorithms. The winning algorithm spotted deepfakes with an average accuracy of 65.18 percent, showing that although detection technology is improving, it remains a challenging problem.

Deepfakes are here to stay. The technology for creating them and for identifying them will both keep improving, but it will always be a race in which one side tries to stay ahead of the other. Like any technology, it will have dual uses.
