Cybercrime is quick to adopt new technologies, and one of the most concerning recent trends is the rise of deepfakes: forged images, audio or video created with the aid of artificial intelligence that appear entirely real, at least to the naked eye, according to a report published by Kaspersky Lab, a multinational cybersecurity and anti-virus provider headquartered in Moscow, Russia and operated by a holding company in the United Kingdom. In one case in the city of Baotou, in the region of Inner Mongolia, a perpetrator used AI-powered face-swapping technology to impersonate a friend of the victim during a video call and obtain a transfer of CNY 4.3 million (US$622,000).
The issue is all the more disturbing of late as AI-generation tools become increasingly widespread and accessible to the general public. At the same time, these technologies grow more sophisticated with each new version, and now allow the creation of strikingly realistic images and extremely convincing audio.
Deepfake threat on the rise
According to a 2022 World Economic Forum report, 66% of cybersecurity professionals experienced deepfake attacks within their organisations. One example of deepfake crime is the creation of fake audio messages from CEOs or other high-ranking company executives, using voice-cloning software to impersonate them. These manipulated audio messages often contain urgent requests for the recipient to transfer money or disclose sensitive information.
Kaspersky found that demand for deepfakes far outweighs the supply: would-be buyers are actively searching for individuals willing to create fake videos. This is disturbing because demand creates supply, so we can expect a significant increase in incidents involving high-quality deepfakes in the near future. Judging by the content of dark-web forum posts, cybercriminals are seeking high-quality results: despite the open availability of deepfake creation tools, they are looking only for creators who can produce videos with perfect sound and no lag between video and audio.
Crypto-scam leads deepfake crimes
A significant proportion of deepfake search ads relates to crypto-scams. Most are connected with cryptocurrency giveaway scams, but some are more peculiar. For example, we came across a post seeking a professional who could create a high-quality deepfake video to bypass Binance’s face-recognition verification system. In other words, cybercriminals are trying to use deepfakes to circumvent biometric security systems and access victims’ accounts to steal money directly.
Banking sector highly vulnerable
The banking sector, according to the WEF report, is particularly concerned about deepfake attacks, with 92% of cyber practitioners worried about their fraudulent misuse. Services such as personal banking and payments are of particular concern, and such worries are not baseless: in 2021, a bank manager was tricked into transferring US$35 million to a fraudulent account. The high cost of deepfakes is also felt across other industries. In the past year, 26% of smaller and 38% of large companies experienced deepfake fraud, resulting in losses of up to US$480,000.
Think like an attacker
To build an effective threat intelligence program, companies, including those with established Security Operations Centres, must think like an attacker, identifying and protecting the most likely targets. Deriving real value from a threat intelligence program requires a very clear understanding of what the key assets are, and what data sets and business processes are critical to accomplishing the organisation’s objectives.
Identifying these ‘crown jewels’ allows companies to establish data collection points around them and map the collected data against externally available threat information. Given the limited resources that information security departments usually have, profiling an entire organisation is a massive undertaking. The solution is to take a risk-based approach, focusing on the most susceptible targets first. Once internal threat intelligence sources are defined and operationalised, the company can start adding external information into its existing workflows.
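The risk-based prioritisation described above can be sketched in a few lines of code. This is a minimal illustration only, using hypothetical asset names and a deliberately simple score (business impact multiplied by external exposure); real threat intelligence programs use far richer risk models.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    # Hypothetical example fields; real asset inventories carry much more detail.
    name: str
    business_impact: int   # 1 (low) .. 5 (critical to business objectives)
    exposure: int          # 1 (internal only) .. 5 (internet-facing)

def prioritise(assets):
    """Rank assets by a naive risk score: impact x exposure, highest first."""
    return sorted(assets, key=lambda a: a.business_impact * a.exposure, reverse=True)

# Hypothetical 'crown jewels' inventory for illustration.
crown_jewels = [
    Asset("payments-api", business_impact=5, exposure=5),
    Asset("hr-database", business_impact=4, exposure=2),
    Asset("marketing-site", business_impact=2, exposure=5),
]

for asset in prioritise(crown_jewels):
    print(asset.name, asset.business_impact * asset.exposure)
# payments-api scores highest (25) and would get collection points first.
```

The point of even a toy score like this is to make the triage order explicit and repeatable, so scarce security resources go to the most susceptible targets first.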
The Digital Trust initiative
A couple of years ago, the World Economic Forum launched the Digital Trust initiative to help solve the digital trust challenge. The key question the initiative asked was: how can leaders make better, more trustworthy decisions regarding technology? WEF convened representatives of the world’s largest tech and consumer-focused companies alongside government representatives and leading consumer advocates to create a framework for companies to commit to earning digital trust.
The Forum’s digital trust framework shows for the first time how leadership commitment to cybersecurity, privacy, transparency, redressability, auditability, fairness, interoperability and safety can improve both citizen and consumer trust in technology and the companies that create and use new technologies. The Forum’s report provides both a framework and a roadmap for how to become more trustworthy in the use and development of technology.
The good news is that the same artificial intelligence technologies that help create deepfakes can also be used to distinguish genuine videos, pictures and audio from fakes, and such tools are slowly emerging on the market. In the near future, media outlets, messengers and perhaps even browsers may be equipped with these technologies.
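To give a flavour of how automated detection can work, here is a purely illustrative toy heuristic, not how commercial detectors operate (those are trained neural networks). It exploits one weakness sometimes attributed to generated video: frame-to-frame change that is implausibly uniform compared with the natural jitter of real camera footage. All data and thresholds below are invented for the example.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two grayscale frames."""
    total = sum(abs(a - b)
                for row_a, row_b in zip(frame_a, frame_b)
                for a, b in zip(row_a, row_b))
    pixels = len(frame_a) * len(frame_a[0])
    return total / pixels

def looks_synthetic(frames, variance_threshold=0.5):
    """Flag a clip whose frame-to-frame change is suspiciously uniform.

    Real camera footage has noisy, uneven inter-frame differences; a clip
    whose differences barely vary is (in this toy model) treated as suspect.
    """
    diffs = [mean_abs_diff(f1, f2) for f1, f2 in zip(frames, frames[1:])]
    mean = sum(diffs) / len(diffs)
    variance = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return variance < variance_threshold

# Toy 4x4 grayscale clips (hypothetical pixel values).
# Every frame brightens by exactly 1: unnaturally smooth motion.
smooth_clip = [[[f for _ in range(4)] for _ in range(4)] for f in range(6)]
# Uneven brightness jumps, as natural footage tends to show.
jittery_clip = [[[v for _ in range(4)] for _ in range(4)]
                for v in (0, 5, 6, 12, 13, 20)]

print(looks_synthetic(smooth_clip))   # True
print(looks_synthetic(jittery_clip))  # False
```

Production detectors learn far subtler artefacts (blending boundaries, lighting inconsistencies, physiological signals) from large labelled datasets, but the principle is the same: statistical regularities of generated media differ measurably from those of genuine recordings.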