Offensive Artificial Intelligence emerges as the latest cyber security threat

As machine-learning applications move into the mainstream, a new era of cyber threat is emerging, one that uses offensive artificial intelligence (AI) to supercharge attack campaigns. With tools like GPT-3 (Generative Pre-trained Transformer), one of the most powerful language models built to date, offensive AI is taking on even more destructive capabilities, and human-led defenses are likely to fail against AI-generated attacks.

Less than a year after the launch of OpenAI’s first commercial product, more than 300 applications are now using GPT-3, and tens of thousands of developers around the globe are building on its platform. Given any text prompt, such as a phrase or a sentence, GPT-3 returns a text completion in natural language.

Developers can “program” GPT-3 by showing it just a few examples, or “prompts.” OpenAI has designed the API to be both simple for anyone to use and flexible enough to make machine-learning teams more productive. The potential to weaponize this kind of technology and turn it into offensive AI presents the world with some of the most treacherous challenges in cyber security.
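
To make the “few examples” idea concrete, here is a minimal sketch of few-shot prompting against the legacy OpenAI completions client. The engine name, the API-key environment variable, and the translation task are illustrative assumptions, not details from the article.

```python
import os
import openai  # legacy OpenAI Python client (pre-1.0 interface)

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

# "Programming" GPT-3 with a few examples: the prompt demonstrates a pattern,
# and the model completes the next line in the same style.
prompt = (
    "English: Hello\nFrench: Bonjour\n"
    "English: Thank you\nFrench: Merci\n"
    "English: Good night\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",  # illustrative engine name
    prompt=prompt,
    max_tokens=5,
    temperature=0,
)
print(response.choices[0].text.strip())  # expected completion: "Bonne nuit"
```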

Let’s imagine an attack that begins with an innocuous email. In this case, the target is a senior manager at one of the largest banks on Wall Street, and the email appears to come from the bank’s chief marketing officer (CMO). Having closely monitored the CMO’s social media interactions, the attacker’s AI program has learned to mimic her style. It analyzed her social media feeds and emails across both professional and social interactions, developing an incredibly precise model of the CMO’s everyday communication. It replicates her language almost perfectly.

Spear-phishing of this kind, often generated using reconnaissance from social media, is labor-intensive and costly for cybercriminals: up to 20 times more so than conventional phishing. The upside for the attacker? Tailoring an attack to a specific victim produces, on average, 40 times the click-through rate of its boilerplate counterpart.

In this hypothetical attack, the target very likely finds no reason to think twice about a seemingly routine email from his CMO: he downloads the attachment, infecting his computer with AI-powered malware. What’s more, that malware can evade even the bank’s most advanced cyber defenses. While those tools are programmed to catch every threat known to the security industry, this attack blends in with normal activity. The malware remains elusive by continually changing its file name and identifiable features, and all the while it quietly scans the network for weaknesses.

After locating a vulnerability, it autonomously finds the most effective way to infiltrate the bank’s sensitive financial database, acting on its own rather than risking “phoning home” to the human criminals for the go-ahead. Finally, the AI siphons off electronic funds to the attackers’ accounts in small but regular increments, logging each transfer in the database under the names of the bank’s legitimate third-party suppliers and successfully avoiding detection.

In a way, this was predictable: as everything becomes driven by algorithms working in the backend, the use of smart tools to automate the attack process was inevitable. If a human attacker must spend a lot of time trying different routes into a target network, adapting after each attempt and deciding what to try next, why not teach that process to a piece of software?

“Offensive AI” will enable cybercriminals to direct attacks against enterprises while flying under the radar of conventional, rules-based detection tools. That’s according to a new survey published by MIT Technology Review Insights and Darktrace, which found that more than half of business leaders believe security strategies based on human-led responses are failing.

The MIT Technology Review Insights and Darktrace report surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they perceive the cyberthreats they’re up against. A majority of respondents (55%) said traditional security solutions can’t anticipate new AI-driven attacks, while 96% said they’re adopting “defensive AI” to remedy this. Here, “defensive AI” refers to self-learning algorithms that understand normal user, device, and system patterns in an organization and detect unusual activity without relying on historical attack data.
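
As a rough illustration of that idea, the sketch below learns a baseline of “normal” device behavior and flags deviations, with no attack signatures involved. It uses scikit-learn’s IsolationForest as a stand-in for a production model; the feature set and numbers are hypothetical, and a real system would model far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device telemetry: [logins/hour, MB transferred, distinct ports]
normal_activity = np.array([
    [3, 120, 4], [2, 95, 3], [4, 140, 5],
    [3, 110, 4], [2, 100, 3], [3, 130, 4],
])

# Learn what "normal" looks like for this device; no threat feed required.
model = IsolationForest(contamination=0.05, random_state=0).fit(normal_activity)

# A sudden burst of logins, data movement, and port activity...
suspicious = np.array([[40, 9000, 60]])
print(model.predict(suspicious))  # [-1] => flagged as anomalous
```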

Sixty-eight percent of the executives surveyed expressed concern about attacks employing AI for impersonation and phishing, while smaller majorities said they’re worried about more effective ransomware (57%), misinformation and the undermining of data integrity (56%), and the disruption of remote workers by targeting home networks (53%). Of the respondents, 43% underlined the damaging potential of deepfakes: media in which a person in an existing image, audio recording, or video is replaced with someone else’s likeness using AI.

As the report’s co-authors write, when offensive AI is thrown into the mix, “fake email” could become nearly indistinguishable from messages sent by trusted contacts. And with employees working remotely during the pandemic, without the security protocols of the office, organizations have seen successful phishing attempts skyrocket. Google registered over 2 million phishing websites from the start of 2020, when the pandemic began, a 19.91% increase over 2019.

Businesses are increasingly placing their faith in defensive AI to combat these growing cyberthreats. Through what is known as autonomous response, defensive AI can interrupt in-progress attacks without affecting day-to-day business. For example, given a strain of ransomware an enterprise hasn’t encountered before, defensive AI can identify its novel and abnormal patterns of behavior and stop it, even if it isn’t associated with publicly known indicators of compromise (e.g., blacklisted command-and-control domains or malware file hashes).
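
A deliberately simplified sketch of that behavioral trigger follows: it watches a host’s file-write rate and isolates the host when activity spikes far beyond its baseline. The monitor class, threshold, and quarantine hook are hypothetical; real autonomous-response products learn each device’s baseline rather than hard-coding it.

```python
import time
from collections import deque

def quarantine(host: str) -> None:
    # Placeholder: a real deployment would push a firewall or NAC rule
    # isolating just this host, leaving the rest of the business untouched.
    print(f"Isolating {host} from the network")

class BehaviorMonitor:
    """Flag ransomware-like bursts of file rewrites, with no signature needed."""

    def __init__(self, host: str, baseline_writes_per_minute: int = 200):
        self.host = host
        self.baseline = baseline_writes_per_minute  # assumed learned elsewhere
        self.events: deque[float] = deque()

    def record_write(self, path: str) -> None:
        now = time.time()
        self.events.append(now)
        # Keep a one-minute sliding window of write timestamps.
        while self.events and now - self.events[0] > 60:
            self.events.popleft()
        if len(self.events) > self.baseline:
            quarantine(self.host)  # surgical response to the abnormal burst
```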

According to the survey, 44% of executives are assessing AI-enabled security systems and 38% are deploying autonomous response technology. This agrees with findings from Statista: in a 2019 analysis, the firm reported that around 80% of executives in the telecommunications industry believe their organization wouldn’t be able to respond to cyberattacks without AI. Reflecting the pace of adoption, MarketsandMarkets projects that the market for AI in cybersecurity will reach $38.2 billion in value by 2026, up from $8.8 billion in 2019, a compound annual growth rate of around 23.3%.
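
Those two market figures are consistent with the quoted growth rate, as a quick calculation shows:

```python
# Compound annual growth rate over the 7 years from 2019 ($8.8B) to 2026 ($38.2B)
cagr = (38.2 / 8.8) ** (1 / 7) - 1
print(f"{cagr:.1%}")  # prints 23.3%, matching the projection above
```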

Just as cybercriminals are using offensive AI to power their attacks, organizations will naturally turn to defensive AI to tackle cyberattacks autonomously. This reduces the workload on human operators and makes it more efficient to deal with large numbers of attacks. With the onset of AI-powered attacks, organizations need to reform their strategies quickly, be prepared to defend their digital assets with AI, and regain the advantage over this new wave of sophisticated attacks. By automating threat detection, investigation, and response, AI augments human IT security teams, stopping threats as soon as they emerge and giving people time to focus on more strategic tasks.
