ChatGPT is dark web’s hottest topic

Malicious use of generative artificial intelligence programs was inevitable, and it seems it has already begun

One of ChatGPT’s most powerful capabilities is its ability to constantly improve its responses. Experts say this can easily be abused to create and continually mutate malware injectors: by repeatedly querying the chatbot and receiving a unique piece of code each time, an attacker can build a polymorphic program that is highly evasive and difficult to detect. The weaponising of generative artificial intelligence programs like ChatGPT was, in a way, inevitable, and it seems it has already started.

Security firm NordVPN, ranked among the best VPNs for ChatGPT, has carried out an analysis showing that the exploitation of ChatGPT is “the dark web’s hottest topic”. As a powerful language model, ChatGPT can generate natural-language text that lends itself to a variety of malicious activities, including cyber scams. One of the main ways ChatGPT is being used in cybercrime is in the creation of malware code and phishing emails.

OpenAI trying to prevent harmful use

ChatGPT does have safeguarding features built into its programming, with the aim of preventing bad actors from using it this way. When asked to write code for a ransomware program, ChatGPT will refuse – claiming “my purpose is to provide information and assist users… not to promote harmful activities.” This has set off a cat-and-mouse game between OpenAI and bad actors, each trying to outwit the other.

Obviously, the use of ChatGPT for malicious purposes is not endorsed by OpenAI, and the company has taken steps to prevent its technology from being used in harmful ways. However, as with any technology, it can be used for good or bad purposes, and it’s important to be vigilant and aware of potential scams when using the internet.

Malware & phishing

Yet, despite OpenAI’s attempts to thwart bad actors, the net is abuzz with discussions about how to misuse generative artificial intelligence (AI). ChatGPT can be used to generate malware code that can be disguised as a legitimate program, tricking users into downloading and installing it on their computers. Once installed, the malware can steal sensitive information, such as login credentials and financial information, or harm the affected computer by deleting files or rendering it unusable.

Another way that ChatGPT is being used in cyber scams is through the creation of phishing emails. Phishing is a type of social engineering attack that involves tricking individuals into revealing sensitive information by posing as a trustworthy entity, such as a bank or a well-known company. With ChatGPT’s ability to generate realistic and convincing emails, scammers can easily trick victims into clicking on malicious links or providing personal information. 

Polymorphic program

According to experts, ChatGPT could easily be used to create polymorphic malware. Polymorphic malware is a type of malware that constantly changes its identifiable features in order to evade detection. Its advanced capabilities let it slip past security products and make mitigation cumbersome, with very little effort or investment by the adversary. By continuously querying the chatbot and receiving a unique piece of code each time, it is possible to create a polymorphic program that is highly evasive and difficult to detect.

Ransomware possibilities

Check Point Research recently revealed how underground marketplaces are buzzing with conversations on how ChatGPT can be used to write malware code. One threat actor used ChatGPT to create malware that hunted common file types, copied them into a random folder, compressed them and then uploaded the files to a hardcoded FTP server. Another threat actor (who supposedly had limited developer skills) created an encryption algorithm that can encrypt all files in a specified directory. Both these examples illustrate how even those with limited technical experience can use advanced AI to build core elements of ransomware-type programs.

Implications for vulnerable code and software

Uncovering hidden vulnerabilities is not easy. Hackers often spend a lot of time and energy going through each line of code, trying to understand what each flag, keyword or section does. With ChatGPT, hackers can now take a snippet of code and ask the chatbot to analyse it and provide a high-level summary of what each module does. Such technology has the potential to significantly empower seasoned attackers and reduce the barrier to entry for other potential adversaries. A smart contract auditing firm recently demonstrated how it was able to find weaknesses in smart contract code using ChatGPT.

Rogue actors: astroturf

Rogue actors, anti-Western governments and political opponents are often engaged in systematic campaigns to “astroturf” – repeatedly spreading false narratives across multiple accounts to manufacture consensus. For example, Twitter bots have been widely criticised for posting repeated content in suspicious patterns. An AI like ChatGPT, designed to mimic humans, can produce an almost infinite supply of content on a range of topics. Triggered by certain keywords or controversial topics, the chatbot can post to an indefinite number of social accounts. Such bots will be indistinguishable from human users, elevating disinformation to a whole new level.

© 2024 Praxis. All rights reserved.