One researcher used ChatGPT to create dangerous data-stealing malware

In context: Since its launch last year, ChatGPT has been making waves among tech enthusiasts with its ability to write articles, poems, movie scripts, and more. The AI tool can even generate functional code, as long as it is given a clear, well-written prompt. While most developers use this capability for entirely harmless purposes, a new report shows it can also be exploited by malicious actors to create malware, despite the safeguards OpenAI has put in place.

A cybersecurity researcher claims to have used ChatGPT to develop a zero-day exploit that can steal data from a compromised device. Alarmingly, the malware even evaded detection by every vendor on VirusTotal.

Forcepoint’s Aaron Mulgrew said he decided at the outset not to write any of the code himself, and to use only advanced techniques commonly employed by sophisticated threat actors such as rogue nation states.

Describing himself as a “newbie” to malware development, Mulgrew said he chose the Go programming language not only for its ease of development, but also because he could manually debug the code if needed. He also used steganography, which hides confidential data inside an ordinary-looking file or message to avoid detection.
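The general idea behind steganography is simple to illustrate. The sketch below is a minimal, self-contained example of the common least-significant-bit (LSB) technique, written in Go since that is the language Mulgrew used; it is not his code, and the carrier file name and message are made up for illustration. It writes each bit of a short message into the low bit of a pixel's red channel, a change far too small to be visible.

```go
package main

import (
	"fmt"
	"image"
	"image/png"
	"log"
	"os"
)

// embed hides msg, bit by bit, in the least significant bit of each
// pixel's red byte. A decoder that knows the message length simply
// reads those low bits back out in the same order.
func embed(img *image.RGBA, msg []byte) error {
	if len(msg)*8 > len(img.Pix)/4 {
		return fmt.Errorf("message too long for carrier image")
	}
	for i, b := range msg {
		for j := 0; j < 8; j++ {
			bit := (b >> uint(7-j)) & 1
			idx := (i*8 + j) * 4 // 4 bytes per pixel (R, G, B, A)
			img.Pix[idx] = (img.Pix[idx] &^ 1) | bit
		}
	}
	return nil
}

func main() {
	// A blank 64x64 carrier for demonstration; in practice the carrier
	// would be an ordinary-looking image so the file draws no attention.
	img := image.NewRGBA(image.Rect(0, 0, 64, 64))
	if err := embed(img, []byte("hidden message")); err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("carrier.png")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := png.Encode(out, img); err != nil {
		log.Fatal(err)
	}
}
```

Because PNG is lossless, the hidden bits survive encoding, and the resulting file still looks like an ordinary image to both the human eye and a cursory scan.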

Mulgrew began by directly asking ChatGPT to develop the malware, but that tripped the chatbot’s guardrails and it flatly refused the task on ethical grounds. He then got creative, asking the AI tool to generate small snippets of helper code and assembling the entire executable manually.

This time he was successful, with ChatGPT producing code that ultimately bypassed detection by every antivirus engine on VirusTotal. Getting there took work, however: obfuscating the code to avoid detection was tricky, as ChatGPT recognizes such requests as unethical and refuses to fulfill them.

Mulgrew managed it only after several attempts. The first time the malware was uploaded to VirusTotal, five vendors flagged it as malicious; after a few tweaks, the code was sufficiently obfuscated that none of the vendors identified it as malware.

Mulgrew said the entire process took “just a couple of hours.” Without the chatbot, he believes, a team of five to ten developers would have needed weeks to create the malicious software and ensure it could evade detection by security applications.

While Mulgrew created the malware purely for research purposes, he said a theoretical zero-day attack using such a tool could target high-net-worth individuals, exfiltrating sensitive documents from a victim's C drive.
