According to reports, security researchers have discovered a way to manipulate the popular AI chatbot ChatGPT into producing malicious code: a few carefully worded questions, delivered in an authoritative tone, are enough to obtain the desired output.
ChatGPT will only generate code when given specific prompts and settings, and its built-in content filtering system is designed to refuse queries about unsafe topics such as code injection. That filter, however, can be easily bypassed.
A group known as CodeBlue29 used ChatGPT to generate sample ransomware, which it then used to test various EDR solutions and determine which product best suited its company. Despite limited programming experience, the group was able to piece together working Python code from ChatGPT's generated fragments.
When the ransomware was tested against multiple EDR solutions, it successfully circumvented one vendor's defenses. CodeBlue29 reported the issue through the vendor's bug bounty program, and the problem was resolved.
Researchers from CyberArk note that “Remarkably, we were able to obtain functional code by instructing ChatGPT to accomplish the same task with various restrictions and insisting it comply.”
The researchers pointed out that the key is not to ask ChatGPT outright for instructions on creating ransomware, but to ask step by step, the way a typical programmer would work: how to navigate through directories, how to encrypt files, how to check progress. This piecemeal approach is an effective way to sidestep ChatGPT's safeguards.
ChatGPT as a Tool for Research and Analysis
Furthermore, experts predict that studying this technique may help ChatGPT mature while also limiting the creation of malicious software. Even if such safeguards are already in place, it is not desirable to live in a world where anyone, including children, can instruct a computer to generate harmful malware.
This tool can also be used by researchers to thwart attacks, and by developers to harden their software. However, creating malware is a more natural task for AI than detecting it.
ChatGPT provides an API that lets third-party applications interact with the AI and obtain responses programmatically rather than through the web interface. Several individuals have used this API to build impressive open-source analysis tools that assist cybersecurity researchers in their work.
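As a minimal sketch of the kind of programmatic interaction described above, the example below sends a single question to OpenAI's public chat-completions endpoint over plain HTTPS. The model name, helper function names, and the `OPENAI_API_KEY` environment variable are illustrative assumptions, not details taken from the article or from any particular tool.

```python
import json
import os
import urllib.request

# OpenAI's chat-completions endpoint (the same API the web chat is built on).
API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(question: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload for a single-question chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }


def ask_chatgpt(question: str, api_key: str) -> str:
    """POST the question to the API and return the assistant's reply text."""
    payload = build_request(question)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Only performs a network call if an API key is configured.
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        print(ask_chatgpt("Summarize what an EDR product does.", key))
```

A real analysis tool would layer retries, rate limiting, and response parsing on top of this, but the request/response shape shown here is the core of any such integration.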
The researchers emphasized that this is not just a theoretical issue but a genuine concern that must be addressed, making continued vigilance essential in this constantly changing field.
Author: Guru