The rise of AI technology has opened the door to nefarious uses, including the spread of malware. Even the traditional scam email has evolved with the help of AI models that don't make the usual grammar and spelling mistakes. A recent advisory report from Europol highlighted this, warning that the persuasive, authentic-sounding text required for phishing attacks and other security threats can now be generated easily, with minimal human effort, and endlessly tweaked and refined for specific audiences.
However, it's important to note that developer OpenAI has built safeguards into ChatGPT. If prompted to "write malware" or "write a phishing email," the AI responds that it is programmed to follow strict ethical guidelines that prohibit it from engaging in malicious activities, including writing or assisting with the creation of malware. ChatGPT won't code malware for you, but it's polite about it.
Even so, it is not difficult to imagine criminals developing their own language models and tools to make their scams more convincing. AI bots can produce natural-sounding text, audio, and video tailored to specific audiences, quickly and on demand. Broadly, there are two types of AI-related security threats to think about. The first involves the popularity of tools like ChatGPT or Midjourney being used as bait to get you to install something you shouldn't; the second is more dangerous: AI-generated text, audio, or video that sounds convincingly real. To avoid falling into these traps, stay up to date with what's happening with AI services, always go to the original source first, and double-check wherever possible using different methods; one such method for downloads is sketched below. Take your time and watch out for red flags.
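As a concrete illustration of that last piece of advice: if you have downloaded an installer for an AI tool, you can compare its SHA-256 checksum against the hash published on the vendor's official site before running it. The following is a minimal Python sketch; the file path and expected hash are placeholders you would replace with your own values.

    import hashlib
    import sys

    def sha256_of_file(path: str) -> str:
        # Hash the file in chunks so large installers don't exhaust memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify_download.py <file> <expected-sha256>
        file_path, expected = sys.argv[1], sys.argv[2].lower()
        actual = sha256_of_file(file_path)
        if actual == expected:
            print("OK: checksum matches the published value.")
        else:
            print("WARNING: checksum mismatch - do not run this file.")
            print(f"  expected: {expected}")
            print(f"  actual:   {actual}")

If the hashes don't match, or the vendor doesn't publish one at all, treat the download with suspicion.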
Writing Malware
Japanese cybersecurity experts have discovered that ChatGPT can be prompted to write malware code by entering a prompt that tricks the AI into thinking it is in developer mode. ChatGPT was launched as a prototype by OpenAI in November 2022, powered by a machine learning model designed to emulate human responses.
However, it was designed not to respond to certain topics, such as sexual or adult content and malicious activities. Since its release, cybercriminals have been studying its responses and attempting to manipulate it for criminal purposes. The full extent of the risks posed by ChatGPT is still unclear.
Takashi Yoshikawa, an analyst at Mitsui Bussan Secure Directions, warned that the ability to create a virus in a matter of minutes, purely through conversation in Japanese, is a significant threat to society. He urged AI developers to prioritize measures to prevent misuse.
Yokosuka, in Kanagawa Prefecture, was reportedly the first local government in Japan to trial ChatGPT. During the experiment, however, ChatGPT was able to write malware code in just a few minutes, and the resulting program successfully attacked an experimental PC.
This discovery raises questions about how cybercriminals might exploit ChatGPT for malicious purposes, and researchers have yet to explore the full extent of what the model can be made to do. Probing those limits is crucial to preventing the tool from being used for criminal activity.
Additionally, it is possible that threat actors were already aware of this capability and have been using it for malicious purposes, though this has yet to be confirmed.