Artificial intelligence (AI) tools like ChatGPT have been a global sensation since early 2023, but they are not always used for positive purposes. Recently, a security researcher found a way to get ChatGPT to produce malicious code during testing.
Aaron Mulgrew, a security researcher at Forcepoint, demonstrated the risk of writing malware with OpenAI's chatbot. Although ChatGPT is designed to refuse requests to create malware, Mulgrew found a loophole: he prompted the AI to write the code piece by piece, one small snippet at a time. When he combined the snippets, Mulgrew realized he had an undetectable data-stealing tool on his hands, comparable in sophistication to today's most advanced malware.
The individual pieces of code generated by ChatGPT, when combined, can become sophisticated malware.
Mulgrew's discovery is a wake-up call about the potential for AI to be used to create dangerous malware without the resources of a hacking group, and without the tool's creator writing a single line of code himself.
Mulgrew's software is disguised as a screen saver application but launches automatically on Windows devices. Once inside the operating system, the malware combs through files on the machine, including Word documents, image files, and PDFs, searching for data to steal.
Once it has what it needs, the program splits the stolen data into fragments and hides them inside image files on the machine. To avoid detection, those images are then uploaded to a folder on Google Drive. What makes the malware especially dangerous is that Mulgrew could tweak and strengthen its evasion capabilities simply by entering new prompts into ChatGPT.
Although this was a private test by a security researcher and no attacks were carried out outside the testing scope, the cybersecurity community still recognizes the danger of such uses of ChatGPT. Mulgrew says he has little programming experience, yet OpenAI's safeguards were still not strong or smart enough to stop his experiment.