GPT-4 Can’t Stop Helping Hackers Make Cyber Criminal Tools

OpenAI released the latest version of its machine learning software, GPT-4, to great fanfare this week. One of the features the company highlighted in the new version was a set of rules intended to protect it from cybercriminal use. Within a matter of days, though, researchers say they tricked it into making malware and helping them craft phishing emails, just as they had done with the previous iteration of OpenAI's software, ChatGPT. On the bright side, they were also able to use the software to patch holes in cyber defenses.

Researchers from cybersecurity firm Check Point showed Forbes how they got around OpenAI's blocks on malware development simply by removing the word “malware” from a request. GPT-4 then helped them create software that collected PDF files and sent them to a remote server. It went further, advising the researchers on how to make the program run on a Windows 10 PC and how to shrink its file size, so it would run more quickly and be less likely to be spotted by security software.

To have GPT-4 help craft phishing emails, the researchers took two approaches. In the first, they used GPT-3.5, which didn’t block requests to craft malicious messages, to write a phishing email impersonating a legitimate bank. They then asked GPT-4, which had initially refused to create an original phishing message, to improve the language. In the second, they asked for advice on creating a phishing awareness campaign for a business and requested a template for a fake phishing email, which the tool duly provided.

“GPT-4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity,” the Check Point researchers noted in their report, handed to Forbes ahead of publication. “What we’re seeing is that GPT-4 can serve both good and bad actors. Good actors can use GPT-4 to craft and stitch code that is useful to society; but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime.”