ChatGPT has raised alarm among cybersecurity researchers for its unnerving ability to compose everything from sophisticated malware to phishing lures – but it’s important to keep in mind that the tool can support cybersecurity defenses as well.
Shiran Grinberg, director of research and cyber operations at Cynet, told eSecurity Planet that too many companies are scared off by ChatGPT rather than encouraging employees to leverage its functionality. “After all, I doubt you’ll find a manager today who won’t encourage his employees to use Google when searching for information, unless you are a citizen of China, Russia, North Korea, or Iran,” he said.
ChatGPT’s Good Security Uses
And ChatGPT’s security benefits, Grinberg said, are significant. “Let’s assume you have a team of analysts and you encourage them to use ChatGPT in order to come to conclusions and look up all kinds of information,” he said. “You can actually put a piece of code into ChatGPT and ask it to identify the malicious part in it – so indeed, it can aid a lot.”
Analysts, responders, and investigators, Grinberg said, can use ChatGPT to assemble a detailed incident response report, aligned with SANS methodology, in minutes. They can then fill the report with analysis of malicious code, scripts and different malware functions, “all done with the help of ChatGPT,” he said.
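The workflow Grinberg describes – pasting a code sample into ChatGPT and asking it to flag the malicious parts – can also be scripted against OpenAI’s API. The sketch below is illustrative, not a Cynet tool: the prompt wording, model choice, and sample snippet are all assumptions, and as noted later in this article, any output would still need expert review.

```python
import os

def build_triage_prompt(code_snippet: str) -> str:
    """Assemble an analyst-style prompt asking the model to flag
    suspicious behavior and summarize it for an incident report.
    The wording here is purely illustrative."""
    return (
        "You are assisting a security analyst. Review the code below, "
        "identify any potentially malicious behavior (persistence, "
        "obfuscation, network exfiltration), and summarize your findings "
        "as bullet points suitable for an incident response report.\n\n"
        f"```\n{code_snippet}\n```"
    )

def triage_with_chatgpt(code_snippet: str) -> str:
    """Send the prompt to the OpenAI Chat Completions API.
    Requires the `openai` package and an OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[{"role": "user",
                   "content": build_triage_prompt(code_snippet)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    # Hypothetical suspicious sample: an outbound connection to a raw IP.
    sample = "import socket\ns = socket.create_connection(('203.0.113.7', 4444))"
    print(triage_with_chatgpt(sample))
```

In practice an analyst would paste the model’s bullet points into a SANS-style report template rather than trusting them verbatim.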
Grinberg said Cynet has already started leveraging ChatGPT to future-proof its defenses. “We are able to take a machine learning model and to turn it into an AI mechanism which basically learns many types of legitimate files versus many malicious files,” he said. “By using huge amounts of data, we can conclude which future files would be malicious versus legitimate.”
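Cynet hasn’t published the details of its model, but the underlying idea – learning a boundary between many legitimate files and many malicious ones – can be illustrated in miniature. The sketch below uses a single byte-entropy feature and synthetic data standing in for real file corpora; a production system would use far richer features and models.

```python
import math
import random
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8).
    Packed or encrypted malware tends to score high; plain text low."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def fit_threshold(legit: list, malicious: list) -> float:
    """Pick the midpoint between the mean entropies of the two classes --
    a stand-in for the far richer models a vendor would actually train."""
    mean = lambda xs: sum(xs) / len(xs)
    lo = mean([byte_entropy(f) for f in legit])
    hi = mean([byte_entropy(f) for f in malicious])
    return (lo + hi) / 2

def is_suspicious(data: bytes, threshold: float) -> bool:
    return byte_entropy(data) > threshold

# Synthetic training data: text-like "legitimate" files versus
# random high-entropy blobs standing in for packed binaries.
rng = random.Random(0)
legit = [("def main():\n    print('hello')\n" * 20).encode() for _ in range(10)]
malicious = [bytes(rng.randrange(256) for _ in range(512)) for _ in range(10)]

threshold = fit_threshold(legit, malicious)
print(is_suspicious(bytes(rng.randrange(256) for _ in range(512)), threshold))
print(is_suspicious(b"just an ordinary configuration file\n" * 10, threshold))
```

The learned threshold lands between the two clusters, so unseen high-entropy blobs are flagged while ordinary text passes – the same classify-future-files-from-past-examples logic Grinberg describes, reduced to one feature.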
However, one security researcher – Chris Anley, chief scientist at NCC Group – cautions that using ChatGPT for security code analysis can result in inaccuracies and requires expert review.
As Anley noted in one code analysis example, “this output is stunning at first sight, but has some problems that require human understanding and careful revision for the output to be trusted.”
ChatGPT’s Security Risks

On the other hand, the tool can help malicious users generate threats that would previously have been beyond their abilities. “The main threat lies in the fact that ChatGPT makes life easier and simpler for threat actors in terms of creating an attack with little previous knowledge of technical capabilities,” Grinberg said.
It’s relatively easy, he said, for threat actors to leverage ChatGPT to develop malicious code. Even though ChatGPT will push back if you ask it to do something overtly illegal, Grinberg said, “you can trick it by playing scenarios and it will give you code that with a few tweaks can be malicious.”
A group on Reddit, for example, has been working on jailbreaks that bypass ChatGPT controls.
To some degree, bringing advanced hacking techniques to the less technical is not a new issue – across the board, the average age of attackers is decreasing as the barriers to entry fall away. “There are all kinds of available services today, like malware-as-a-service, ransomware-as-a-service, initial-access-as-a-service, and now ChatGPT, which allows the non-expert hacker to all of a sudden be able to write malicious code and carry out attacks,” Grinberg said.
ChatGPT Security Tools Coming?
It’s also worth noting, Grinberg said, that new tools are increasingly available to help determine whether content was generated by AI – examples include the OpenAI AI Text Classifier, the Content at Scale AI Detector, and the GPT-2 Output Detector. “My estimation is that we will see additional tools and solutions aimed at the detection of malicious content crafted by AI models,” he said.
Grinberg said the broader lesson is simple: don’t ignore the threats. “Work on empowering your employees, encourage them to interact with this new technology, intertwine it with cyber security awareness training,” he said. “It will be fun, engaging, and memorable.”