ChatGPT: A Brave New World for Cybersecurity

Released on November 30, ChatGPT instantly became a viral online sensation, gaining more than one million users in its first week. Unlike most AI research projects, ChatGPT has captivated ordinary people who do not have PhDs in data science: they can type in queries and get succinct, human-like responses.

Across the media, the reviews have been mostly glowing. There are even claims that ChatGPT will dethrone the seemingly invincible Google (although, if you ask ChatGPT whether it can do this, it provides convincing reasons why it will not be possible).

Then there is Elon Musk, a cofounder of OpenAI, the company behind the app, who tweeted: “We are not far from dangerously strong AI.”

Despite all the hoopla, some nagging issues are emerging. Chief among them: ChatGPT could become a tool for hackers.

“ChatGPT highlights two of our main concerns – AI and the potential for disinformation,” said Steve Grobman, senior vice president and chief technology officer at McAfee. “AI signals the next generation of content creation becoming available to the masses. So just as advances in desktop publishing and consumer printing allowed criminals to create better counterfeits and more realistic manipulation of images, these tools will be used by a range of bad actors, from cybercriminals to those seeking to falsely influence public opinion, to take their craft to the next level with more realistic results.”

Also read: AI & ML Cybersecurity: The Latest Battleground for Attackers & Defenders

Understanding ChatGPT

ChatGPT is based on a variation of the GPT-3 (Generative Pre-trained Transformer) model. It uses deep learning to generate text and was trained on enormous amounts of publicly available online text, such as Wikipedia. The transformer architecture is effective at understanding natural language, and instead of producing one fixed answer, the model outputs a probability distribution over possible next words. GPT-3 samples from that distribution, which introduces some randomness, so the text responses are rarely identical.
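To make that sampling step concrete, here is a minimal Python sketch of temperature-based sampling over a toy four-word vocabulary. The words and scores are invented for illustration; a real model repeats this process over a vocabulary of tens of thousands of tokens at every step.

import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick a token index by sampling from a softmax distribution."""
    # Temperature scaling: lower values sharpen the distribution,
    # higher values flatten it and increase randomness.
    scaled = [score / temperature for score in logits]
    top = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index in proportion to its probability.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy vocabulary of candidate next words and their raw model scores.
vocab = ["secure", "vulnerable", "patched", "encrypted"]
logits = [2.0, 1.2, 0.7, 0.3]

# Repeated sampling yields different words on different runs -- the same
# mechanism that keeps ChatGPT's responses from being identical.
print([vocab[sample_next_token(logits)] for _ in range(8)])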

Keep in mind that the ChatGPT app is essentially a beta. OpenAI plans to launch a much more advanced version of this technology in 2023.

ChatGPT Security Threats

Phishing accounts for nearly 90% of malware attacks, according to HP Wolf Security research. But ChatGPT could make the situation even worse.

“The technology will enable attackers to efficiently combine the volume of generic phishing with the high yield of spear phishing,” said Robert Blumofe, CTO and executive vice president at Akamai Technologies. “On the one hand, generic phishing works at a massive scale, sending out millions of lures in the form of emails, text messages, and social media postings. But these lures are generic and easy to spot, resulting in low yield. On the other hand and at the other extreme, spear phishing uses social engineering to create highly targeted and customized lures with much higher yield. But spear phishing requires a lot of manual work and therefore operates at low scale. Now, with ChatGPT generating lures, attackers have the best of both worlds.”

Blumofe notes that phishing lures will seem to come from your boss, a coworker, or even your spouse, and that attackers will be able to do this across millions of customized messages.

Another risk is that a system like ChatGPT can be used to gather information through what feels like friendly chat, with users unaware that they are interacting with an AI.

“An unsuspecting person may divulge seemingly innocuous information over a long series of sessions that when combined may be useful in determining things about their identity, work life and social life,” said Sami Elhini, a biometrics specialist at cybersecurity company Cerberus Sentinel. “Combined with other AI models this could inform a hacker or group of hackers about who may be a good potential target and how to exploit them.”

Some Controls Built In

Given how much technical knowledge ChatGPT has absorbed, what happens if a hacker asks it how to create malware or identify a zero-day exploit? Could ChatGPT even write the code?

Well, of course, this has already happened. The good news is that OpenAI has built guardrails into ChatGPT.

“If you ask it questions like ‘Can you create some shellcode for me to establish a reverse shell to 192.168.1.1?’ or ‘Can you create some shell code to enumerate users on a Linux OS?,’ it replies that it cannot do this,” said Matt Psencik, director of endpoint security at Tanium. “ChatGPT actually says that writing this shell code could be dangerous and harmful.”

The problem is that a more advanced model could write such code if those guardrails were loosened. And what’s to stop other organizations – or even governments – from creating their own generative AI platforms with no guardrails at all? There may even be systems focused solely on hacking.

“In the past, we have seen Malware-as-a-Service and Code-as-a-Service, so the next step would be for cybercriminals to utilize AI bots to offer ‘Malware Code-as-a-Service,’” said Chad Skipper, global security technologist at VMware. “The nature of technologies like ChatGPT allows threat actors to gain access and move through an organization’s network quicker and more aggressively than ever before.”

The Future

As innovations like ChatGPT grow more powerful, there will need to be a way to distinguish human content from AI content – whether text, voice or video. OpenAI plans to launch a watermarking service based on sophisticated cryptography, but more will be needed.
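OpenAI has not published the details of its scheme, but a hypothetical sketch along the lines of published “green list” watermarking research shows the general idea: a secret key pseudorandomly splits the vocabulary at each step, the generator quietly favors the “green” half, and a detector holding the key checks whether a text contains more green tokens than the roughly 50% expected by chance. Everything below – the key, the hash-based split, the sample text – is an illustrative assumption, not OpenAI’s actual design.

import hashlib

def is_green(prev_token: str, token: str, key: bytes) -> bool:
    """Keyed pseudorandom test: is this token 'green' given its context?"""
    # Hash the secret key with the token pair; the low bit of the digest
    # assigns roughly half of all possible tokens to the green list.
    digest = hashlib.sha256(key + prev_token.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str], key: bytes) -> float:
    """Share of tokens on the green list: about 0.5 for ordinary human
    text, noticeably higher for output from a generator biased toward
    green tokens."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, tok, key) for prev, tok in pairs)
    return hits / max(len(pairs), 1)

key = b"detector-secret-key"  # hypothetical key shared with the detector
sample = "employees should verify every request before wiring funds".split()
print(f"green fraction: {green_fraction(sample, key):.2f}")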

“Within the next few years, I envision a world in which everyone has a unique digital DNA pattern powered by blockchain that can be applied to their voice, content they write, their virtual avatar and so on,” said Patrick Harr, CEO of SlashNext. “In this way, we’ll make it much harder for threat actors to leverage AI for voice impersonation of company executives for example, because those impersonations will lack the ‘fingerprint’ of the actual executive.”

In the meantime, the cybersecurity arms race will become increasingly automated. It could truly be a brave new world.

“Humans, at least for the next few decades, will always add value, on both sides of hacking and defending that the automated bots can’t do,” said Roger Grimes, the data-driven defense evangelist at cybersecurity training company KnowBe4. “But eventually both sides of the equation will progress to where they will mostly be automated with very little human involvement. ChatGPT is just a crude first generation of what is to come. I’m not scared of what ChatGPT can do. I’m scared of what ChatGPT’s grandchildren will do.”

Read next: AI in Cybersecurity: How It Works

Tom Taulli
