How Generative AI Will Remake Cybersecurity

In March, Microsoft announced its Security Copilot service. The software giant built the technology on cutting-edge generative AI, specifically the large language models (LLMs) that power applications like ChatGPT.

In a blog post, Microsoft boasted that the Security Copilot was the “first security product to enable defenders to move at the speed and scale of AI.” It was also trained on the company’s global threat intelligence, which included more than 65 trillion daily signals.

Of course, Microsoft isn’t the only one to leverage generative AI for security. In April, SentinelOne announced its own implementation to allow for “real-time, autonomous response to attacks across the entire enterprise.”

Or consider Palo Alto Networks. CEO Nikesh Arora said on the company’s earnings call that Palo Alto is developing its own LLM, which will launch this year. He noted that the technology will improve detection and prevention, make the company’s products easier to use, and deliver greater efficiency.

Google has its own LLM security system as well, called Sec-PaLM. It leverages the company’s PaLM 2 model, trained on security use cases.

This is likely just the beginning for LLM-based security applications; expect more announcements, and soon.

Also read: ChatGPT Security and Privacy Issues Remain in GPT-4

How LLM Technology Works in Security

The core technology behind LLMs is fairly new. The major breakthrough came in 2017 with the publication of the paper “Attention Is All You Need,” in which Google researchers set forth the transformer model. Unlike traditional deep learning systems – which generally analyze words or tokens in small, sequential bunches – a transformer’s attention mechanism can weigh the relationships among all the tokens in a passage at once, which lets it learn from enormous sets of unstructured data like Wikipedia or Reddit. Each token is represented as a vector across thousands of dimensions, and the model learns to assign probabilities to the tokens that could come next. With that approach, the content generated can seem humanlike and intelligent.
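
To make that concrete, here is a minimal sketch of the scaled dot-product attention operation at the heart of the transformer, written in plain Python with NumPy. The toy dimensions and random inputs are illustrative assumptions, not anything taken from a production model.

```python
import numpy as np

def softmax(x):
    # Stabilized softmax: turn each row of scores into probabilities.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need".

    Q, K, V are (seq_len, d) arrays. Every token's query is compared
    against every token's key, so relationships across the whole
    sequence are captured in a single step.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # pairwise token similarities
    weights = softmax(scores)       # each row sums to 1
    return weights @ V              # context-aware token representations

# Toy self-attention: 4 tokens with 8-dimensional embeddings
# (real LLMs use thousands of dimensions and many attention heads).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(attention(tokens, tokens, tokens).shape)  # (4, 8)
```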

This could certainly be a huge benefit for security products. Let’s face it: they can be complicated to use, requiring extensive training and hands-on tuning. But with an LLM, a user can simply write a natural language prompt.

This can help deal with the global shortage of security professionals. Last year, there were about 3.4 million unfilled cybersecurity positions worldwide.

“Cybersecurity practices must go beyond human intervention,” said Chris Pickard, Executive Vice President at global technology services firm CAI. “When working together, AI and cybersecurity teams can accelerate processes, better analyze data, mitigate breaches, and strengthen an organization’s posture.”

Another benefit of an LLM is that it can analyze and process huge amounts of information. This can mean much faster response times and a sharper focus on the threats that really matter.

“Using the SentinelOne platform, analysts can ask questions using natural language, such as ‘find potential successful phishing attempts involving powershell,’ or ‘find all potential Log4j exploit attempts that are using jndi:ldap across all data sources,’ and get a summary of results in simple jargon-free terms, along with recommended actions they can initiate with one click – like ‘disable all endpoints,’” said Ric Smith, Chief Product and Technology Officer at SentinelOne.
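
SentinelOne hasn’t published the internals of this feature, but the general pattern Smith describes – an LLM translating an analyst’s plain-English question into a structured hunting query that a human then reviews – can be sketched roughly as follows. The call_llm function and the query schema here are hypothetical placeholders, not SentinelOne’s actual API.

```python
import json

SYSTEM_PROMPT = (
    "You translate security analysts' questions into hunting queries. "
    'Respond only with JSON: {"data_source": ..., "query": ..., "actions": [...]}'
)

def call_llm(system: str, user: str) -> str:
    # Hypothetical placeholder for a hosted LLM call; it returns a canned
    # response here so the sketch runs without any particular provider SDK.
    return json.dumps({
        "data_source": "endpoint_telemetry",
        "query": 'event.category:process AND process.name:"powershell.exe"',
        "actions": ["disable_endpoint"],
    })

def natural_language_hunt(question: str) -> dict:
    """Turn a plain-English question into a reviewable hunting plan."""
    plan = json.loads(call_llm(SYSTEM_PROMPT, question))
    # Guardrail: the model only *proposes* queries and actions; the analyst
    # approves them (the "one click" in the quote) before anything runs.
    return plan

print(natural_language_hunt(
    "find potential successful phishing attempts involving powershell"))
```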

Ryan Kovar, the Distinguished Security Strategist and Leader of Splunk’s SURGe, agrees. Here are just some of the use cases he sees with LLMs:

  • You can feed an LLM your inventory of software versions, assets, and CVEs, then ask questions like “Do I have any vulnerable software?”
  • Network defense teams can use LLMs built on open-source threat data to ask iterative questions about threat actors, like “What are the top ten MITRE TTPs that APT29 uses?”
  • Teams may ingest wire data and ask interactive questions like “What anomalous alerts exist in my Suricata logs?” The LLM or generative AI can be smart enough to understand that Suricata alert data is multimodal rather than unimodal – that is, not a Gaussian distribution – and thus needs to be analyzed with the interquartile range (IQR) rather than standard deviation (see the sketch after this list).
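
To illustrate Kovar’s last point, here is a small sketch comparing the two anomaly tests on synthetic, deliberately non-Gaussian alert counts. The data is invented for illustration; the takeaway is only that quartile-based fences make no assumption about the distribution’s shape, while z-scores do.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic hourly alert counts: heavy-tailed and non-Gaussian, as real
# alert volumes tend to be, plus two injected true anomalies.
alerts = np.concatenate([rng.lognormal(mean=2.0, sigma=1.0, size=500),
                         [300.0, 450.0]])

# Gaussian assumption: flag anything more than 3 standard deviations out.
# The heavy tail inflates both the mean and the std, masking real spikes.
z = (alerts - alerts.mean()) / alerts.std()
flagged_z = alerts[np.abs(z) > 3]

# Quartile-based fences (Tukey's rule): no distributional assumption,
# and the quartiles themselves are robust to the extreme values.
q1, q3 = np.percentile(alerts, [25, 75])
upper = q3 + 1.5 * (q3 - q1)
flagged_iqr = alerts[alerts > upper]

print(f"z-score flagged {flagged_z.size} points; IQR flagged {flagged_iqr.size}")
```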

Also read: Cybersecurity Analysts Using ChatGPT for Malicious Code Analysis, Predicting Threats

The Limitations of LLMs

LLMs are not without their issues. They are susceptible to hallucinations, in which a model generates false or misleading content that nonetheless sounds convincing.

This is why it is critical to ground the system in relevant, trusted data. Employees will also need training to write effective prompts. And there still needs to be human validation and review of the output.
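
None of the vendors quoted here have published their guardrail designs, but one common mitigation pattern – grounding the model’s answer in retrieved, vetted data and routing low-confidence output to a human reviewer – might look like the following sketch. Every function in it is a hypothetical stand-in.

```python
def retrieve_context(question: str) -> list[str]:
    """Hypothetical stand-in: fetch passages from a vetted internal
    knowledge base (threat intel, runbooks) to ground the answer."""
    return ["APT29 commonly uses spearphishing links (T1566.002)..."]

def ask_llm(question: str, context: list[str]) -> tuple[str, float]:
    """Hypothetical stand-in for a grounded LLM call: the prompt includes
    the retrieved context, and the model returns an answer plus a score."""
    return ("Based on the provided intel, APT29 favors spearphishing.", 0.62)

def escalate_to_human_analyst(question: str, draft: str) -> str:
    # In practice this would open a review ticket for a human analyst.
    return f"PENDING REVIEW: {draft}"

def answer_with_guardrails(question: str) -> str:
    context = retrieve_context(question)
    answer, confidence = ask_llm(question, context)
    # Guardrail: without grounding data or sufficient confidence,
    # the model's output is treated as a draft, never as a verdict.
    if not context or confidence < 0.8:
        return escalate_to_human_analyst(question, answer)
    return answer

print(answer_with_guardrails("What phishing techniques does APT29 use?"))
```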

Besides hallucinations, there are nagging problems with the security guardrails for the LLMs themselves.

“There are the potential data privacy concerns arising due to the collection and storage of sensitive data by these models,” said Peter Burke, Chief Product Officer at SonicWall. Those concerns have caused companies like JPMorgan, Citi, Wells Fargo and Samsung to ban or limit the use of LLMs.

There are also some major technical challenges limiting LLM use.

“Another factor to consider is the requirement for robust network connectivity, which might pose a challenge for remote or mobile devices,” said Burke. “Besides, there may be compatibility issues with legacy systems that need to be addressed. Additionally, these technologies may require ongoing maintenance to ensure optimal performance and protection against emerging threats.”

Something else: the hype around ChatGPT and other whiz-bang generative AI technologies may lead to overreliance on these systems. “When presented with a tool that has a wide general range of applications, there’s a temptation to let it do everything,” said Olivia Lucca Fraser, a staff research engineer at Tenable. “They say that when you have a hammer, everything starts to look like a nail. When you have a Large Language Model, the danger is that everything starts to look like a prompt.”

Also read: AI in Cybersecurity: How It Works

The Future of AI Security

LLM-based systems are definitely not a silver bullet. But no technology is, as there are always trade-offs. Yet LLMs do have significant potential to make a major difference in the cybersecurity industry. More importantly, the technology is improving at an accelerating pace as generative AI has become a top priority.

“AI has the power to take any entry-level analyst and make them a ‘super analyst,’” said Smith. “It’s a whole new way to reimagine cybersecurity. What it can do is astounding, and we believe it’s the future of cybersecurity.”

See the Hottest Cybersecurity Startups

Tom Taulli
