Cybersecurity is getting a major upgrade, thanks to a new wave of AI that doesn’t just analyze threats. It acts on them.
Termed “agentic AI,” this technology is transforming how businesses defend against cyberattacks. It offers faster responses and smarter automation while also introducing new challenges. In a recent blog post, David Reber Jr., chief security officer at NVIDIA, laid out how its AI security tools are helping companies both defend against and with agentic AI.
“Agentic AI is redefining the cybersecurity landscape — introducing new opportunities that demand rethinking how to secure AI while offering the keys to addressing those challenges,” Reber wrote in the article titled “How Agentic AI Enables the Next Leap in Cybersecurity.”
Helping cybersecurity teams do more, faster
Security operations centers (SOCs) worldwide are facing growing alert volumes and a shortage of skilled cybersecurity professionals.
That’s where agentic AI comes in.
These AI agents can act as smart helpers, handling repetitive tasks and allowing security analysts to focus on the biggest threats. For example, they can assess software vulnerabilities in seconds, pulling data from multiple sources and prioritizing risks.
“They can search external resources, evaluate environments and summarize and prioritize findings so human analysts can take swift, informed action,” Reber wrote.
Big names like Deloitte already use NVIDIA’s AI stack, including tools like NVIDIA Morpheus and NIM, to speed up software patching and vulnerability management. Amazon Web Services has also partnered with NVIDIA to create an open-source system for handling software security on the cloud.
CrowdStrike’s Charlotte AI Detection Triage system uses agent-based AI to cut alert triage times in half. According to NVIDIA, the tool offers “2x faster detection triage with 50% less compute,” improving response times and reducing analysts’ fatigue.
Guarding the guardians
However, letting AI systems act on their own brings new challenges. Agentic AI doesn’t just look at data; it acts on it, sometimes making decisions in real-time with real consequences. Security must be built into the AI to prevent unintended or harmful behavior.
This is where runtime controls and pre-deployment testing come into play. Red teaming is being used to probe AI agents for weaknesses before they go live. Reber said one tool called Garak can test AI agents for vulnerabilities like prompt injection, reasoning errors, or misuse of digital tools.
Runtime “guardrails” are critical for agents already in use. NVIDIA’s NeMo Guardrails lets developers set limits on what AI agents can say or do and update those rules as new risks emerge. Companies like Amdocs, Cerence AI, and Palo Alto Networks already incorporate NeMo Guardrails into their products.
“These runtime protections help safeguard sensitive data and agent actions during execution, ensuring secure and trustworthy operations,” Reber explained.
Building an AI-charged cybersecurity future
NVIDIA’s approach is full-stack. From development tools and runtime protections to hardware-level security and partner integrations, the company is betting big on agentic AI as the foundation for tomorrow’s cyber defenses.
“Every enterprise today must ensure their investments in cybersecurity are incorporating AI to protect the workflows of the future,” Reber emphasized.
As agentic AI systems begin to power everything from digital alerts to physical infrastructure, the stakes are higher than ever. But with tools that think, reason, and act, and with the proper controls in place, cybersecurity may finally be ready to keep up with the speed and complexity of modern threats.