Artificial intelligence is already redefining cybersecurity, exposing sophisticated attacks and adding a level of Terminator-style relentlessness to threat detection tools and anti-malware software. AI is even being used by a startup to scour the dark web for evidence that its customers have been hacked and their sensitive data is being peddled on illicit marketplaces.
But what does the future hold for AI in cybersecurity?
eSecurity Planet asked industry experts how the technology will be used to help enterprise IT and security teams shore up their defenses, thwart attackers and keep a tight lid on valuable data.
Here’s what they had to say.
AI teaches self-defense, cures industry-specific pain points
“The future will see self-healing and self-defending networks, which can leverage AI to take steps to fight and defend the network,” said Tom McAndrew, COO of Coalfire, a provider of cyber risk management and compliance services.
The best use of AI will come in community clouds, where similar challenges and threats are experienced, he added. “Healthcare and financial services are two big areas where AI will bring huge leaps.”
Augmenting security staff with AI
The bad news: there’s a dire cybersecurity skills gap, stretching IT security teams to their limit. The good news: AI is coming to the rescue, said Jacob Sendowski, senior product manager at automated threat management specialist Vectra.
“The task facing security teams, even large teams in well-funded security programs, is herculean. There is a scarcity of qualified security analysts, and the current education and career pipeline has nowhere near the amount of people necessary to meet the current and projected need. Managed service providers will help some, but they are subject to the same talent shortage,” Sendowski said.
Combining human intelligence with AI security tools will help IT organizations cut through the noise and focus their energies on high-value activities.
“AI-enabled security solutions will become integral components of the security team as they can birddog high-threat hosts to a human analyst team,” added Sendowski. “Skilled human analysts will be critical in the incident response process, reviewing evidence and directing an investigation based on the indications that AI tools provide.”
Supercharging the SOC
Distractions mount and attention spans wane as the workday drags on, even for the most disciplined security experts. AI isn’t affected by these human foibles and will therefore help prevent potential security issues from slipping through the cracks.
“We will continue to see artificial intelligence deployed in the security operations center (SOC). Most SOC jobs are checklist-driven, particularly for first- and second-tier analysts who review logs for indicators of compromise (IoCs),” said Kayne McGladrey, an IEEE member and director of information security services at cybersecurity consultancy Integral Partners.
“This is challenging in a retail environment due to the combination of low margins and a tight labor market, as companies struggle to train and retain analysts for this dull but necessary role,” continued McGladrey. It’s a big concern, particularly in light of recently patched point-of-sale vulnerabilities like the one ERPScan researchers found affecting over 300,000 Oracle MICROS terminals.
“The promise of an AI SOC analyst is that it will not get bored and skip a step in a checklist, missing an IoC. Companies can then pivot from the current struggle of train and retain to allow analysts to apply human judgment and experience to current and emerging threats,” McGladrey said.
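At its simplest, the checklist work McGladrey describes boils down to matching logs against known indicators of compromise. The sketch below illustrates that triage step in Python; the indicator values and log lines are hypothetical, and real SOC tooling would draw IoCs from a threat-intel feed rather than a hard-coded set.

```python
# Minimal sketch of checklist-style IoC triage: match every log line
# against a set of known indicators of compromise. All indicator values
# and log lines below are hypothetical examples.
KNOWN_IOCS = {
    "198.51.100.23",                     # example C2 IP (RFC 5737 doc range)
    "evil-updates.example.com",          # example malicious domain
    "44d88612fea8a8f36de82e1278abb02f",  # example file hash
}

def triage(log_lines):
    """Return (line_number, line, matched_ioc) for every hit."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for ioc in KNOWN_IOCS:
            if ioc in line:
                hits.append((n, line, ioc))
    return hits

logs = [
    "GET /index.html from 203.0.113.7",
    "DNS lookup: evil-updates.example.com",
    "File written: report.pdf md5=44d88612fea8a8f36de82e1278abb02f",
]
for n, line, ioc in triage(logs):
    print(f"line {n}: matched IoC {ioc}")
```

An AI analyst never tires of running this loop; the human analyst's value lies in judging what the hits mean.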
Spy vs. AI
Deception technology is emerging as a critical defense strategy for businesses that don’t mind engaging in a little cloak-and-dagger behavior to observe attackers without tipping them off. AI is particularly suited to this type of cyber-espionage.
“These solutions deploy decoy virtual machines simulating the client’s actual computers, but overlay sophisticated analytics,” McGladrey said. “When a third-party attacker is lured into interacting with a decoy, the AI can work backwards to find the initial compromise, and alert a human analyst to make a judgment call for when to end the third-party attacker’s connection.” This will allow threat hunters to gain real-time visibility into the tools and techniques used by their adversaries without risking a larger compromise.
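A toy illustration of the “work backwards” idea McGladrey describes: once any host interacts with a decoy, walk the event timeline back to that host’s earliest recorded activity, a stand-in for the initial compromise. All hostnames, timestamps and events below are hypothetical.

```python
# Trace back from a decoy interaction to a host's earliest recorded
# activity. The audit trail, hostnames and decoy name are invented
# for illustration only.
from datetime import datetime

events = [  # (timestamp, source_host, action) -- a hypothetical audit trail
    (datetime(2018, 5, 1, 9, 2),  "ws-114", "login from external VPN"),
    (datetime(2018, 5, 1, 9, 40), "ws-114", "lateral move to file server"),
    (datetime(2018, 5, 1, 10, 5), "ws-114", "touched decoy-db-02"),
]

def trace_back(events, decoy_name):
    """Earliest event from any host that interacted with the named decoy."""
    attackers = {host for _, host, action in events if decoy_name in action}
    related = [e for e in events if e[1] in attackers]
    return min(related, key=lambda e: e[0]) if related else None

first = trace_back(events, "decoy-db-02")
if first:
    print(f"Initial compromise candidate: {first[1]} -- {first[2]}")
```

Production deception platforms correlate far richer telemetry, but the backward walk from decoy to first contact is the core of the technique.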
Securing the cloud
Cloud computing has reshaped how businesses deploy, deliver and invest in IT services, mostly for the better. One of the downsides of migrating workloads to the cloud is how it can complicate IT security.
“AI is absolutely critical to the security of today’s cloud-based IT environments. AI can be the power behind the automation of security processes that enables security teams to keep up with the velocity and scale of what’s being deployed in the cloud,” said Sanjay Kalra, co-founder and chief product officer at Lacework, a cloud security vendor based in Mountain View, Calif.
“For example, a cloud environment made of thousands of transient containers might generate billions of events per hour. If a breach occurs, then somewhere in these events, there will be anomalies, i.e. abnormal activities deviating from the normal behavior of your cloud that the attack triggered by intruding in your environment,” Kalra added.
Even the most talented security professionals can’t keep up under these conditions.
“No manual process will be able to single out these anomalies,” said Kalra. “AI can automatically detect suspicious behaviors much faster and with much more accuracy than a manual process.”
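The detection Kalra describes can be reduced to a simple statistical sketch: learn a baseline from normal event volume, then flag hours that deviate sharply from it. Real products model far richer behavioral signals; the counts and threshold below are invented for illustration.

```python
# Minimal anomaly detection sketch: flag event counts that sit more
# than `threshold` standard deviations from a learned baseline.
# The hourly counts are hypothetical.
import statistics

baseline = [980, 1010, 995, 1020, 990, 1005]  # normal hourly event counts
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """True if `count` deviates from the baseline mean by > threshold sigmas."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(1002))  # a typical hour
print(is_anomalous(6400))  # a sudden spike
```

At billions of events per hour, no analyst can eyeball this distribution; the win is that the statistical model runs continuously and only surfaces the deviations.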
Beware of ‘unexpected outcomes’
Look before leaping, advocates Steve Durbin, managing director of the Information Security Forum.
He warns that “the use of increasingly mature AI solutions in automated systems will produce outcomes that go beyond the expectations and understanding of IT managers, developers, security pros and system managers.”
Organizations that embrace AI solutions without a firm grasp of their inner workings risk creating a black-box situation. Everything may seem to be working properly, but left unchecked, AI can introduce unknown, and unknowable, problems down the line.
Examples of unexpected outcomes include compromised decision-making due to wrong or incomplete information, and the introduction of vulnerabilities via insecure external networks. AI can also misinterpret commands, a problem that Alexa, Google Assistant, Siri and Cortana users are intimately familiar with.
“To prevent these unexpected outcomes from creating new vulnerabilities, business and security leaders must give full scrutiny and consideration to information security requirements and take steps to ensure the content and accuracy of the data feeds from which AI systems learn, conducting pilots to understand how systems react to inputs before scaling to a full deployment and putting into place contingency plans should AI systems fail,” Durbin advised.
“With so many factors beyond direct business controls, security leaders should prepare to address these threats through considered risk assessments; open and honest negotiations with communications providers; legal counsel to understand the effects of new regulations; and building a sufficiently skilled workforce to oversee the technology,” concluded Durbin.