By practically every measure, cybersecurity threats are growing more numerous and sophisticated each passing day, a state of affairs that doesn’t bode well for an IT industry struggling with a security skills shortage.
In a recent ESG and ISSA survey, 70 percent of cybersecurity professionals said the cybersecurity skills gap had an effect on their organization. The Center for Cyber Safety and Education and (ISC)2 predicted a shortfall of 1.8 million cybersecurity professionals by 2022 after surveying 19,000 security experts.
With less security talent to go around, there’s a growing concern that businesses will lack the expertise to thwart network attacks and prevent data breaches in the years ahead. Fortunately for CISOs, one of today’s hottest technology trends is helping make up for some of the security skills they lack.
Artificial intelligence (AI) is steadily creeping into nearly all facets of IT, including security.
Gartner has predicted that AI will feature in nearly every new software product released by 2020. Major cloud providers, including Amazon Web Services (AWS), Microsoft, Google and IBM, offer developers a growing number of machine-learning services that they can incorporate into their IT solutions.
In 2017, a banner year for cybersecurity funding, startups like ThreatQuotient, Recorded Future and Darktrace raised millions of dollars for security platforms that use AI to strengthen an enterprise’s IT defenses. Even industry veterans like Symantec have jumped on the bandwagon.
As for how the industry is using AI to keep networks, users and their data safe, here are some examples.
AI for threat detection
On average, security analysts review between 10 and 20 critical security incidents each day, according to IBM. A thorough evaluation can take hours, time that attackers can use to gain a stronger foothold on a network.
Meanwhile, there’s a good chance that an organization’s IT personnel spent many of those precious hours focused on false alarms while real dangers linger, awaiting their turn under the microscope.
IBM, whose Watson suite of AI technologies has become the poster child of intelligent IT systems, believes there is a way that machines and humans can work together to find threats faster and more accurately than before.
The company integrated its Watson Discovery Service with its security analytics offering, QRadar Advisor. The result is a system that helps security analysts uncover sophisticated threats and enables businesses to properly prioritize their remediation efforts.
“IBM QRadar Advisor with Watson combines insights from structured information (from X-Force) and insights from unstructured data (from IBM Watson Discovery Service) to collate millions of individually logged IT events including breach reports and best practice guidelines,” blogged George Mina, program director of IBM Watson for Cyber Security.
“Using its industry knowledge corpus of cybersecurity information, threats that are hidden or go unnoticed by manual investigations are easily uncovered, like finding a needle in a haystack, all day, every day,” continued the executive.
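The triage problem the IBM executives describe can be illustrated with a simple sketch: rank incoming alerts by a combined risk score so analysts spend their limited hours on the most dangerous incidents first. The names, weights and scoring formula below are purely illustrative assumptions, not IBM's actual API or method.

```python
# Hypothetical alert-triage sketch: combine indicator severity with
# asset criticality and rank incidents so the riskiest surface first.
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    indicator_severity: float  # 0-1, e.g. drawn from threat-intel feeds
    asset_criticality: float   # 0-1, how important the targeted host is

def triage(incidents):
    """Return incidents sorted by combined risk score, highest first."""
    return sorted(incidents,
                  key=lambda i: i.indicator_severity * i.asset_criticality,
                  reverse=True)

queue = [
    Incident("phishing click", 0.4, 0.9),
    Incident("known C2 beacon", 0.95, 0.8),
    Incident("port scan", 0.3, 0.2),
]
for inc in triage(queue):
    print(inc.name)  # riskiest incident prints first
```

Even a crude ranking like this conveys the payoff: with 10 to 20 critical incidents a day and hours per investigation, the order in which analysts work the queue matters.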
AI that shines a light on the dark web
The nefarious-sounding dark web lives up to its name. Top search engines don’t index it and folks must resort to software tools like Tor to access it.
Inside, the dark web’s illicit marketplaces are a cybercriminal’s paradise. Exploits, malware code and stolen data, much of it personally identifiable information that can facilitate identity theft, are all available for the right price.
Although considered a small sliver of the deep web, it is still large enough to overwhelm manual attempts to draw security intelligence from it. That’s where AI comes in.
Baltimore-based cybersecurity startup Terbium Labs uses machine learning techniques in its dark web data monitoring and threat intelligence system, Matchlight. The automated system scours the dark web for evidence of data leaks that may affect a business and its users, generating incident reports the moment it detects employee or customer data, or other forms of sensitive information that companies don’t want to float around in cyberspace.
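One way to monitor for leaked data at scale is hash-based matching: a company registers one-way fingerprints of the records it wants to watch, and scraped dark-web content is fingerprinted the same way and compared. The sketch below assumes this general technique; it is not Terbium Labs' actual implementation, and the normalization and data shown are illustrative.

```python
# Hypothetical sketch of fingerprint matching for leaked-data detection.
# Raw sensitive values never need to be shared; only hashes are compared.
import hashlib

def fingerprint(record: str) -> str:
    """One-way fingerprint of a record after simple normalization."""
    return hashlib.sha256(record.strip().lower().encode()).hexdigest()

# The company registers fingerprints of data it wants to be alerted on.
watched = {fingerprint(r) for r in ["alice@example.com",
                                    "4111-1111-1111-1111"]}

def scan(scraped_tokens):
    """Flag any scraped token whose fingerprint matches a watched record."""
    return [t for t in scraped_tokens if fingerprint(t) in watched]

hits = scan(["bob@example.com", "alice@example.com", "random text"])
print(hits)  # only the watched address matches
```

A design note: because only hashes leave the company, the monitoring service can search for matches without ever holding the sensitive data itself.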
And there’s a good chance that information will be used for malevolent purposes when it lands in the wrong hands.
In June 2017, Terbium Labs researchers decided to see if the fraud guides that litter the dark web are a waste of time or the real deal. As the term suggests, a fraud guide instructs readers on how to exploit “processes, products, and people for profit,” according to the company.
In its analysis of over 1,000 guides, Terbium Labs found that a whopping 89 percent were actionable, meaning that more often than not they serve as roadmaps to potential criminal activity. Add a dash of stolen personal information, and these guides can bring their buyers one major step closer to a successful scam and some ill-gotten gains.
AI that unravels stealthy malware
It’s inevitable. Employees visit malware-spewing websites on their work PCs, or an overworked worker hastily clicks on a link that was seemingly sent by the boss.
In short order, a company’s systems are hit with ransomware, rootkits and other forms of malware.
Signature-based detection used to provide a formidable defense against infections, but the sheer volume and variety of malware coursing through the internet nowadays—an estimated 250,000 new malware strains pop up each day—makes it tough for signature-based systems to provide comprehensive protection, particularly against zero-day threats.
Comodo is using machine learning to study the behavior and intent of malicious code, even if it appears benign when it is first encountered.
VirusScope, a component of the company’s Advanced Endpoint Protection (AEP) product, employs neural networks and other AI technologies to monitor a system’s running processes, slamming the brakes on activity that signals an imminent attack.
Unrecognized applications are run within a container that prevents them from accessing other processes and successfully infecting an endpoint. VirusScope can identify escape attempts, and if appropriately configured, alert users of suspicious activity across an entire system.
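The shift from signatures to behavior can be boiled down to a toy example: instead of matching a file against known-bad hashes, score a running process by the suspicious actions it performs and block it when the score crosses a threshold. The behavior names, weights and threshold below are invented for illustration and have nothing to do with VirusScope's actual model, which the company says uses neural networks.

```python
# Hypothetical behavior-based scoring sketch: a process is judged by
# what it does at runtime, not by its file signature.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_dir": 0.4,
    "disables_backups": 0.5,
    "mass_file_encryption": 0.9,   # classic ransomware tell
    "spawns_shell": 0.3,
}

def risk_score(observed_behaviors):
    """Sum the weights of observed behaviors, capped at 1.0."""
    return min(1.0, sum(SUSPICIOUS_WEIGHTS.get(b, 0.0)
                        for b in observed_behaviors))

def verdict(behaviors, threshold=0.8):
    """Block the process if its combined risk score crosses the threshold."""
    return "block" if risk_score(behaviors) >= threshold else "allow"

print(verdict(["spawns_shell"]))                              # allow
print(verdict(["disables_backups", "mass_file_encryption"]))  # block
```

The advantage over signatures is that a brand-new malware strain still has to behave maliciously to do harm, so runtime behavior can catch what a hash lookup cannot.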
AI is not a cybersecurity cure-all yet
Although it’s encouraging to think that vigilant, always-on AI sentries can provide 24/7 protection, it’s no reason to throw caution to the wind.
Machine learning still has a long way to go before it can stop every hacker, piece of malware or data breach attempt.
In the thick of the 2017 holiday shopping season, Comodo’s security researchers noticed a disturbing uptick in malware activity. During the week of Dec. 6, they detected 17 million malware files, a 33 percent jump from the prior week (13 million).
Buried in this mountain of malware was evidence that attackers were using unconventional methods to not only bypass traditional antivirus solutions, but also AI-powered ones.
“The limitations of machine-based analysis have also emerged. While machines can detect known malware executables and simple unknown ones, they cannot analyze complex unknown malware files, which numbered almost 75,000 last week,” wrote the researchers in a Dec. 16 advisory. “Complex unknown files require expert human analysis.”
Sometimes the simplest solutions are the best. To keep these and similar threats at bay, the company recommended using URL filters and personal firewalls on endpoint systems, which, despite their comparatively low-tech methods of blocking threats, can still provide effective protection.
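A URL filter of the kind recommended here really is low-tech at its core: check each requested hostname against a blocklist before letting the connection through. The sketch below assumes a simple domain blocklist; the domains are placeholders, and real filters typically add category feeds, reputation scores and regularly updated lists.

```python
# Minimal URL-filter sketch: block listed domains and their subdomains.
from urllib.parse import urlparse

BLOCKLIST = {"malware.example", "phish.example"}  # illustrative entries

def allowed(url: str) -> bool:
    """Return True if the URL's host is not on (or under) the blocklist."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(allowed("https://phish.example/login"))  # False: blocked domain
print(allowed("https://news.example.com/"))    # True: not listed
```

Paired with a personal firewall on the endpoint, even this blunt instrument cuts off many infection chains before any malware, AI-evading or otherwise, reaches the machine.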