Big Data Overwhelms Security Teams

A major contributing factor in many recent data breaches has been the fact that many IT security teams are simply overwhelmed by the volume of data they're handling. During last fall's massive Target breach, for example, the company's intrusion detection software triggered several alerts, but Target's security team wasn't able to respond to them.

"Target's security team neither reacted to the alarms nor allowed the FireEye software to automatically delete the malware in question," the U.S. Senate Committee on Commerce, Science and Transportation noted in a March 2014 report [PDF] on the breach.

Jon Oltsik, senior principal analyst at research firm ESG, said Target's experience is consistent with what he's been seeing in the market, noting that "companies have invested in advanced malware detection but don't know what to prioritize or what actions to take when they receive an alert."

Big Data Overload

According to the results of an ESG survey of 257 enterprise security professionals, 35 percent of respondents said they're challenged by too many false positive alerts, and 39 percent by a lack of adequate staffing. Almost one-third of organizations said they're challenged by the fact that incident detection involves too many manual processes.

In response to such challenges, research firm Gartner recently predicted that one in four large global companies will have adopted Big Data analytics solutions for at least one security or fraud detection use case by 2016, up from just 8 percent today.

"Big Data analytics enables enterprises to combine and correlate external and internal information to see a bigger picture of threats against their enterprises," Gartner vice president Avivah Litan said. "It is applicable in many security and fraud use cases such as detection of advanced threats, insider threats and account takeover."

And it's applicable to a wide variety of companies. ESG reports that 44 percent of organizations say their current level of data collection and analysis could be classified as Big Data, while another 44 percent believe their security data collection and analysis will be classified as Big Data within the next two years.

Forty-seven percent of all large organizations, ESG says, collect at least 6 terabytes of data for security analysis every month.

Data Integration and Correlation

Mike Lloyd, CTO of RedSeal Networks, said that kind of data provides IT security teams with a serious challenge. "I don't meet any security teams these days that say, 'You know, what I lack is data,'" he said. "In fact, we're drowning in data. The problem is turning that data into facts you can use."

Part of the answer to that problem, Lloyd said, lies in integrating the data you collect. Put network data and endpoint data together, for example, and you'll find correlations you wouldn't have noticed before. "If you can integrate the data and combine it, then you can start to uncover problems that you can't see by only working in one silo," he said.
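The kind of cross-silo correlation Lloyd describes can be sketched in a few lines. The data shapes, field names, and thresholds below are illustrative assumptions, not from any specific product: the point is simply that a large outbound transfer and an unrecognized process are each unremarkable alone, but suspicious together.

```python
# Hypothetical sketch: joining network flow records with endpoint
# process logs on host IP to surface problems neither silo shows alone.

netflows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.9", "bytes": 48_000_000},
    {"src_ip": "10.0.0.7", "dst_ip": "198.51.100.2", "bytes": 1_200},
]

endpoint_events = [
    {"host_ip": "10.0.0.5", "process": "unknown_dropper.exe"},
    {"host_ip": "10.0.0.7", "process": "chrome.exe"},
]

KNOWN_GOOD = {"chrome.exe", "svchost.exe"}  # illustrative allowlist

def correlate(flows, events, byte_threshold=10_000_000):
    """Flag flows where a large outbound transfer comes from a host
    that also ran an unrecognized process."""
    suspicious_hosts = {e["host_ip"] for e in events
                        if e["process"] not in KNOWN_GOOD}
    return [f for f in flows
            if f["src_ip"] in suspicious_hosts
            and f["bytes"] > byte_threshold]

alerts = correlate(netflows, endpoint_events)
# Only the 10.0.0.5 flow is flagged: big transfer plus unknown process.
```

In a real deployment each list would be a feed from a flow collector and an EDR agent, but the join logic is the same idea.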

Brian Girardi, senior director of product architecture at RSA, said initiatives designed to share threat intelligence, such as R-CISC for retail and FS-ISAC for financial services, can be key to maximizing the results companies are able to get from that kind of integration and analysis. "The people that collaborate within those groups get a lot of value," he said.

And Girardi said such collaboration will become much more efficient over time. "I would expect over the next couple of years that some automation will come into the sharing environment," he said.

That'll be just one step of many toward greater automation in enterprise security.

Next Step in Automation

Automation has been a key to IT security for a long time, said RedSeal's Lloyd, starting with automated vulnerability scanning. "That was automating to find -- and it's great. It works really, really well, except it makes mountains and mountains and mountains of data."

And so the next logical step, Lloyd said, is to automate the handling of that data.

Think of a World War II war room, Lloyd said, with a map table in the middle of the room and people on telephones around the edges pulling in information. "That's the vision for the current stage of automation in the security industry. We're trying to integrate to build a war room, to be able to handle all of the data feeds coming from the outside."

The World War II war room was designed to process the information coming in by phone and integrate it into a usable map in the middle of the room. In the modern parallel, though, the bad guys are winning. "They're finding it far too easy to break in, because we're not doing well at building these war rooms," Lloyd said. "We don't understand our own defenses."

The answer, Lloyd said, lies in automating the processes of prioritization and integration. "What they didn't do in World War II was have all these phone calls come in and then just repeat all the information to the decision maker who was trying to plan strategy," he said. "You condense it and you integrate it into the map first."

Keeping Humans in the Security Picture

The next step, ESG's Oltsik said, will be to automate the response to threats. "In other words, we need to turn threat intelligence and analytics into automated actions to remediate attacks and fine-tune security controls," he said. "In this model, new threat intelligence may automatically generate a firewall rule, an IDS/IPS signature or new analytic."
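The indicator-to-rule step Oltsik describes can be sketched minimally. The feed format and the iptables-style output below are illustrative assumptions; a production pipeline would consume a structured feed such as STIX and push rules through a firewall management API rather than printing strings.

```python
# Hypothetical sketch: turning threat-intelligence indicators into
# firewall block rules automatically. Feed format and confidence
# threshold are illustrative assumptions.

def rules_from_indicators(indicators, min_confidence=80):
    """Emit one iptables-style DROP rule per high-confidence malicious IP."""
    rules = []
    for ioc in indicators:
        if ioc["type"] == "ip" and ioc["confidence"] >= min_confidence:
            rules.append(f"-A INPUT -s {ioc['value']} -j DROP")
    return rules

feed = [
    {"type": "ip", "value": "203.0.113.9", "confidence": 95},
    {"type": "ip", "value": "198.51.100.2", "confidence": 40},   # too low
    {"type": "domain", "value": "bad.example", "confidence": 99},  # not an IP rule
]

for rule in rules_from_indicators(feed):
    print(rule)
```

The confidence gate matters: auto-deploying rules from low-confidence indicators is exactly how automation creates its own false-positive flood.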

Still, RedSeal's Lloyd suggests it's probably best not to rush in that direction too quickly. "There are people who take either side of that debate, but I'm in the 'we don't want to build Skynet' camp," he said. "I really don't think we're ready for our IT infrastructure to be completely operated automatically."

Even when we get to the point of automating the process of responding to threats, Lloyd said, we'll never take humans out of the loop completely. "Eventually what you do is you close 99 percent of the loop, and you keep a human in just to supervise what's going on," he said.
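The "close 99 percent of the loop" model Lloyd describes amounts to a triage gate: routine remediations execute automatically, while high-impact actions queue for a human. The action names and the two-tier split below are illustrative assumptions.

```python
# Hypothetical sketch of partial automation with a human in the loop:
# low-risk actions run automatically, high-impact ones await approval.

AUTO_APPROVED = {"quarantine_file", "block_ip"}       # routine, reversible
NEEDS_HUMAN = {"isolate_server", "disable_account"}   # high operational impact

def triage(actions):
    executed, pending_review = [], []
    for action in actions:
        if action in AUTO_APPROVED:
            executed.append(action)        # the automated 99 percent
        else:
            pending_review.append(action)  # the human-supervised remainder
    return executed, pending_review

done, review = triage(["block_ip", "isolate_server", "quarantine_file"])
# done   -> ["block_ip", "quarantine_file"]
# review -> ["isolate_server"]
```

Which actions land in which tier is a policy decision, not a technical one, which is precisely why the human stays in the picture.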

RSA's Girardi agrees, noting that while automation will become increasingly important over time, machines won't ever do it all. "There will still be a human element, but the goal is to have the human leveraging the technology, adjusting the models, adjusting the controls, as part of that picture," he said.

First Steps in Managing the Data Flood

As you work to manage the flood of data facing your security team, Lloyd suggests using the war room analogy to get a handle on what you need to do.

"Think about how you're going to take in those feeds from the outside and how you can process them, but don't assume that's enough," he said. "Look at how you can integrate your data feeds together, combining problems to solve them."

ESG's Oltsik said the first step in doing so should be to document your internal requirements, goals and metrics to determine what kinds of solutions you'll need. He also suggests starting by experimenting with open source tools like PacketPig. "The fundamental guideline here is 'start small and grow,'" he said.

In a recent blog post, Oltsik recommended judging Big Data security analytics tools on their ability to do the following:

  • Accelerate incident detection
  • Improve staff efficiency
  • Reduce false positives
  • Automate manual processes
  • Supersede point tools

And really consider what your workforce is able to handle. "The skills shortage is a big factor here," Oltsik said. "Some organizations don't have the right staff or skills for Big Data security analytics. These firms will likely look for SaaS vendors like Arbor Networks, SumoLogic or Dell SecureWorks."

Whatever you do, Oltsik said, it's crucial to understand that the way we've handled security in the past is no longer working.

"We need to make more intelligent decisions about risk management and accelerate our incident detection and response capabilities," he said. "The only way we can do this is to collect, process, analyze and act upon more security data and threat intelligence in a much more efficient and effective way."

Jeff Goldman is a freelance journalist based in Los Angeles. He can be reached at jeff@jeffgoldman.com.