Akin to placing sentries on your network battlements, threat correlation solutions monitor sensor data and then identify and escalate important threats from across your global network. Threat correlation's aims are:
The benefit of threat correlation is that your security response teams are always focused on the topmost priority, making them much more efficient while simultaneously reducing potential risk and corporate liability. Beyond these internal drivers, other forces escalating demand for threat correlation include new federally mandated security regulations for the financial services and health care industries. Given that focusing on detecting and containing blended, multi-point threats is clearly a good thing, let's review what defines an effective correlation solution and the attributes it displays.
Threat Information Quality
Any information delivered by a correlation technology should be high quality. Put another way, the alerts the correlation solution sends should be timely and relevant.
The ideal correlation solution is one that is real-time or as close to that ideal as possible. After all, the sooner you find out about an active incursion the sooner it can be dealt with, and at lower costs. Delivering correlation via computational modeling (as opposed to post-mortem data mining) would therefore seem to be the most effective approach.
Relevance is the ability to deliver the appropriate information to the right people at the right time. For example, if the firewall can't be reached because of a network failure, the NOC should get the alert instead of the security team. Similarly, the solution needs to filter out irrelevant noise in order not to swamp operators with false positives. Most importantly, the solution must identify and alert on false negatives -- i.e. detect high-risk threats that might be missed by a manual log survey or a tool that simply looks at a single device. These false negatives -- the compromises that are missed -- represent the enterprise's Achilles heel. An unmanaged risk very quickly (and expensively) turns into an unmanaged liability.
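As a rough illustration of relevance-based routing, the sketch below sends infrastructure failures to the NOC and suppresses low-confidence noise. The team names, thresholds and event fields are invented for this example, not taken from any product:

```python
# Hypothetical relevance-based alert routing; the categories, severity
# scale (1-10) and confidence threshold are illustrative assumptions.

def route_alert(alert: dict) -> str:
    """Return the team that should receive this alert."""
    # Infrastructure failures go to the NOC, not the security team.
    if alert["category"] == "device_unreachable":
        return "noc"
    # Low-severity, low-confidence events are suppressed as noise.
    if alert["severity"] < 3 and alert["confidence"] < 0.5:
        return "suppressed"
    # Everything else is a candidate security incident.
    return "security"
```

In a real solution these rules would be configurable policy rather than hard-coded branches, but the principle is the same: route by relevance before anyone sees the alert.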
Comparing Correlation Architectures
Correlation implementations need to deliver the components identified by GartnerGroup:
Most solutions on the market today have superficially similar architectures to meet these design goals, as follows:
A closer look, however, reveals considerable philosophical differences in the way these solutions deliver end results. The solution that is right for you depends to a significant degree on how well your organizational needs match the vendor's approach.
Since most organizations have policy and event management consoles in place, let's focus on the first three items. They describe how the threats are identified and the alerts generated, and represent the largest areas of implementation divergence.
Some solutions simply pull the sensor log files into the corporate network (possibly compressing them to reduce bandwidth demands) and post them to a central repository. Others perform the collection and initial analysis on the device itself, distributing the collection function out to the network edge. The first is heavier on network bandwidth and represents a centralizing approach, while per-sensor agents enable distributed solutions at the cost of some resources on the monitored application.
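The two collection styles can be contrasted in a minimal sketch. The function names, log format and the DENY keyword filter are assumptions made purely for illustration:

```python
import json
import zlib

def ship_raw(log_lines):
    """Centralized style: compress the full log and ship it to the repository."""
    payload = "\n".join(log_lines).encode()
    return zlib.compress(payload)

def edge_summarize(log_lines, keyword="DENY"):
    """Distributed style: the agent filters locally and ships only relevant events."""
    relevant = [line for line in log_lines if keyword in line]
    return json.dumps(relevant).encode()
```

For a busy sensor, the edge summary is typically far smaller on the wire, while the centralized payload preserves every event for later mining, which is exactly the trade-off described above.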
Also known as normalization and aggregation, this phase filters out irrelevant data, focusing on the important threat-related data using rules typically defined by the product and its users. It's here that many of the false positives are eliminated.
Some approaches dispense with filtering altogether, arguing that the only way to ensure correlation is to have all data to hand all the time. This clearly has an impact on storage requirements and reporting workloads, and forces the correlation engine to potentially process data repeatedly that may never have any relevance. Other solutions filter more aggressively and aggregate upfront, ensuring that the correlation technology only has to deal with real, active threats.
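A minimal sketch of the upfront filter-and-aggregate step described above, assuming an invented event shape and severity threshold:

```python
from collections import Counter

def filter_and_aggregate(events, min_severity=4):
    """Drop low-severity noise, then collapse duplicate events into counts.

    The event fields (sensor, signature, severity) and the threshold
    are illustrative assumptions, not a real product's schema.
    """
    relevant = [e for e in events if e["severity"] >= min_severity]
    counts = Counter((e["sensor"], e["signature"]) for e in relevant)
    return [
        {"sensor": sensor, "signature": sig, "count": n}
        for (sensor, sig), n in counts.items()
    ]
```

The correlation engine downstream then sees one record saying "this signature fired N times on this sensor" rather than N raw log lines.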
Normalization then takes the individual data streams and ensures that they are presented to the correlation technology in a standard form. This makes it significantly easier to compare data from disparate data sources and multi-vendor security solutions. Example normalized fields are shown in the table below. Having these fields available enables the correlation solution to filter, group or correlate security events.
Again, there are fundamental differences here between vendors. Some focus on data mining and therefore normalize into a centralized database, while others focus on computational normalization in memory for real-time performance.
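To make normalization concrete, the sketch below maps two invented vendor formats onto a common set of fields. The field names, record layouts and severity scales are assumptions for illustration, not any vendor's actual schema:

```python
# Common schema the correlation engine works against (illustrative).
NORMALIZED_FIELDS = ("timestamp", "src_ip", "dst_ip", "event_type", "severity")

def normalize_vendor_a(raw):
    """Hypothetical vendor A reports events as flat key/value records."""
    return {
        "timestamp": raw["time"],
        "src_ip": raw["src"],
        "dst_ip": raw["dst"],
        "event_type": raw["action"],
        "severity": int(raw["sev"]),
    }

def normalize_vendor_b(raw):
    """Hypothetical vendor B nests addresses and uses a textual severity scale."""
    sev_map = {"low": 2, "medium": 5, "high": 8}
    return {
        "timestamp": raw["ts"],
        "src_ip": raw["conn"]["source"],
        "dst_ip": raw["conn"]["dest"],
        "event_type": raw["kind"],
        "severity": sev_map[raw["level"]],
    }
```

Once both feeds arrive in the same shape, a single rule can filter, group or correlate across them without caring which vendor produced the event.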
The ultimate goal is to pull data streams from multiple security and application platforms, correlate the data and provide timely, relevant and accurate data for threat response teams.
For solutions that rely on a centralized database, the answer is conceptually simple: run the appropriate queries on the database and the answer pops out. In practice, however, data mining approaches present scalability and performance problems at the data volumes we're discussing. Consider the network and systems management world, where data volumes are similar and efficient real-time response is vital -- there's a reason why there aren't any database mining solutions managing network fault storms.
Threat correlation also means more than downstream alarm suppression or the ability to populate a few forms to describe these types of relationships; it requires the freedom to associate various events with other events across a period of time. Some common correlation routines are:
Frequently, event data from multiple sources and nodes is necessary to identify a problem. The correlation engine needs to be able to process data regardless of its origin.
The current course of action may be influenced by past events. For example, a single port scan by a particular source or network may not be interesting, but comparing that event to short- and long-term histories may unveil a pattern of behavior that requires immediate action.
For example, short bursts of high load network traffic may be normal, but sustained bursts could indicate a denial of service attack is underway. The ability to link event persistence with periods of time is a critical need of a correlation engine.
As part of correlation, various conditions may require interactions with other systems to complete the process. For example, asset database, customer databases, network device or other agent data may be required. The best correlation solutions go beyond simple security data at run time in order to help diagnose, distinguish and deliver meaningful high priority alerts.
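The routines above can be combined in a small sketch: a sliding time window detects a source probing many distinct ports, and the resulting alert is enriched from a hypothetical asset database. The window size, threshold and data shapes are all illustrative assumptions:

```python
from collections import defaultdict

WINDOW_SECONDS = 60   # assumed sliding window
PORT_THRESHOLD = 10   # assumed distinct-port trigger

def correlate_port_scans(events, asset_db):
    """events: (timestamp, src_ip, dst_port) tuples, sorted by timestamp."""
    seen = defaultdict(list)  # src_ip -> [(ts, port), ...] within the window
    alerts = []
    for ts, src, port in events:
        # Keep only this source's probes that fall inside the time window.
        window = [(t, p) for t, p in seen[src] if ts - t <= WINDOW_SECONDS]
        window.append((ts, port))
        seen[src] = window
        if len({p for _, p in window}) >= PORT_THRESHOLD:
            alerts.append({
                "src_ip": src,
                "ports": sorted({p for _, p in window}),
                # Enrichment: consult an external (hypothetical) asset database.
                "owner": asset_db.get(src, "unknown"),
            })
            seen[src] = []  # reset this source after escalating
    return alerts
```

Note how a single probe is never interesting on its own; only the pattern across a period of time, plus the external asset lookup, turns raw events into one meaningful, high-priority alert.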
Finally, it's important to cut through the vendor-speak and differentiate aggregation -- the consolidation of single events on a single sensor to provide basic escalation -- from true correlation, the ability to analyze, compare and match escalated sensor events from multiple sensors in multiple timeframes. Aggregation is an essential prerequisite for effective, cross-platform, real-time correlation.
Hackers are Real-Time -- Are You?
Being able to contain threats is a vital part of the security team's mission. Since they can't contain what they can't see, a threat correlation solution is a vital part of the organization's ability to correctly and quickly prioritize its efforts. Don't let senior management kid themselves that simply deploying the next-best sensor delivers security. Fast, real-time response using the correlation architecture that's appropriate for your environment is the only way to contain modern threats.
Phil Hollows is vice president of product marketing for OpenService, Inc., a provider of network security management software.