Exploring the state of enterprise security management


Investments in enterprise security management (ESM) tools have been on the rise in the last few years due to an increasing awareness by organizations that they need to better leverage the investments they've made in a variety of information security point products. How are those products working? How do you know if your company is being attacked? How can volumes of log files be monitored in real time so that an attack can be met with a proactive response?

Information security consultant Joe Judge explores the business and technological state of enterprise security management in a discussion with Reed Harrison, an information security veteran of 17 years. Reed served as director of information security for more than eight years at Melbourne, Fla.-based Harris Corp., where he built and deployed global security processes and standards, and established enterprise-level security controls. He is co-founder and chief technology officer at Rockledge, Fla.-based e-Security, Inc., which makes a centralized security console.

Q. In 1999, you co-founded e-Security, Inc., and started preaching about the need for a more comprehensive and holistic view toward enterprise security. Two years later, enterprise security management (ESM) is seen as a more vibrant space - necessary and critical to an organization - by information security officers, analysts and the press. Does this surprise you?

A. No. In the last seven to eight years, with the advent of client/server computing and LAN connectivity and the explosion of home offices within employee bases, the number of control points that security managers must deploy and manage has grown exponentially. The amount of information created by all of these sensors (firewalls, intrusion detection systems, anti-virus and VPNs [virtual private networks], for example) and application-level controls has created information overload for security organizations.

Individual firewalls in large enterprises can generate gigabytes of log records per day. Combine that with intrusion detection products that constantly detect valid and invalid intrusive activity; add each operating system logging every user access, plus application violations across the enterprise, and you have the problem of large volumes of security data spread across a very distributed environment. It rapidly becomes obvious that security officers need management tools to make sense of it all.

Q. So, how are security officers really monitoring their heterogeneous systems today?

A. Security vendors are working hard to create products that manage the output of their own security solutions. So whenever available, security officers use monitoring consoles provided by different vendors. When monitoring consoles are not available for certain systems, advanced security organizations create their own security tools to parse and integrate security event data. Most often that turns into many consoles for many different products.

There are several limitations to this approach:

  • The different vendor consoles do not interoperate, resulting in many consoles that require human resources to monitor.
  • While limited integration is achieved, there is still no centralized and unified view of the entire security infrastructure. Enterprise-wide reporting is still not possible.
  • For homegrown solutions, another limitation is the fact that the security organization is spending time developing scripts and databases instead of promoting security within the organization. Moving forward, the maintenance and expansion of these solutions can prove to be very expensive.
  • This also is true for many managed security service providers (MSSPs), which spend heavily on resources to develop their own solutions while their primary value should reside in providing value-added security services to their external customers.
  • And finally, the expertise necessary to support this kind of infrastructure is limited and expensive.

Q. What kind of vendor-supported technologies have emerged in the ESM space to address these interoperability problems?

A. A series of log management and forensics tool vendors have emerged to address the problem of consolidating all the logs from various devices into one security repository for storage, analysis and reporting. While this approach is desirable for "after the fact" forensics and analysis purposes, the acquisition and storage of this huge amount of data, the impact on network performance and the cost of the server and storage devices make it a prohibitively expensive model to scale.

In addition, any alerting can only be prompted after analysis of the entire database, leaving a dangerous time gap between the moment an event actually happens and the time it is reported; for example, a 3 a.m. page on an incident that happened at 5 p.m. the day before. From an operational point of view, such an approach is unreasonable for most organizations that do not have a requirement for forensics analysis.

While this seemed to be a logical expansion for the management of certain security solutions, what most security organizations need today is an immediate indication of relevant security activities as detected by the security products already deployed. The real measure for security today is the time it takes to detect an incident plus the time it takes to respond. With the advent of the Internet, the time it takes to attack an organization and the corresponding window in which to respond have shrunk dramatically. Therefore, operational efficiency is key to security officers. As a consequence, a new set of real-time security monitoring solutions has emerged that integrates heterogeneous security events in real time into a central monitoring console. That's why we developed e-Security software.

Q. Can you expand a bit more on both relevancy and immediacy in managing today's distributed information security environments?

A. "Relevant" means, for example, looking only at the security logs, not the entire system logs. On a server, only one out of three logged events may be security related. Relevant also means that heterogeneous events are being measured equally, so that on-the-fly correlation of security activities can be performed. This can only be achieved through data normalization, by decomposing security messages into unique data points like source IP and destination IP and the association of these data points to a pattern of activity.

The benefit of "immediacy" is the timely notification of potential security exposure with sufficient information to prompt an efficient response. "Immediate" implies continuous on-line auditing of security devices, supported by automated data normalization and correlation. Immediacy is a significant differentiator when compared with log management and forensics tools.

As a security officer I need a timely indication of security breaches anywhere in the enterprise before doing any type of forensic analysis. This also prioritizes my investment. It is less costly for my infrastructure to send in relevant alerts from security devices to my console than it is to transfer all the records from everywhere. One of our customers told us he didn't want to move haystacks from one location to another only to end up with two or more haystacks. What he wanted was to extract the needles from the haystacks and leave the haystacks in the field.
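The needles-and-haystacks idea amounts to filtering at the sensor, so that only relevant records cross the network to the console. A minimal sketch, assuming a made-up record format and an arbitrary set of "relevant" event types:

```python
# Hypothetical filter run at the sensor: forward only the "needles"
# (security-relevant events); the haystack stays in the field.
RELEVANT = {"auth-failure", "policy-violation", "ids-alert"}

def needles(log_records):
    """Yield only records worth sending to the central console."""
    for record in log_records:
        if record["type"] in RELEVANT:
            yield record

haystack = [
    {"type": "dns-lookup"},     # routine, stays local
    {"type": "auth-failure"},   # forwarded
    {"type": "http-get"},       # routine, stays local
    {"type": "ids-alert"},      # forwarded
]
forwarded = list(needles(haystack))  # 2 of 4 records leave the sensor
```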

Once I have optimized my operations and know what is happening on a continuous basis, I can start looking at certain incidents in more detail. This is where I may opt to look at data provided by forensic solutions.

Q. Is that the end of the game? What else tops the mind of today's security officers in the quest for more efficient incident response?

A. Security officers are looking for answers to three questions:

  • Am I being attacked right now?
  • Am I vulnerable to that specific attack?
  • Is this system critical?

First, most corporations use security products such as firewalls and intrusion detection systems to detect potential attacks on their systems. Through the use of a real-time monitoring system that both centralizes and normalizes this alert information, knowledge of these attacks can be communicated effectively and efficiently to the security operations team.

Second, many corporations use vulnerability scanning tools on their important systems; these tools are run on a periodic basis to assess a system's compliance or noncompliance to a secure configuration.

Third, organizations routinely conduct risk assessments to determine which systems are most important to their business. However, this data typically is bound in files and not available on-line to be measured against the real-time activity or output of the latest vulnerability scan.

Q. If these three things are being done, how do they relate?

A. The most efficient incident response requires a combination of these information types - evidence of attack, vulnerability posture to attack and system criticality. Through the use of a closely integrated system with an open architecture, these three functions can be closely tied together to provide maximum benefit for the security organization: tell me when I am potentially being attacked, tell me which systems being attacked are vulnerable and, of those systems, tell me which ones are the most critical to my business.
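That combination of the three questions can be expressed as a simple priority ordering. The asset and vulnerability data below are invented placeholders for what a real console would pull from risk assessments and vulnerability scans:

```python
# Hypothetical data a console might hold: which hosts are business-
# critical, and which (host, attack) pairs a scan found vulnerable.
CRITICAL_ASSETS = {"192.168.1.20", "192.168.1.30"}
KNOWN_VULNERABLE = {("192.168.1.20", "buffer-overflow")}

def triage(alerts):
    """Order alerts: vulnerable AND critical first, then critical,
    then vulnerable, then everything else."""
    def score(alert):
        critical = alert["dest_ip"] in CRITICAL_ASSETS
        vulnerable = (alert["dest_ip"], alert["signature"]) in KNOWN_VULNERABLE
        return (vulnerable and critical, critical, vulnerable)
    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"dest_ip": "10.9.9.9",     "signature": "port-scan"},
    {"dest_ip": "192.168.1.20", "signature": "buffer-overflow"},
    {"dest_ip": "192.168.1.30", "signature": "port-scan"},
]
ordered = triage(alerts)
# The vulnerable, business-critical server comes out on top.
```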

Q. What would be a real-world example of what you're describing?

A. Suppose I have 300 servers in my business, being used for everything from customer data to testing. Let's say I have real-time monitoring on each of the network segments, and over the past month all of the servers were scanned for vulnerabilities. As part of my corporate policy and as a result of a previous audit, I have identified the 20 servers that are the most critical to my business because they contain B2B data or customer records.

Throughout the day, I may be notified of hundreds of security-related events for those servers that may require review by the security operations group. As a security officer, one of my first questions is "where do I start?" when it comes to going through those events.

If my real-time monitoring system correlates that there is a series of attacks - for example, a script to attempt to overflow a buffer to gain administrator access - from the same source IP, this immediately raises a red flag. As a result of the security framework I just outlined, in addition to knowing that a coordinated attack is occurring, the security operations team would also know that of those attacks, two are being directed at critical business servers and one of those servers is actually vulnerable to the attack.
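The correlation rule in this example — flag a red alert when a series of attacks arrives from the same source IP — can be sketched as a simple threshold count over the normalized alert stream. The alerts and the threshold value here are illustrative only:

```python
from collections import Counter

# Hypothetical normalized alert stream from the monitoring console.
alerts = [
    {"source_ip": "10.0.0.5",  "signature": "buffer-overflow", "dest_ip": "192.168.1.20"},
    {"source_ip": "10.0.0.5",  "signature": "buffer-overflow", "dest_ip": "192.168.1.21"},
    {"source_ip": "10.0.0.5",  "signature": "buffer-overflow", "dest_ip": "192.168.1.22"},
    {"source_ip": "172.16.0.9", "signature": "port-scan",      "dest_ip": "192.168.1.20"},
]

THRESHOLD = 3  # this many attacks from one source raises a red flag

def flag_coordinated(alerts, threshold=THRESHOLD):
    """Return source IPs behind a series of attacks."""
    counts = Counter(a["source_ip"] for a in alerts)
    return {ip for ip, n in counts.items() if n >= threshold}

suspects = flag_coordinated(alerts)  # suspects == {"10.0.0.5"}
```

A production correlation engine would also window by time and match signatures, but the principle is the same: correlation is only possible once heterogeneous events share normalized fields.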

My response to the attacks then changes from being one of "start at the top of the list and work down," to "immediately address the vulnerable, business-critical system that is being attacked." As a result, the time the company is exposed to the attack can be significantly reduced, and potential damages and loss of data could be avoided.

Joe Judge is a leading information security consultant based in Boston, where he provides consulting services for several Fortune 500 companies. He previously worked in the Technology Risk Services practice at PricewaterhouseCoopers LLC and managed Fidelity Investments' Firewall Architecture Team. He can be reached at joe@intrusion.org.

More information on e-Security, Inc. can be found at www.esecurityinc.com.
