You've seen the reports. You've experienced the attacks. Security spending is up, yet security breaches are on the rise. There clearly is something wrong with the way we allocate our security dollars.

Firewalls, VPNs, routers, intrusion detection, anti-virus, vulnerability assessment -- no one denies that these tools are important components of a modern network security arsenal. But these new layers of security are ineffective without intelligent management infrastructure and processes that can glue distinct tools together into a coordinated security system.

While a device may prevent a class of denial of service (DoS) attacks, for example, it cannot preempt or predict new attacks. And since threats are constantly evolving, this is a stopgap measure at best. Likewise, it may be hours or days before the configuration changes recommended by an event correlation system are actually applied on the network devices.

How, then, do we go about correcting this situation?

First, we must change the way we think about communications and security, linking the two, rather than treating one as mission-critical and the other as an afterthought. Second, we must find ways to map business requirements to network security policies and ensure compliance with these policies.

Patching is crucial, but reducing exposure requires more than applying patches on a regular schedule.

Consider the recent DoS vulnerability discovered in Cisco switches and routers running IOS. Cisco announced the vulnerability on July 16, 2003, and CERT relayed the advisory shortly thereafter. The flaw lets an attacker send a sequence of IPv4 packets carrying protocol numbers 53, 55, 77, or 103 that fills a vulnerable device's input interface queue, causing it to stop processing inbound traffic. Two days after the Cisco announcement, on July 18, an exploit was published on an open mailing list, giving malicious users easy instructions for generating the packet sequence that would mount a DoS attack and disrupt unprotected IOS devices.
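To make the fix concrete, here is a minimal Python sketch of the style of workaround access list the advisory called for: deny the four affected IP protocol numbers, then permit everything else. The helper function is an invention of this article, the protocol names follow the standard IANA assignments, and exact ACL syntax varies by IOS release, so treat the output as a sketch rather than a drop-in configuration.

    # Hypothetical helper, not Cisco's code: render the style of interface
    # ACL the advisory recommended as a workaround -- deny the four affected
    # IP protocol numbers, then permit everything else.
    AFFECTED_PROTOCOLS = {
        53: "SWIPE",
        55: "IP Mobility",
        77: "Sun ND",
        103: "PIM",
    }

    def render_workaround_acl(acl_number=101):
        lines = []
        for proto, name in sorted(AFFECTED_PROTOCOLS.items()):
            lines.append(f"access-list {acl_number} remark block {name} (IP protocol {proto})")
            lines.append(f"access-list {acl_number} deny {proto} any any")
        lines.append(f"access-list {acl_number} permit ip any any")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(render_workaround_acl())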

Several businesses were impacted by this vulnerability. Of the reported incidents, some were due to direct attacks, while others were caused by network managers trying to fix the problem. The hasty patching of hundreds of devices at one time created more problems than the original vulnerability itself.

In a large enterprise, a distributed, often multi-vendor environment creates a management nightmare. Each device must be reconfigured manually, and different types of devices often feature different management interfaces. In a distributed network, by the time a corporation has learned about an exploit, hackers have a huge window of opportunity before that company's many devices are reconfigured. And because reconfiguration is not only labor-intensive but also tedious and error-prone, a determined hacker can be fairly sure the exploit will survive even after patches are applied, albeit on a much smaller scale.

In the case of the Cisco vulnerability, a policy-based system would have headed off the exposure well before the flaw was ever a target for exploits. If your enterprise is running switches and routers that accept traffic using IP protocols 53, 55, 77, and 103, then your security policy was faulty. Why? Because that traffic should never have been allowed through in the first place.

Permitting these protocols is no different from leaving the keys in your car's ignition and then being surprised when the car is stolen. The protocols in question are very rarely, if ever, needed, and therefore have no reason to be allowed. The main reason they are permitted on the vast majority of devices today is that most businesses follow the conventional security model, which values communications over security.

The conventional security model says that all traffic not explicitly denied is automatically authorized. Why does the conventional model function in this insecure manner? Because manually configuring the many disparate devices on a network is so difficult. This leads network security professionals to leave everything open by default, denying only those ports and protocols that are notoriously vulnerable and listed as such in security books and device configuration manuals. Until July 16, 2003, protocols 53, 55, 77, and 103 were not on that list.

As long as companies take this approach to security, they will be vulnerable to new attacks. This is especially true of companies with large, distributed, heterogeneous networks. Companies with security policy management software in place, on the other hand, will be alerted to such vulnerabilities, will have an automated means of correcting them, and will receive automatic reports verifying that the corrections have occurred.

Instead of leaving ports open by default, a security-conscious IT administrator should assume that everything not explicitly authorized is automatically denied. Every authorization that is defined is then a calculated risk that leaves little room for the unknown. This approach would protect companies from new threats such as the IOS vulnerability, SQL Slammer, and other attacks that rely on exotic ports and protocols that have no business being used in the first place.
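A toy model makes the difference in stance plain. In the Python sketch below, with all names invented for illustration, the conventional policy passes traffic on a port nobody thought to list, while the default-deny policy blocks it without anyone having to anticipate the threat:

    # Toy model of the two stances; Rule and Policy are invented names,
    # not any vendor's API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        protocol: str   # e.g. "tcp" or "udp"
        port: int
        allow: bool

    class Policy:
        def __init__(self, rules, default_allow):
            self.rules = rules
            self.default_allow = default_allow

        def permits(self, protocol, port):
            for rule in self.rules:        # first matching rule wins
                if rule.protocol == protocol and rule.port == port:
                    return rule.allow
            return self.default_allow      # the default stance decides the rest

    # Conventional model: everything open except a short, well-known deny list.
    conventional = Policy([Rule("tcp", 135, False)], default_allow=True)

    # Default-deny model: only explicitly authorized traffic passes.
    locked_down = Policy([Rule("tcp", 80, True), Rule("tcp", 443, True)],
                         default_allow=False)

    # udp/1434, the port SQL Slammer exploited, was on nobody's deny list:
    assert conventional.permits("udp", 1434) is True    # slips through
    assert locked_down.permits("udp", 1434) is False    # blocked, no new rule needed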

Critics of this approach will argue that it burdens network administrators, forcing them to define many different types of permissible communications. This may seem like a daunting task, especially when different business units within the company may well have different requirements. But compared with the task of recovering from an attack, the burden of defining permissible types and classes of traffic is modest.

At the very least, this process does not bring your business to a grinding halt, the way an attack might. And, in the long run, it could very well keep your business up and running when new attacks emerge.

Since each class of device -- from router to switch to firewall -- requires a different management interface, the ideal solution is a management abstraction layer that resides above these many devices. Such a layer would take the configuration and policy burden off already overworked network security administrators and apply policy to each and every network device in a logical and consistent fashion.
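As a rough sketch of the idea, assuming invented rule and renderer names and deliberately simplified syntax, a single logical rule can be compiled into each vendor's own configuration language:

    # One logical rule compiled into each vendor's syntax; the renderers
    # are invented and heavily simplified for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DenyRule:
        protocol: str   # "tcp" or "udp"
        port: int
        comment: str

    def to_cisco_ios(rule, acl=110):
        return (f"access-list {acl} remark {rule.comment}\n"
                f"access-list {acl} deny {rule.protocol} any any eq {rule.port}")

    def to_iptables(rule):
        return (f"iptables -A FORWARD -p {rule.protocol} "
                f"--dport {rule.port} -j DROP   # {rule.comment}")

    RENDERERS = {"cisco_ios": to_cisco_ios, "iptables": to_iptables}

    def compile_for_fleet(rule, fleet):
        """Render one logical rule for every device, whatever its vendor."""
        return {name: RENDERERS[vendor](rule) for name, vendor in fleet.items()}

    fleet = {"core-rtr-1": "cisco_ios", "dmz-fw-1": "iptables"}
    rule = DenyRule("tcp", 23, "deny unencrypted telnet")
    for device, config in compile_for_fleet(rule, fleet).items():
        print(f"--- {device} ---\n{config}")

The point of the design is that the decision is made once, at the policy layer, while the per-device translation is mechanical.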

Security policy management gives IT managers greater control over their networks. If a Fortune 500 company realizes that instant messaging poses a security risk, it can disallow that form of communication. With traditional networks, shutting down IM traffic is a nearly impossible task. How do you execute the order? And once the order is sent, how do you verify compliance?

In a policy-based system, this rule is readily available and the implementation is seamless. Security policy management software governs this process, from the creation of the policy through the execution of the change in configuration to the reporting that verifies that the policy is in place.
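Here is a hedged Python sketch of that create/enforce/verify loop, with all device I/O stubbed out; a real product would speak each device's own management interface. The IM example from above supplies the rule (tcp/5190 is the classic AOL Instant Messenger port):

    # Create / enforce / verify, end to end. Device I/O is stubbed out.
    def create_policy():
        return {"deny": [("tcp", 5190)]}   # block IM per the example above

    def push(device, policy):
        print(f"[push] {device}: applying {policy}")   # stub for device I/O

    def running_config(device):
        return {"deny": [("tcp", 5190)]}   # stub: what the device reports back

    def verify(device, policy):
        """Compliance check: does the device enforce every policy entry?"""
        actual = running_config(device)
        return all(entry in actual["deny"] for entry in policy["deny"])

    devices = ["core-rtr-1", "dmz-fw-1", "branch-rtr-7"]
    policy = create_policy()
    for dev in devices:
        push(dev, policy)

    print("compliance report:", {dev: verify(dev, policy) for dev in devices})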

And the process is the same whether you have one device or hundreds of devices from different vendors.

Security policy management enables network administrators to be proactive, ensuring that best-practice security rules are enforced without human intervention on each device across the global enterprise.

Networks are constantly changing to meet evolving business needs, and only a policy-based, process-driven approach to security will ensure that changing networks remain secure networks.

Anything less puts your company, your customers, and your business partners at risk.

Gilles Samoun is CEO and chairman of Solsoft, Inc., a developer of security policy management software based in Mountain View, Calif. Samoun is also the founder of Qualys Technologies. Write him at gsamoun@solsoft.com.