I'd guess you've heard a firewall engineer's attempt at pith that sounded something like, ''Deny everything that isn't explicitly permitted.'' This is the conventional wisdom approach to configuring firewalls, and it (by and large) replaced the ''allow everything that isn't explicitly denied'' mindset.

These two ways of viewing network traffic are sometimes referred to as whitelists and blacklists, respectively. What's more, we can apply similar approaches in many different areas of Information Security. Both have useful applications, depending on the context at hand.

Let's explore a bit.

The ''deny everything except...'' approach is particularly useful in tightly managed and configured environments (think production data centers) in which we really know and understand everything that the servers need to do their jobs. The advantages here should be pretty obvious -- if something isn't absolutely necessary to the business function that the servers are running, disallow it.
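The ''deny everything except...'' idea can be sketched in a few lines. This is a simplified illustration, not a real firewall: the protocol/port pairs below are hypothetical examples of what a tightly managed server might actually need.

```python
# The whitelist: the ONLY (protocol, port) pairs the servers need.
# These entries are illustrative assumptions, not a recommendation.
ALLOWED = {
    ("tcp", 443),   # the application's HTTPS front end
    ("tcp", 22),    # SSH, presumably restricted to a management network
}

def permit(protocol: str, port: int) -> bool:
    """Default-deny: anything not explicitly listed is refused."""
    return (protocol, port) in ALLOWED

# Everything else falls through to deny -- no list of ''known bad''
# things needs to be maintained.
print(permit("tcp", 443))   # True
print(permit("udp", 53))    # False -- not needed, so not allowed
```

Note that the safety here comes from the default: forgetting to list something merely breaks a legitimate function (which gets noticed and fixed), rather than silently admitting an attacker.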

That should span far more than just network configuration. It should include operating system configuration, application integration, and even the application software itself. I've seen a lot of situations where an intruder broke through a site's perimeter security and then used their own tools to dig deeper into the victim's networks. Nothing good comes of that scenario...

Conversely, the ''allow everything except...'' approach is more commonly used in desktop environments, where you want to afford your users the greatest flexibility in what they can do with their systems. The big danger lies in the phrase ''explicitly denied''. It implies you have an accurate and up-to-date view of the things that need to be denied. That, in turn, implies you actively maintain that list. And the bottom line is that missing any bad thing can result in the systems being compromised.
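The inverse policy looks almost identical in code, but the failure mode is reversed. Again, the ports below are hypothetical examples of ''known bad'' traffic, chosen only to illustrate the maintenance problem:

```python
# The blacklist: everything is allowed EXCEPT what's listed here.
# These entries are illustrative assumptions, not a real policy.
DENIED = {
    ("tcp", 23),    # telnet -- long known to be risky
    ("tcp", 135),   # a port historically abused by worms
}

def permit(protocol: str, port: int) -> bool:
    """Default-allow: anything not explicitly denied gets through."""
    return (protocol, port) not in DENIED

# The weakness: a brand-new bad thing that nobody has added to the
# blacklist yet sails straight through.
print(permit("tcp", 4444))  # True -- allowed simply because it's
                            # not on the list
```

Here a gap in the list fails silently in the attacker's favor, which is exactly the maintenance burden described above.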

Sound familiar? It should.

That's how a lot of our IT security products function. Think anti-virus products that get nightly updates from their respective vendors. Think intrusion detection systems that require signatures to recognize the latest attack posted to the Internet. This approach is almost always used in configuring -- and I'm using that term loosely -- the file access controls of our desktop operating systems.

It should be obvious that there's a great deal of danger in using the ''allow everything'' approach, and it really should only be used when all other options have been exhausted. For starters, as I indicated, it's prone to error. Miss one bad thing and your house of cards can come crashing down. Why do you think IT Security suffers whenever a new virus or worm hits the net before our anti-virus and IDS vendors can develop signatures for it?

It's also closely related to the torrents of false positives that we see in many IT security products. IDS sensors, for example, diligently watch over a network or host for everything that they know to be bad. Whenever they see something that matches one of their signatures, they trigger alerts. What happens, more often than not, in our IDS and firewall monitoring centers when products deliver false alarm storms? Yup, they either get removed or they get ''tuned'' to the point that they don't alert on much of anything of importance.

And that brings me to my central theme today -- the elusive set known as ''anything of importance''. By taking these off-the-shelf products and watching for everything that is known to be bad, even if we tune them to reduce the false alarm storms, we're still missing out on things that should be of real importance to the security of our applications.

That's because so many of these products are, in essence, executing an ''allow all except...'' policy.

What's worse, the set of things that they define as being explicitly denied, or, in the case of an IDS, explicitly alerted on, is a generic set of bad things that have been observed on the Internet, but may have little or nothing to do with the set of bad things that can happen to your software.

I'll say it again because it's a key point here: the set of things that many of these products watch for may have little or nothing to do with the set of bad things that can happen to your software.

So, you ask, how do I better determine the set of bad things that can happen to my software? A fair question, although I'd still prefer to see you define the set of good things that your software must do to execute its business mission.

But if you simply must take the ''allow everything except...'' approach, then the set of bad things you define should be unique to your software. The only way I know of getting there from here is to talk to the software developers and/or integrators, because they're not likely to volunteer that information without being asked.

For example, your file access controls and audit alerting should be fine-tuned to the files that are relevant to your software. Turn up the access controls using a ''deny everything except...'' mindset (e.g., allow the application process to read, but not alter, the config file where you store start-up parameters like its home directory). Turn on event logging for the individual files, registry keys, etc., that are of great importance to your software. (If the application process ever tries to alter that config file, even if the access fails, someone should be paged -- even at 3 a.m.)
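The ''page someone if that config file changes'' idea can be sketched simply. This is a minimal illustration, assuming a hypothetical config file name and a stub `page_oncall()` standing in for your real paging or alerting system; in production you'd use OS-level audit facilities rather than polling:

```python
import hashlib
import pathlib

def fingerprint(path: pathlib.Path) -> str:
    """Hash the file's contents so any alteration is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def page_oncall(message: str) -> None:
    # Stand-in for a real paging/alerting mechanism.
    print("PAGE:", message)

def check(path: pathlib.Path, known_good: str) -> bool:
    """Compare the file to its known-good hash; page on any mismatch."""
    if fingerprint(path) != known_good:
        page_oncall(f"{path} was altered -- investigate now")
        return False
    return True
```

You'd record the known-good fingerprint at deployment time and run the check periodically; any mismatch, expected or not, warrants that 3 a.m. page.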

In my view, this practice is the intersection of operations security and software security, and it's something that I see far too little of in production data centers. Don't get burned by an ''allow everything'' attitude!