I'd even argue that reactive security solutions have already failed us.
No, I'm not saying that we have to abandon everything we've done, but I am saying that we do need to be careful not to rely on reactive solutions as our sole means of defense. I have no doubt that our current reactive practices are resulting in undue expenses and, in turn, erosion of customer confidence.
Let's start by taking a quick look at how we got ourselves into this predicament.
What these signature-based products -- antivirus, intrusion detection, anti-spyware, and the like -- all have in common is a human element: When a new virus, attack, Trojan horse, or piece of spyware is discovered, an engineer at the product vendor analyzes the malware carefully. Once the analysis is complete, the vendor releases a 'signature' of some sort that matches the malware and enables the product to identify it thereafter. Of course, that signature only works if the vendor distributes it to all of its customers and the customers then install it across all of their systems.
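The core limitation of this model can be sketched in a few lines. The following toy scanner (the payloads and signature set are entirely hypothetical, not drawn from any real product) matches samples against hashes of previously analyzed malware -- so anything new, by definition, slips through:

```python
import hashlib

# Each "signature" here is just the SHA-256 hash of a known-bad payload.
# Detection therefore works only for malware an analyst has already seen.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-worm-payload").hexdigest(),
}

def is_known_malware(payload: bytes) -> bool:
    """Return True only if the payload matches a previously analyzed sample."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# A brand-new variant evades detection until an analyst adds its signature.
print(is_known_malware(b"example-worm-payload"))     # True
print(is_known_malware(b"example-worm-payload-v2"))  # False
```

Real products use more sophisticated pattern matching than a flat hash, but the reactive property is the same: the variant must be caught and analyzed before it can be recognized.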
In today's world, it's a pretty safe bet that the time from analysis to deployment of a signature set is, at a bare minimum, a full business day for products that have the very best signature distribution capabilities.
In a similar vein, our operating system and application product vendors use a reactive-based model for distributing product updates, service packs, and patches. Just because a vendor makes a patch available doesn't mean that the user base of that product is now protected.
We don't need to look any further than the headlines to validate that this process is failing.
Although this approach may have been adequate throughout the 1990s, today's unprecedented levels of connectivity and desktop computing extensibility have made it obsolete and unacceptable. According to recent media reports, new viruses, such as Mydoom and Bagle, are showing up and spreading at rates that we've never seen before.
We can no longer keep up with this pace using reactive solutions. The best we can hope for is to delay the inevitable.
These factors are what have brought me to the conclusion that we can no longer continue this way. We need to move forward to other solutions.
I'm a big believer in layered security approaches, and this is a prime example of when we should be looking to add additional layers to our defenses.
I should also point out that I don't claim to have all the answers. If I did, then I'd most certainly be working on bringing those products and technologies to the market! I will, of course, continue watching the market for product and technology solutions as new and improved versions hit the streets.
We're likely to find some relief by revisiting the fundamental principles of secure computing. I expect that we'll soon see, for example, more effective use of "separation of privilege" and "compartmentalization."
For example, email clients that can effectively "sandbox" attachments so they can't do harm to the user's computer might well help prevent or slow new viruses from spreading. That way, when the user inevitably clicks on an attachment that contains a new virus for which no signature exists, the virus doesn't hop to all the user's friends -- or should I say ex-friends.
The Java Virtual Machine (JVM) is a great example of a sandbox architecture that protects the host computer from software run within a JVM-enabled browser. It does this by enforcing a policy governing access to local system resources, such as disk drives and network connections. Any Java applet running in the sandbox is prohibited from accessing resources that the policy disallows.
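The essence of that model -- a policy enumerates permitted resources, and every access request is checked against it before being granted -- can be sketched as follows. This is a conceptual illustration in Python, not the actual JVM mechanism, and all of the paths and host names are made up:

```python
class SandboxPolicy:
    """A toy access-control policy: anything not explicitly allowed is denied."""

    def __init__(self, allowed_paths, allowed_hosts):
        self.allowed_paths = set(allowed_paths)
        self.allowed_hosts = set(allowed_hosts)

    def check_file_access(self, path: str) -> None:
        if path not in self.allowed_paths:
            raise PermissionError(f"policy denies file access: {path}")

    def check_network_access(self, host: str) -> None:
        if host not in self.allowed_hosts:
            raise PermissionError(f"policy denies network access: {host}")

# Sandboxed code gets a scratch directory and one server -- nothing else.
policy = SandboxPolicy(allowed_paths={"/tmp/applet-scratch"},
                       allowed_hosts={"applets.example.com"})
policy.check_file_access("/tmp/applet-scratch")   # permitted
try:
    policy.check_file_access("/etc/passwd")       # denied by policy
except PermissionError as e:
    print(e)
```

The important design choice is default-deny: the sandboxed code never decides what it may touch; the host's policy does.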
Another possible approach to reducing the rapid spread of new malware could lie in better screening of incoming -- and outgoing -- email at the enterprise level.
Instead of continuing to use a blacklist approach that blocks emails containing attachments known to be bad, how about only accepting emails that are at least more likely to be good? That is, block emails containing attachments and require the recipient/sender to vouch for their validity before allowing them to pass. This could be automated to a large degree, but would no doubt result in additional effort.
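The quarantine-and-vouch flow described above might look something like this in skeleton form. This is a hedged sketch of the idea, not a real mail-gateway implementation; the function names and message IDs are hypothetical:

```python
# Default-deny for attachments: hold the message until a human vouches
# for it, rather than passing it unless it matches a blacklist.
quarantine = {}

def receive_email(msg_id: str, has_attachment: bool) -> str:
    """Gateway decision: attachment-free mail flows; the rest is held."""
    if not has_attachment:
        return "delivered"
    quarantine[msg_id] = "held"
    return "quarantined"

def vouch(msg_id: str) -> str:
    """A recipient or sender confirms the attachment is expected."""
    if quarantine.get(msg_id) == "held":
        quarantine[msg_id] = "released"
        return "delivered"
    return "unknown message"

print(receive_email("msg-1", has_attachment=False))  # delivered
print(receive_email("msg-2", has_attachment=True))   # quarantined
print(vouch("msg-2"))                                # delivered
```

Much of the vouching could be automated -- for instance, by pre-approving expected senders -- but as noted, some added effort is the price of flipping the default from "allow unless known bad" to "hold unless vouched for."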
But wouldn't the additional effort be better than the status quo?
In addition, we need the entire software development community focusing more heavily on issues of software security, from the earliest stages of a product's design through its deployment, operation, and maintenance. The days of customers accepting products that contain easily avoidable flaws, such as buffer overflows, are over.
But that's another column for another day.
Kenneth van Wyk, a 19-year veteran of IT security, is the principal consultant for KRvW Associates, LLC. The co-author of two security-related books, he has worked at CERT, as well as at the U.S. Department of Defense.