November marked the sixteenth anniversary of the Morris Internet Worm incident, making this a good opportunity to revisit some of the important lessons that came from it so we don't repeat the same mistakes.

The worm caught the Internet of 1988 quite by surprise and resulted in some pretty major changes in how we do things. For instance, it kicked off the Incident Response discipline. This was in large part thanks to the U.S. Department of Defense and its formation of the CERT Coordination Center at Carnegie Mellon University's Software Engineering Institute.

Indeed, in today's computing environments, no information security program is complete without an Incident Response plan in place. In the public sector, U.S. government agencies are even required by law, under FISMA, to have one. At the same time, there seem to be at least as many definitions of 'Incident Response' as there are plans and teams in place to handle it.

To clear up any confusion, let's take a closer look at what the real critical aspects of Incident Response are.

For starters, one of the most common mistakes that I find in companies' Incident Response plans is that they tend to focus purely on the technical aspects of how to handle an incident. Security incidents are business concerns first and foremost, and need to be dealt with as such.

So, instead of falling into the trap of writing a plan that provides only a 24x7 call tree and a set of technical procedures for scrubbing affected computers of viruses, worms, spyware, and the like, consider the business process that needs to be followed. (That's not to say that the call tree and technical procedures aren't worth documenting; they are. But they should be secondary to clearly codifying the decision and coordination process among the organization's business representatives.)

For example, if your company's Web commerce systems are compromised (or believed to be compromised), who is authorized to decide whether or not to shut down the site while the technical aspects of the incident are handled? That is a huge decision to make, particularly if the site generates significant revenue for your company.

One good practice is to define some levels of incident severity that tie back to business impact and exposure, and then pre-load some of these difficult decisions in a way that empowers or authorizes the Incident Response team to rapidly take needed technical actions.

The severity levels should include such criteria as potential for loss of life, potential for loss of customer records, potential for unauthorized disclosure of private customer data, and so on. (Note that none of these are technical in nature.) For each severity level, the Incident Response team should have a clear set of actions that it is authorized to take, and/or decision-making processes that it must adhere to.
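To make the idea of pre-loading those decisions a bit more concrete, here is a minimal sketch of what a severity matrix might look like if an organization chose to capture it in code or configuration. The level names, criteria, pre-authorized actions, and escalation rules below are purely illustrative assumptions, not a prescribed standard; the point is simply that each level pairs business-impact criteria with the actions the team may take without further approval and the process it must follow otherwise.

```python
# Illustrative sketch only: the severity levels, criteria, and actions below
# are hypothetical examples, not a prescribed standard.
from dataclasses import dataclass


@dataclass
class SeverityLevel:
    name: str                       # e.g., "SEV-1"
    criteria: list                  # business-impact criteria that trigger this level
    preauthorized_actions: list     # actions the IR team may take immediately
    escalation: str                 # decision-making process the team must follow


SEVERITY_MATRIX = [
    SeverityLevel(
        name="SEV-1",
        criteria=["potential for loss of life",
                  "potential for unauthorized disclosure of private customer data"],
        preauthorized_actions=["isolate affected systems",
                               "take the Web commerce site offline"],
        escalation="notify senior management and general counsel immediately",
    ),
    SeverityLevel(
        name="SEV-2",
        criteria=["potential for loss of customer records"],
        preauthorized_actions=["isolate affected systems"],
        escalation="business owner approval required before any shutdown",
    ),
]


def authorized(level_name: str, action: str) -> bool:
    """Return True if the named action is pre-authorized at the given severity level."""
    for level in SEVERITY_MATRIX:
        if level.name == level_name:
            return action in level.preauthorized_actions
    return False


if __name__ == "__main__":
    # During an incident judged SEV-1, may the team take the commerce site down
    # without convening a meeting? Here, yes, because it was decided in advance.
    print(authorized("SEV-1", "take the Web commerce site offline"))
```

However the matrix is recorded, whether as code, a spreadsheet, or a page in the plan, the value lies in having the business decide the hard questions before the crisis, so the technical team can act quickly when one occurs.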

During the planning process, it should quickly become obvious that the Incident Response plan needs endorsement from the most senior levels of management, and that training and testing of the plan and its participants are well worth the time and effort. In any case, planning these non-technical aspects of Incident Response in advance is vital to the overall effectiveness of the response team.

As the Morris worm spread across the Internet in 1988, many sites were quick to disconnect, while others stayed online, blissfully unaware of the coming tsunami. In either case, the response decisions, if they were addressed at all, were typically made by the technical staff at the sites. Admittedly, there was little or no commerce on the Internet at the time, but the point is that Incident Response actions really need to be carefully considered in advance.

Another common mistake in Incident Response planning is to neglect to include all of the necessary players. Consider how your Incident Response staff will need to interact with human resources, general counsel, public affairs and law enforcement. Some of these interactions may not be obvious at all, but are likely to become very important during a crisis.

For example, many major incidents have a way of getting media coverage, and often at the least convenient time for the affected sites. Taking a few minutes to brief your company's public affairs representatives on the nature of the incident and the sensitivities of the situation, along with what to say and what not to say, can go a long way toward protecting your company's reputation. Consider handing the CEO an index card of talking points in the event that she gets ambushed by a reporter in the lobby.

These fairly subtle 'attention to detail' sorts of things can be what separates a merely adequate Incident Response program from a great one. The common thread should be advance planning of processes and procedures that protect the business in the event of a security crisis. Let the technical details flow from there, not the other way around.

If the Internet had carried in 1988 even a fraction of the business it does today, you can be sure that the financial losses would have been major. Let's learn from history and ensure that our Incident Response plans are ready for whatever the future holds.

Kenneth van Wyk, a 19-year veteran of IT security, is the principal consultant for KRvW Associates, LLC. The co-author of two security-related books, he has worked at CERT, as well as at the U.S. Department of Defense.