Dave Johnson was in charge of keeping Grant Thornton, a business advisory firm with 50 offices in the U.S. and more than 650 offices in 109 other countries, up and running despite a national disaster that cut off one of the company's four national hubs. And he stood halfway across the country, watching Manhattan erupt into chaos.
''Our folks in New York were just four blocks from Ground Zero and had to be evacuated from the facility,'' says Johnson, remembering the events of Sept. 11. ''Everyone was upset. Smoke was pouring in. We had lost voice connectivity and data connectivity. Everything went red on our consoles.''
But Grant Thornton didn't shut down. Actually, the company barely missed a beat. The Chicago hub picked up the slack and kept the business running -- almost as usual.
Like millions of others in the United States, Johnson never expected his computer network to have to stand up to a catastrophic terrorist attack. But unlike many IT managers around the country, Johnson was prepared, and Grant Thornton kept working.
Now a year later, what have all those IT managers learned from the terrorist attacks that stunned the world, crippled some businesses and tripped up the American economy?
Not enough, according to industry security experts.
''Right after Sept. 11, there was an immediate response from IT,'' says Dan Woolley, a vice president at Reston, Va.-based SilentRunner Inc., a network security firm. ''We had a gut reaction. We haven't had a major hit in a while. A lot of people's eye is coming off the ball... We have a tendency, as a culture, to say if nothing has happened, I'm going to take the risk instead of spend the limited number of dollars in my budget.''
Woolley and other security analysts say the attacks raised companies' awareness of the need for better physical and network security, and of the need to screen contractors, business partners, vendors and employees.
But it didn't necessarily lead to a lot of major network security changes.
''We saw a lot of people saying they needed a backup plan,'' explains Woolley. ''Then they realized they didn't really know how to put it together and it was going to be a big effort. Then they thought about PC recovery efforts. Most people didn't know what to backup or where to back it up to. And it was going to cost a huge amount of money. Then they said they would protect themselves from immediate threats, like worms, viruses and hackers. So they spent their money and time on virus protection and intrusion detection.''
And that's not a bad thing, according to Woolley. It's just not the major security changes that CIOs and CSOs had been talking about last fall. Some companies definitely have implemented those changes. But they're not in the majority.
This summer, AT&T surveyed more than 1,000 U.S. businesses with 100 employees or more. The study showed that 25% of the midsize and large companies surveyed still don't have a business continuity or disaster recovery plan in place. And of the companies that do have plans, 27% haven't reviewed or evaluated them in the past year, and 19% haven't tested their plans in the last five years.
International Data Corp., a Framingham, Mass.-based industry analyst firm, backs up those numbers, stating that 109,000 TB of data are unprotected in enterprises worldwide, and 314 million business PCs are still unprotected around the globe.
And Tom Hickman, engineering operations and quality assurance manager at Framingham, Mass.-based Connected Corp., a PC data protection and management company, says increasing preparedness and security isn't actually an IT issue.
It's strictly a business issue.
''There is no technology problem,'' says Hickman. ''There's only business problems. That's how companies have to look at it. It's all about maintaining the pipeline of incoming business. It's ensuring that you're able to function in the event of a natural gas explosion, an earthquake or a wide-scale unspeakable disaster like a terrorist attack.''
For Grant Thornton's Johnson, the attack validated the time, money and effort the company spent rebuilding its computer architecture several years ago. It went from a decentralized operation, with every office running its own hodgepodge of PCs, switches and servers, to a centralized network with four major regional hubs and a central data center. The hubs were built identically. Every day, each hub backs up another, sharing information and standing ready to bear the added weight if a sister hub goes down.
''We were living day-to-day with a redundant environment,'' says Johnson. ''We live a disaster recovery model every day.''
Johnson says he wonders now what would have happened if the New York hub hadn't been four blocks from Ground Zero. What if it had been right at Ground Zero?
He says he figures that 80% of the information in that hub could have been quickly recovered. The other 20% of the information, whether it was jotted down on Post-It notes stuck to computer monitors or messages left on voicemail, would have been lost. Johnson says he'll be spending the next few years working on lowering that 20% number.
''If you live the plan and use it for circuit outages and the wayward backhoe, then you have a better appreciation for it and you can respond even more effectively,'' he says. ''My dad was a Chicago fireman for 30 years. He never knew what he was going to face every day, but he knew his team was prepared and they had a plan to face the unexpected. That's critical for the future of the U.S. economy.''