Establishing Digital Trust: Don't Sacrifice Security for Convenience
Just a week or two ago, I heard about a security conference where the organizers inadvertently distributed a virus-infected USB stick to the attendees. Of course, everyone was shocked and amazed, and I'm sure the organizers put the fire out quickly and professionally.
(I won't bother naming/embarrassing the conference because, let's face it, it could have been any of us, right? There but for the grace of ...)
When I heard about the infection, I immediately (ok, after I stopped giggling) thought back to similar incidents with tainted commercial software media circa 1990. I asked myself: how could this happen in 2008?
Seriously, how could this happen in 2008? Was it an act of ignorance?
I think not; these guys are truly good, despite the momentary lapse. Was it an act of hubris? I seriously doubt it. Complacency, perhaps? I just don't know, but I think the situation is worth exploring a little more deeply, and worth looking beyond this one unfortunate incident.
If you've ever sat through one of the classes I teach, you'd probably recognize a Keynote slide I use often. The slide contains an old photo of the Tacoma Narrows Bridge collapsing in November 1940 (there's footage of it on YouTube). The caption I use reads, "We're really bad at learning from history."
We are really bad. I can't think of a single other discipline that is so gosh-darned pathetic at learning from its mistakes. Perhaps it's my engineering degree and background. Perhaps it's from growing up around the aviation industry, with a retired 747 pilot for a father. But I've watched other disciplines, and those folks study their failures and improve from them.
Isnt that a novel concept?
Think about it for a bit in the context of information security. The world saw a major buffer overflow in a C program on November 2, 1988, in the form of the so-called Internet Worm. The buffer overflow was in the Berkeley UNIX finger daemon, and that flaw, along with a couple of other problems, enabled Robert T. Morris's worm program to spread rapidly across the Internet.
Just a few months later, the Communications of the ACM, a highly respected academic journal, published an entire edition devoted to analysis of the worm and its aftermath. I remember working at another university at the time and thinking, "This is great stuff; now we all understand buffer overflows and won't see any more of those in our software."
Boy, was I naïve.
In a more modern context, consider cross-site scripting (or XSS) attacks on web applications, SQL injection, or just about any of the OWASP Top 10 list of web application vulnerabilities (see http://www.owasp.org).
We keep making the same mistakes over and over. How can that be?
Are we even too stupid to come up with new mistakes? I sure hope not.
So, to help us get into the right frame of mind, and to stop dwelling on how dumb we all are, let's consider a couple of simple but positive steps we can take, starting right now, to prevent the same problems from cropping up time after time.
1. Study and learn.
Do you give your techies and software developers the time to learn security lessons? Training is a good starting point, but it has to go beyond that. They also have to learn. (It's the difference between transmitting and receiving.)
If you work with web technologies, start by getting them each a copy of OWASP's free WebGoat and WebScarab tools, and have every one of them work through every single exercise in WebGoat. In the training I do, I've found no more effective learning tool than this excellent piece of free software from OWASP. Do it. No excuses.
2. Use checklists.
Here's the son-of-a-pilot part of me showing up, I suppose, but checklists are vital. For all the repeated, mundane tasks you do, perhaps producing USB sticks for conference attendees, or putting together software you're selling to your customers, come up with a simple checklist and follow it.
Make sure you're following a two-person rule with the checklist: one person performs each step, and a second person verifies it and marks it completed.
Sounds tedious, doesn't it? Well, I can assure you that infecting your customers (or conference attendees) with malware or some other nasty is anything but tedious, at least for the first 24 or 48 hours. After that, it might become tedious, when your customers go to your competitors.
So, if it's excitement you crave, don't bother with checklists. Don't bother learning from history. None of this stuff is for you. It is mundane. It can be downright boring.
But next time you're reading about your company and your former customers on CNN, perhaps you'll start to yearn for the tedium. At the very least, I'm willing to bet your customers will prefer it.