We’ve all suffered from software that was clearly not built with security in mind. Quite often, features meant to help us have ended up hurting us. After all, who would ever have thought a spreadsheet could give an attacker a way to break into a computer halfway around the world with nothing more than a crafted email message? I’ll tell you who: we security folks did.

Case in point: I vividly recall when Microsoft Word gained its Visual Basic-based macro scripting feature. The anti-virus community screamed, begged, and pleaded for that feature to be omitted, or at least significantly hobbled, so it couldn’t be used against us. They lost that battle, of course, and we’ve since seen many examples of exactly the sort of attack they foresaw.

Now, those of you who know me know my #1 rule of information security: don’t impede business. It’s unlikely those anti-virus folks could have thwarted disaster even if they’d been in the meeting room when the macro scripting discussions first took place. But they’d have had a far better chance there than by sniping at products after they’ve been developed, announced, advertised, and released.

Defensive Development

Many of you know I’m a huge advocate of software security. If you’re not familiar with the term, you should be. Think of it as defensive software development. Sometimes that means adding security features such as encryption; other times it means handling user input as though it were some deadly toxin.
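To make that “toxin” mindset concrete, here’s a minimal Java sketch of whitelist input validation. The class and method names are my own invention, not from any particular product; the point is that input is rejected outright unless it matches what the business actually needs.

    import java.util.regex.Pattern;

    // Illustrative sketch only: treat all user input as untrusted until
    // it passes an explicit whitelist check.
    public final class InputSanitizer {

        // Accept only the characters the business actually needs (a
        // whitelist), rather than trying to enumerate every dangerous
        // one (a blacklist).
        private static final Pattern VALID_USERNAME =
            Pattern.compile("^[A-Za-z0-9_]{1,32}$");

        public static String requireValidUsername(String raw) {
            if (raw == null || !VALID_USERNAME.matcher(raw).matches()) {
                // Reject outright; don't try to "clean up" bad input.
                throw new IllegalArgumentException("invalid username");
            }
            return raw;
        }
    }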

Sounds good, but software security is not likely to succeed without some value-added input from the security folks who have been out there in the trenches fighting off attacks. We’re the ones who have studied and analyzed these attacks in detail. It’s been my experience that this awareness and technical knowledge base is sorely lacking among our software developers. Although they’re more often than not brilliant people who do excellent work, studying and learning from attacks has never even been on their radar screens.

Further, software security can only succeed with careful and appropriate input from the security folks. We need to be active participants in the software development process, not after-the-fact reviewers of the finished work. The latter is how things are done all too often, and it’s downright counterproductive. It breeds adversarial relationships between developers and security people, when what we should be building are collaborative relationships aimed at a common goal: software secure enough to meet our users’ needs.

This concept has been one of my many “soapboxes” lately. I passionately believe we’re missing opportunities for doing things much better. So, to that end, here are a few suggestions you might want to consider trying with your software development folks:

  • Requirements – Help the development team assess the product requirements as early as possible. Look for “features” that an attacker could misuse to do harm. We’ve all seen those in products; the key is to catch them before the damage is done. And rather than just saying “no,” try to find ways of accomplishing the goal without handing an attacker a means of attack.

  • Design – Architectural risk analysis is the process of methodically studying a design for weaknesses. Look for susceptibility to known attacks. Look for design ambiguities that are likely to lead to implementation problems (e.g., using an insufficiently secure random number generator to produce session keys; a sketch of this follows the list). Look for areas of general weakness where an attacker might be able to compromise the application. Then map what you find against the business risks and produce a prioritized list of design defects to address.

  • Implementation – Some IT security shops are starting to perform automated code reviews on the code coming out of their development shops. Too often, this is done in much the same way as penetration testing: run the tool(s), hand the “customer” a checklist of things to fix. Code review isn’t that simple. Where you can really provide value is in helping the developers interpret and make decisions on the tools’ output: put each finding in the context of the attack it represents, and help decide whether it actually poses a problem in this application (an example of such a finding also follows the list).

  • Testing – Software security testing is tough to do well. One of the biggest problems is helping the QA folks come up with realistic test scenarios that rigorously exercise the risks found during the risk analysis. They’re great at developing tests to functional specifications, but security test scenarios are a different animal, and that’s where you can provide real insight (a sample test closes out the examples below).

  • Deployment and operations – Help ensure applications are properly deployed and run in environments configured to meet the security (and functional!) needs of each application.
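Picking up the random number generator example from the design bullet, here’s a minimal Java sketch, with hypothetical names, contrasting a predictable generator with a cryptographically strong one for producing session tokens.

    import java.security.SecureRandom;
    import java.util.Base64;

    // Hypothetical sketch: generating session tokens server-side.
    public final class SessionTokens {

        // Weak choice a risk analysis should flag: java.util.Random is
        // seeded predictably, and its output can be reconstructed from
        // a few observed values.
        //   private static final java.util.Random rng = new java.util.Random();

        // Stronger choice: SecureRandom draws from a cryptographic source.
        private static final SecureRandom rng = new SecureRandom();

        public static String newToken() {
            byte[] bytes = new byte[32]; // 256 bits of entropy
            rng.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding()
                         .encodeToString(bytes);
        }
    }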
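On the code review point, this is the kind of finding worth walking through with a developer rather than just listing on a checklist. The example is hypothetical Java/JDBC of my own making: a query built by string concatenation, next to the parameterized form a reviewer would steer the developer toward.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public final class UserLookup {

        // The sort of line a static analysis tool flags: user input is
        // concatenated straight into SQL, so input like "x' OR '1'='1"
        // rewrites the query. Whether it's exploitable depends on where
        // 'name' comes from -- that context is what the reviewer adds.
        static ResultSet findUserUnsafe(Connection db, String name)
                throws SQLException {
            Statement stmt = db.createStatement();
            return stmt.executeQuery(
                "SELECT id FROM users WHERE name = '" + name + "'");
        }

        // The usual fix: a parameterized query keeps data out of the
        // SQL grammar entirely.
        static ResultSet findUser(Connection db, String name)
                throws SQLException {
            PreparedStatement stmt =
                db.prepareStatement("SELECT id FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }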
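And on testing, a security test often just inverts a functional one: instead of proving that good input works, it proves that hostile input fails safely. Here’s a hypothetical JUnit-style example written against the InputSanitizer sketch shown earlier.

    import static org.junit.Assert.assertThrows;
    import org.junit.Test;

    // Hypothetical security tests: functional tests prove valid input
    // works; these prove hostile input is rejected, not "handled".
    public class InputSanitizerSecurityTest {

        @Test
        public void rejectsSqlMetacharacters() {
            assertThrows(IllegalArgumentException.class,
                () -> InputSanitizer.requireValidUsername("x' OR '1'='1"));
        }

        @Test
        public void rejectsOversizedInput() {
            String huge = "A".repeat(10_000); // probe length handling
            assertThrows(IllegalArgumentException.class,
                () -> InputSanitizer.requireValidUsername(huge));
        }
    }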

Now, I certainly realize this is a lot of stuff, and I’ve barely scratched the surface here. This list follows the software security “touchpoints” defined by Gary McGraw in his book “Software Security: Building Security In,” so look there for more detail and tips on putting them into practice. You can also turn to the U.S. Department of Homeland Security site, Build Security In, for more detail on each of these topics.

And don’t even consider trying to do all of this at once. Take an iterative approach to adopting the practices above one small step at a time. Most importantly, do everything you can to eliminate the adversarial relationship with your developers. We all suffer in the long run from that.