“You are not a special snowflake.”

This is how Dr. Gary McGraw, CTO of the software security company Cigital and author of Software Security: Building Security In and Exploiting Online Games: Cheating Massively Distributed Systems, distills the findings from his Building Security In Maturity Model (BSIMM) and recently launched BSIMM2 projects. Quick translation: measuring whether software meets quantifiable security levels is applicable to all software, regardless of the vertical, industry, or purpose it was written for. Although each firm’s process is unique, the measurement of a software security initiative is not.

Measurements are what we use to determine how well we’re doing and to gauge improvement (or decline) over time. They are particularly helpful when assessing the relative effectiveness of different methods. Consider a runner who wants to improve her one-mile record. A quick read through runners’ magazines and websites turns up a variety of training methodologies: splits and sprints, ultra-long-distance training, and high-altitude hill running. Yet the way effectiveness is measured stays the same. One method may work better, or worse, for a specific runner, but it is the stopwatch that quantifies improvement no matter which training method she chooses.

BSIMM is one way organizations can find a “stopwatch” for software security assessment. It is a measurement tool that organizations and companies can use to understand how their software security efforts compare to other software written in-house, to software from other organizations in the same vertical, and to the entire portfolio of applications deployed by the organizations taking part in the BSIMM project.

Model, not methodology

With the plethora of software development methodologies available, it’s not surprising that some people mistake BSIMM for yet another methodology among the many. But BSIMM is not a software development lifecycle methodology like Cigital’s Touchpoints, the Microsoft SDL, IBM Rational, or OWASP’s Comprehensive, Lightweight Application Security Process (CLASP). BSIMM is a descriptive model that observes data associated with software development and security, and records that information in as objective a manner as possible.

Keeping the model objective and reporting observations means staying away from many of the judgment calls and “Father Knows Best”-style prescriptions found in best practices documents. That doesn’t mean organizations don’t need to consider and define their own best practices or recommended procedures as part of their overall software development program. In fact, having internal and external prescriptive guidance is a necessary aspect of software compliance, and is described in one of the 109 activities in the BSIMM2 model itself.

BSIMM complements prescriptive guidance. It is the data-driven, observational “stopwatch” that can help companies understand whether their “time” (in software security parlance, their risk management and exposure windows) is improving or declining, and how they stack up against their peers. The authors and participants of BSIMM observe and report on the characteristics common to organizations that have successfully deployed software security initiatives, and use those data as the basis for measurability and comparative assessment. To date, the BSIMM2 project has aggregated software security measurement data from over 30 software security initiatives, with over ten more underway. The organizations providing the data include: Adobe, Aon, Bank of America, Capital One, The Depository Trust & Clearing Corporation (DTCC), EMC, Google, Intel, Intuit, Microsoft, Nokia, QUALCOMM, Sallie Mae, Standard Life, SWIFT, Symantec, Telecom Italia, Thomson Reuters, VMware, and Wells Fargo.

Diving a little deeper into BSIMM’s measurements, the authors have divided the measurement points for the 109 activities into 12 core practices:

1.    Strategy and Metrics

2.    Compliance and Policy

3.    Training

4.    Attack Models

5.    Security Features and Design

6.    Standards and Requirements

7.    Architecture Analysis

8.    Code Review

9.    Security Testing

10.  Penetration Testing

11.  Software Environment

12.  Configuration Management and Vulnerability Management

Using these practice areas as measurement aggregators, the authors can graph average maturity levels as shown in the spider graph below:

[Spider graph: average maturity levels across the 12 BSIMM2 practices]

Source: BSIMM2, May 2010
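
To make the aggregation concrete, here is a minimal sketch, in Java, of how per-activity observations might be rolled up into the per-practice averages a spider graph like this plots. The Observation record, practice names, and scores below are hypothetical illustrations, not part of the BSIMM model or its data.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch: roll per-activity observations up into per-practice
// maturity averages, the kind of numbers a spider graph plots.
// Practice names and scores are illustrative, not real BSIMM data.
public class MaturityRollup {

    // Each observed activity is recorded against a practice, along with
    // the maturity level (1-3) at which that activity sits in the model.
    record Observation(String practice, int maturityLevel) {}

    static Map<String, Double> averageByPractice(List<Observation> obs) {
        Map<String, double[]> acc = new TreeMap<>(); // practice -> {sum, count}
        for (Observation o : obs) {
            double[] a = acc.computeIfAbsent(o.practice(), k -> new double[2]);
            a[0] += o.maturityLevel();
            a[1] += 1;
        }
        Map<String, Double> avg = new TreeMap<>();
        acc.forEach((p, a) -> avg.put(p, a[0] / a[1]));
        return avg;
    }

    public static void main(String[] args) {
        List<Observation> observed = List.of(
            new Observation("Code Review", 1),
            new Observation("Code Review", 2),
            new Observation("Penetration Testing", 1),
            new Observation("Training", 1),
            new Observation("Training", 3));
        // Prints: {Code Review=1.5, Penetration Testing=1.0, Training=2.0}
        System.out.println(averageByPractice(observed));
    }
}
```

The point worth noticing is that the roll-up is plain arithmetic over what was observed; in keeping with BSIMM’s descriptive nature, nothing in it prescribes what an organization should do.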

In addition to graphing (or “timing”) maturity by practice, BSIMM2 also provides 109 maturity timers, or software measurements, for clock points such as identification gates, known PII obligations, and awareness training.

Returning to the “special snowflake” remark, one of the most interesting observations by the BSIMM authors is that SDLC milestones and clock points are similar across verticals.

The cross-vertical consistency of the milestones shows that a measurement tool like BSIMM can be effective regardless of organizational size or the software application’s purpose. But consistency in software security across enterprises and purposes may sound shocking to some professionals. Don’t the requirements and the approach change based on business unit, regulatory oversight, and purpose? Well, yes and no. Some of the requirements may change, but some, as shown by real BSIMM data, may not.

Consider language itself. Writing a technically oriented article for an online security site is different from writing the script for Stephen Colbert’s “The Word” segment, but the basic rules of English grammar apply in both instances; otherwise neither this article nor Colbert’s jokes would make sense. To put it another way, there isn’t a Java for Healthcare, a Java for Financial Services, and a Retail-Only Java. While the language gives Java developers plenty of flexibility to write many disparate applications, there are some hard and fast rules about writing Java code that don’t change no matter who is writing the application or why.

Key lessons

What, then, does BSIMM2 tell us about what successful organizations are doing well in the software security and deployment space? First and foremost, every one of the successful organizations has a software security group (SSG) in place. This group doesn’t do double duty, time-slicing between security and development; it is a dedicated team that, among BSIMM participants, has been measured at roughly 1% of the total development team size. The similarities among SSGs only go so far, however: some are centralized while others are highly distributed, and some work closely on policy and strategy while others focus more effort on penetration testing and code review. While it’s easy to get caught up in the differences, the critical point is that having an SSG was universal among organizations with successful software security initiatives. McGraw expands on the concept of the SSG and the benefits of having one in the article You Really Need an SSG.

Another key take-away from BSIMM2 is the maturity times (or levels) reported within the 12 practice areas. By reviewing the maturity levels, organizations can assess where they stand with their own software security maturity and determine how balanced their approach is relative to others. For example, in the area of Code Review (CR) there are three maturity levels, paraphrased below:

1.    Performs code review

2.    Enforces standards through mandatory automated code review

3.    Runs automated code review with customized (tailored) rules

Each level has sub-activities, or milestones, that describe what occurs at BSIMM participant organizations operating at that level. When using the BSIMM2 document, keep in mind that activities at one maturity level may still be appropriate at higher levels. In Code Review level 1, both automated and manual review are noted. Manual review does not reappear in the higher maturity levels, but it remains a critical part of the code review process even after an organization has advanced to level 3.
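
To give a feel for what a level 3 “customized (tailored) rule” might look like, here is a minimal sketch in Java: a small scanner that flags calls an organization’s secure coding standard has banned. The banned-API list, the src directory layout, and the rule patterns are all hypothetical; a real initiative would typically encode tailored rules in its static analysis engine rather than a standalone script.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.regex.Pattern;

// Minimal sketch of a "tailored rule": flag calls to APIs that this
// (hypothetical) organization has banned in its secure coding standard.
public class BannedApiCheck {

    // Hypothetical internal rules: these calls are forbidden in our code base.
    private static final List<Pattern> BANNED = List.of(
        Pattern.compile("Runtime\\.getRuntime\\(\\)\\.exec"),        // command injection risk
        Pattern.compile("MessageDigest\\.getInstance\\(\"MD5\"\\)")); // weak hash algorithm

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : "src");
        try (var files = Files.walk(root)) {
            files.filter(p -> p.toString().endsWith(".java"))
                 .forEach(BannedApiCheck::scan);
        }
    }

    // Report the file and line number of every banned-API match.
    private static void scan(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            for (int i = 0; i < lines.size(); i++) {
                for (Pattern rule : BANNED) {
                    if (rule.matcher(lines.get(i)).find()) {
                        System.out.printf("%s:%d violates rule %s%n",
                                file, i + 1, rule.pattern());
                    }
                }
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}
```

Run against a source tree (for example, java BannedApiCheck src), it prints the file and line of each violation. That is the essence of a tailored rule: automated enforcement of a standard that is specific to the organization rather than generic.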

Summary

Although each software development effort has elements of uniqueness, there are also useful commonalities that apply to all software projects. BSIMM2 provides data-driven measurements of these commonalities, allowing organizations to objectively assess (or time) their software security maturity against both their peers and themselves. The model and research also provide useful insights into specific tasks and milestones that organizations can implement to improve overall maturity. Is BSIMM a replacement for everything else that’s being done in your SDLC? Absolutely not! But it is an excellent way to measure where you are and whether you’re improving.

Diana Kelley is a partner at SecurityCurve and frequent contributor to eSecurityPlanet.com.