Software Bugs: To Disclose or Not to Disclose

It's the age-old battle of security: to disclose or not to disclose software defects.

The proverbial pendulum of opinion has been swinging back and forth on this issue for decades, and it's not likely to stop any time soon. The issue reappeared just recently when an ISS employee was prohibited from speaking at a conference on the topic of a security vulnerability in Cisco's IOS operating system.

Here's my take on it...

First off, it's not a simple yes or no issue. There are different shades of gray here, folks. At the two extremes we have no disclosure and 'spontaneous disclosure'. Neither of these is even worthy of serious consideration in any practical sense, since neither produces any sort of positive result.

The litmus test of positive results that I've always used over the years is this: Does publicizing the details of this vulnerability make the problem smaller or bigger? We're talking big picture now.

It's been my experience that not releasing information on a vulnerability invariably results in a larger problem than the one we started with. This is primarily because the people who need to know about the vulnerability most -- the end users and system administrators -- aren't armed with the appropriate information to make informed decisions on when and how to update their systems.

In my book, that is unconscionable.

At the other extreme, it's also been my experience that spontaneous disclosure -- releasing everything about a vulnerability the moment it's discovered -- also results in a larger problem than the one we started with. The end users and system administrators often don't have practical solutions or workarounds (for instance, turning off email is almost never an acceptable business solution). Similarly, the product vendors are forced to slap together a quick patch that may or may not address the root cause (no pun intended) of the problem. We'll delve into this further in a moment...

So, both of these options are non-starters. If we accept these arguments, then it becomes a question of how we release information and what information we release. That's where my opinion differs from that of a lot of the practitioners out there.

There are a few published and ad hoc processes for responsible disclosure of vulnerability information. My biggest gripe with them is that they don't take into account the software engineering that needs to take place at the vendor level to appropriately address the problem. In particular, most call for a static, predetermined time period between notifying the product vendor and the public release of information about the vulnerability. That model is horribly flawed.

Setting a Deadline

My rationale is as follows. At the topmost level, software security defects fall into two general categories: design flaws and implementation bugs. It's the implementation bugs that we hear the most about in popular literature. They include buffer overflows, SQL injection, cross-site scripting, and the like. The most common cause of these problems is inadequate filtering of user data inputs.
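To make that concrete, here is a minimal, hypothetical C sketch (the function name and query are invented for illustration, not taken from any real product) of how inadequate filtering of user input turns into a SQL injection bug:

    #include <stdio.h>

    /* Hypothetical sketch of inadequate input filtering: raw user data is
       pasted directly into a SQL query string. If username arrives as
       "x' OR '1'='1", the WHERE clause matches every row -- classic SQL
       injection. Validating the input (or using parameterized queries)
       closes the hole. */
    void build_query(char *out, size_t outlen, const char *username) {
        snprintf(out, outlen,
                 "SELECT * FROM users WHERE name = '%s'", username);
    }

    int main(void) {
        char query[256];
        build_query(query, sizeof query, "x' OR '1'='1");
        printf("%s\n", query);   /* prints the query with the injected clause */
        return 0;
    }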

Many, but not all, implementation bugs can be fixed quite simply and easily. A poorly constructed string manipulation function in C, for example, can be made secure in just a line or two of remedial coding.
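As a minimal sketch (the function names below are invented, not from any particular product), the unbounded strcpy is exactly that kind of bug, and the bounded snprintf is the line or two of remedial coding:

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: unbounded copy of attacker-controlled input into a
       fixed-size stack buffer -- a textbook buffer overflow. */
    void greet_unsafe(const char *name) {
        char buf[32];
        strcpy(buf, name);                       /* overruns buf when name is 32+ bytes */
        printf("Hello, %s\n", buf);
    }

    /* The remedial fix: bound the copy to the size of the destination. */
    void greet_safe(const char *name) {
        char buf[32];
        snprintf(buf, sizeof buf, "%s", name);   /* truncates rather than overruns */
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("world");
        return 0;
    }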

On the other hand, design flaws can be much more pernicious. The fix to a design flaw by its very definition requires the developer to change the application's design. A design change can have far-reaching ramifications. Think basic software engineering principles here.

To responsibly make a security change to an application's design requires the same degree of engineering scrutiny, testing, etc., that goes into designing the application in the first place, lest even nastier flaws (and perhaps even implementation bugs) appear as a result.

It all comes down to this... Some software defects can be fixed quickly and easily, while others require a great deal of engineering effort to be properly fixed. There is a broad spectrum of effort levels required to fix any particular vulnerability.

So, you see, setting an arbitrary time period for disclosing a vulnerability is not responsible at all.

Instead, the period of time should vary depending on the nature of the vulnerability itself. Forcing some fixed time period into the process is like holding the proverbial gun to the head of the product vendor, and that can't possibly result in the sorts of patches we all want for our systems.

Even if you work with a product vendor to negotiate an appropriate amount of time for disclosing a vulnerability, the next issue in disclosing responsibly is what information to disclose. Most CERT-like organizations have formats for vulnerability advisories that do a good job here.

The most controversial topic here is how far to go in disclosing. For example, is it reasonable to disclose an example of how to exploit the vulnerability? Again, I look to my litmus test, and I err on the conservative side: it makes the overall problem bigger to disclose example exploit code. Now, I realize that a lot of you are hissing and spitting at me right now, and I can accept that criticism. My opinion is unchanged by it, however.

There are good ways and bad ways of disclosing vulnerability information. If we all share the goal of having secure applications that have had their known defects fixed, then it's in our collective best interest to allow the product vendor the necessary time to properly engineer and test security patches. Otherwise, we're condemned to an existence of getting security patches that have been developed under duress, cause other problems, or just plain old don't work right.

Come to think of it, that is a pretty accurate description of many of the patches that we see from all too many of our software product vendors, and that's no coincidence.
