
Fuzzing at 18: Still Crude, Still Effective


For almost two decades, security professionals have known a simple fact about operating systems and the applications that run on them. You can jam invalid data and string input into an application, and in some cases, that "garbage" will cause a system to crash, hang or enable a hacker to execute code.

Today, that method of attack is becoming increasingly popular as a critical component for security researchers and application developers. Its new moniker: "fuzz testing," or "fuzzing" for short.

"I describe fuzz testing as the stone axes and bear skins version of testing," said Bart Miller, a professor of computer science at the University of Wisconsin. "You throw a bunch of junk at a program and see if it explodes. It's not a form of testing that can substitute for really thorough case-wide testing, but it's fast and really easy, and as part of your testing toolkit, you should really always do it. It's a primitive form of testing: crude but effective."

Miller knows of what he speaks. It was in 1989 that he first coined the term in a research paper, following a grueling night spent trying to connect to his campus's Unix system over a dialup modem connection.

"I was getting noise on my modem, because the modem didn't have error correction, but what I noticed is that the line noise garbage was getting into standard Unix programs and causing them to crash. That really surprised me," Miller told InternetNews.com. "You figure that the standard utilities that you use on Unix everyday must be robust, yet they weren't."

Miller then set out to study the phenomenon with the help of his graduate students. He gave them the task of writing something that would generate a lot of random junk with appropriate variations, feed it to as many different standard Unix programs on as many platforms as possible, and see what exploded and what didn't.

"While I was writing this, I had to come up with some kind of name for it," Miller said. "So without looking too widely, I came up with the name 'fuzz,' and it just fit."
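Miller's original tool long predates modern scripting conveniences, but the loop he describes is simple enough to sketch in a few lines of Python. The target command and buffer size below are illustrative assumptions, not his original setup:

```python
import random
import subprocess

def random_junk(max_len=1024):
    """Generate a buffer of random bytes -- the 'junk' Miller describes."""
    length = random.randint(1, max_len)
    return bytes(random.randint(0, 255) for _ in range(length))

def fuzz_once(target_cmd):
    """Feed one random input to a program and report whether it crashed.

    On POSIX, a negative return code means the process died on a signal
    (e.g. -11 for SIGSEGV), which is the 'explosion' fuzzing looks for.
    """
    data = random_junk()
    proc = subprocess.run(
        target_cmd, input=data,
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        timeout=5,
    )
    return proc.returncode < 0

if __name__ == "__main__":
    # 'cat' stands in for whatever utility is under test.
    crashes = sum(fuzz_once(["cat"]) for _ in range(20))
    print(f"{crashes} crashes out of 20 runs")
```

In practice a harness like this would also log each input that caused a crash, so the failure can be reproduced and debugged later.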

18 Years On, Fuzzing Still Works

One might be tempted to think that after having been known for 18 years, fuzzing wouldn't be an effective method for breaking applications these days. But it turns out that fuzzing in 2007 is as effective as ever.

"With so many different 'moving parts' [in] software on computers today, it makes sense that fuzzing is used by so many with such good success," Ken Dunham, director of global response at iSIGHT Partners told InternetNews.com.

"Take for example the WMF [Windows Metafile] vulnerability," he said. "Some hackers 'fuzz' WMF files and get an unexpected result. They dive into it further and the next thing you know they have a successful silent execution of code situation on their hands."

Even among vendors whose core business revolves around more elegant forms of code-quality analysis, fuzzing is seen as an approach that remains useful.

Ben Chelf, CTO at code analysis vendor Coverity, said a new generation of testers is rediscovering fuzzing-based random input response analysis and is improving on its techniques and tools.

"From our experience with the defects that we've discovered in analyzing more than a billion lines of code statically, we know that programs tend to be well tested on their common inputs and less well tested on uncommon inputs," Chelf told InternetNews.com. "Fuzzing can provide a subset of these uncommon inputs and hence, triggers program misbehavior that other testing methodologies might miss."

Brian Chess, founder of code analysis vendor Fortify, added that fuzzing is good at finding relatively "shallow" bugs -- bugs that don't require establishing a complex program state. As it turns out, a lot of bugs, especially ones in input handling code, are relatively shallow.

Fuzzing Tools

Since Miller's first fuzzing tool, countless tools have been developed and used by security professionals. According to TippingPoint security researcher Pedram Amini, fuzzing tools can be broken into two categories, general purpose and target-specific.

Amini and his team at TippingPoint recently came up with their own fuzzing framework called Sulley, which can be customized to audit a wide range of targets. On the other hand, to better secure its own browser, Mozilla developed a fuzz testing tool called jsfunfuzz that is targeted at testing JavaScript inputs.

Browser vendors certainly have a particular need for using fuzz testing. Last year, security researcher H.D. Moore filled his Month of Browser Bugs effort with a number of bugs that came out of his own fuzzing efforts.

In addition to simply trying to crash a program, Amini noted that fuzz testing could help researchers find all kinds of problems: memory leaks, performance degradation, denial of service, format string bugs and integer handling issues.

"Fuzz testing can expose the same classes of risk that a human analyst can -- the trick is detecting the discoveries," Amini told InternetNews.com.
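The "detecting the discoveries" step can start with something as simple as inspecting how the fuzzed process exited. Below is a hypothetical triage helper; real harnesses use debuggers and instrumentation to do far more, and the mapping from signals to bug classes is a rough assumption, not Amini's tooling:

```python
import signal

def triage(returncode, elapsed, timeout=5.0):
    """Map a fuzzed process's exit status to a rough bug class.

    A negative returncode on POSIX means the process died on a signal;
    hitting the timeout suggests a hang.
    """
    if elapsed >= timeout:
        return "hang / possible denial of service"
    if returncode < 0:
        sig = -returncode
        if sig == signal.SIGSEGV:
            return "memory corruption (possible code execution)"
        if sig in (signal.SIGABRT, signal.SIGBUS):
            return "memory handling bug"
        if sig == signal.SIGFPE:
            return "integer handling issue (e.g. divide by zero)"
        return f"killed by signal {sig}"
    return "no crash observed"
```

Subtler findings such as memory leaks or gradual performance degradation need instrumentation beyond exit codes, which is exactly why Amini calls detection "the trick."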

What Developers Should Do

While there are a number of reasons why fuzzing works in the first place, a large cause is that software developers simply introduce errors. Consequently, software can be better protected against the kinds of vulnerabilities that fuzzing exposes if programmers just code properly.

"In more general terms, the No. 1 security mistake that programmers make is to cut corners -- or altogether omit -- input validation," Fortify's Chess told InternetNews.com. "Good input validation makes fuzzing go limp."
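Chess's point can be illustrated with a small hypothetical parser that validates input against an allowlist pattern before touching it; the `NAME:AGE` record format and regex below are invented for illustration:

```python
import re

# Allowlist for a hypothetical "NAME:AGE" record: an alphanumeric name
# and a small integer. Anything fuzzed junk produces that doesn't match
# is rejected up front, before the parsing logic ever sees it.
RECORD_RE = re.compile(r"[A-Za-z0-9]{1,32}:\d{1,3}")

def parse_record(line: str):
    """Validate first, parse second; raise on anything malformed."""
    if not RECORD_RE.fullmatch(line):
        raise ValueError(f"rejected malformed record: {line!r}")
    name, age = line.split(":")
    return name, int(age)
```

The design choice is to accept only what is known to be good (an allowlist) rather than trying to enumerate bad inputs, which is what leaves random fuzzed data with nowhere to go.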

Yet even with careful input validation, fuzzing may still turn up additional errors. TippingPoint's Amini said that getting zero results from a fuzzing test is a tall order, and likely unachievable.

"Developers can certainly cover a lot of ground, however, by applying fuzz testing on their own code during the development lifecycle," Amini said. "Fuzz testing does not require as specialized a skill set as line-by-line source or assembly auditing, which makes it an attractive testing methodology to involve developers in."

The More Things Change, The More They Stay The Same

Taking the long view over the nearly two decades since he developed the technique, Miller said he has seen some disturbing trends. Even though his research at the University of Wisconsin has been available to the public developer community, the fuzzing studies he has done over the years have found worsening problems in software coding.

Miller said that in 1995, he could hang or crash a quarter of X Window applications on Unix. In 2000, he could hang or crash at least 45 percent of the Microsoft Windows applications he tested. In 2006, he tested Mac OS X and found that he could crash over 70 percent of the items studied.

The reason for these increasing problems? A simple matter of economics, according to Miller.

"There is a constant notion that you have to release software that makes people want to buy the next version," Miller said. "People assume that bug fixes are things you get for free as updates and features are things you buy. So long as we have that business model, people will only make software as reliable as the market demands and no more."

Fortunately for developers, fuzzing remains a useful tool to combat bugs even 18 years after its advent. On that fateful night back in 1989, Miller was using technology that many today would consider Stone Age, yet the techniques he developed still work.

"The way we're programming today isn't all that different from the way we did 25 years ago. We're still using languages that are error-prone," Miller said. "We're writing software that is more complex and the tools are only incrementally better. I don't know if we're overall keeping up or if we're slowly drifting behind."

This article was first published on InternetNews.com.
