Fuzzing at 18: Still Crude, Still Effective
Throw garbage at an application and what happens? In plenty of cases it'll crash.
Today, that method of attack has become an increasingly popular and critical tool for security researchers and application developers alike. Its new moniker: "fuzz testing," or "fuzzing" for short.
"I describe fuzz testing as the stone axes and bear skins version of testing," said Bart Miller, a professor of computer science at the University of Wisconsin. "You throw a bunch of junk at a program and see if it explodes. It's not a form of testing that can substitute for really thorough case-wide testing, but it's fast and really easy, and as part of your testing toolkit, you should really always do it. It's a primitive form of testing: crude but effective."
Miller knows of what he speaks. It was in 1989 that he first coined the term in a research paper, following a grueling night spent trying to connect to his campus's Unix system over a dialup modem connection.
"I was getting noise on my modem, because the modem didn't have error correction, but what I noticed is that the line noise garbage was getting into standard Unix programs and causing them to crash. That really surprised me," Miller told InternetNews.com. "You figure that the standard utilities that you use on Unix every day must be robust, yet they weren't."
Miller then set out to study the phenomenon with the help of his graduate students. He gave them the task of writing a tool that would generate large amounts of random junk, with appropriate variations, feed it to as many different standard Unix programs on as many platforms as possible, and see what exploded and what didn't.
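The core of that experiment is simple enough to sketch in a few lines. The following is a minimal illustration of the idea, not Miller's original fuzz tool; the target program (`cat`), the blob size, and the run count are arbitrary choices for the sketch:

```python
import random
import subprocess

def random_junk(max_len=1024):
    """Generate a blob of random bytes -- Miller-style 'fuzz'."""
    length = random.randint(1, max_len)
    return bytes(random.randint(0, 255) for _ in range(length))

def fuzz_once(argv):
    """Feed one blob of random junk to a program on stdin.

    Returns the process's exit code; on Unix, a negative value means
    the process was killed by a signal (e.g. -11 for SIGSEGV, a crash).
    """
    proc = subprocess.run(argv, input=random_junk(),
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    return proc.returncode

if __name__ == "__main__":
    # Hammer the target a hundred times and count signal-induced deaths.
    crashes = sum(1 for _ in range(100) if fuzz_once(["cat"]) < 0)
    print(f"{crashes} crashes in 100 runs")
```

A real harness would also log each crashing input so the failure can be reproduced and debugged, which is what made the original study's results actionable.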
"While I was writing this, I had to come up with some kind of name for it," Miller said. "So without looking too widely, I came up with the name 'fuzz,' and it just fit."
18 Years On, Fuzzing Still Works
One might be tempted to think that after 18 years, fuzzing would no longer be an effective method for breaking applications. But it turns out that fuzzing in 2007 is as effective as ever.
"With so many different 'moving parts' [in] software on computers today, it makes sense that fuzzing is used by so many with such good success," Ken Dunham, director of global response at iSIGHT Partners, told InternetNews.com.
"Take for example the WMF [Windows Metafile] vulnerability," he said. "Some hackers 'fuzz' WMF files and get an unexpected result. They dive into it further and the next thing you know they have a successful silent execution of code situation on their hands."
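The WMF scenario Dunham describes is mutation fuzzing: take a valid file, corrupt some of its bytes at random, and hand the result to the parser while watching for crashes. A generic sketch of that idea follows; the sample filenames and the mutation rate are hypothetical choices, not details from the article:

```python
import random

def mutate(data: bytes, rate=0.01) -> bytes:
    """Return a copy of data with roughly `rate` of its bytes
    replaced by random values."""
    buf = bytearray(data)
    for i in range(len(buf)):
        if random.random() < rate:
            buf[i] = random.randint(0, 255)
    return bytes(buf)

# Usage sketch: mutate a known-good sample, then open the result in the
# target application and watch for a crash or other unexpected behavior.
# seed = open("sample.wmf", "rb").read()          # hypothetical sample
# open("mutated.wmf", "wb").write(mutate(seed))
```

Starting from a valid file keeps most of the format's structure intact, so the mutated input gets past the parser's early sanity checks and exercises deeper code paths than pure random junk would.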
Even among vendors whose core business revolves around more elegant forms of code-quality analysis, fuzzing is seen as an approach that remains useful.
Ben Chelf, CTO at code analysis vendor Coverity, said a new generation of testers is rediscovering fuzzing's random-input approach and improving on its techniques and tools.
"From our experience with the defects that we've discovered in analyzing more than a billion lines of code statically, we know that programs tend to be well tested on their common inputs and less well tested on uncommon inputs," Chelf told InternetNews.com. "Fuzzing can provide a subset of these uncommon inputs and hence trigger program misbehavior that other testing methodologies might miss."
Brian Chess, founder of code analysis vendor Fortify, added that fuzzing is good at finding relatively "shallow" bugs -- bugs that don't require establishing a complex program state. As it turns out, a lot of bugs, especially ones in input handling code, are relatively shallow.