Mozilla Security Team Stresses Testing
Developers can learn a great deal from how Mozilla secures its software, according to Johnathan Nightingale of Mozilla's security team, who argues that testing is key -- and that simply counting bugs isn't a good measure for security.
"We've learned a lot of lessons about what works and what doesn't over the years doing this, and we've built a bunch of processes and tools to help us," Nightingale, who bears the title of Human Shield at Mozilla, told InternetNews.com.
In his presentation, Nightingale noted that Mozilla's security group currently numbers about 80 people, of which at least a third are not Mozilla employees. The total team is made up of a core security team, development leads, quality assurance, individual developers and management.
A big part of how Mozilla secures its software is frequent, regular testing with a number of different techniques and tools. According to Nightingale, Mozilla runs 90,000 automated tests, using eight different test frameworks (called "harnesses") on four platforms, at least 20 times a day.
For Mozilla, Nightingale emphasized that nothing lands in its released software without tests. Still, he noted that often for developers the hardest part about testing is getting started.
"You need to make the investment up front, get religion about it, and start requiring it from your developers," Nightingale said in his presentation.
Mozilla has a number of open source tools that it uses to test the browser. Among them are the Tinderbox system for build- and test-tracking, Litmus -- Mozilla's homegrown system for human testing -- and the Bugzilla bug-tracking system.
Mozilla has been criticized by some security vendors as having more bugs than other browser vendors.
But Nightingale argued that the bug count is the industry's worst security metric of all. In his view, focusing on bug counting creates perverse incentives for security. Instead, Nightingale suggests more meaningful metrics: the number of days users are exposed to risk, and the average time it takes to deploy fixes.
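The metrics Nightingale favors can be computed directly from disclosure and fix dates. The following is a minimal sketch of that idea; the vulnerability records and function names are hypothetical illustrations, not Mozilla's actual data or tooling.

```python
# Sketch: measuring "days of risk" -- the window between a vulnerability's
# public disclosure and the deployment of its fix -- rather than counting bugs.
# The records below are hypothetical examples for illustration only.
from datetime import date

def days_of_risk(disclosed: date, fixed: date) -> int:
    """Days users were exposed between disclosure and fix deployment."""
    return (fixed - disclosed).days

# Hypothetical records: (disclosure date, fix-deployed date)
vulns = [
    (date(2008, 1, 10), date(2008, 1, 14)),  # fixed in 4 days
    (date(2008, 2, 3), date(2008, 2, 5)),    # fixed in 2 days
    (date(2008, 3, 20), date(2008, 3, 29)),  # fixed in 9 days
]

exposures = [days_of_risk(d, f) for d, f in vulns]
total_days_at_risk = sum(exposures)                    # total exposure window
avg_time_to_fix = total_days_at_risk / len(exposures)  # average days to deploy a fix
```

By this measure, a project that files many bugs but fixes them within days can score far better than one with few reported bugs that linger unpatched, which is exactly the incentive shift Nightingale argues for.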
To Nightingale, good security is a feedback loop, where at every step of the process, if something breaks or goes wrong, the question 'Why?' must be asked -- and answered.
"At the end of the day, we're a non-profit project trying to help build a better Internet," Nightingale told InternetNews.com. "If giving away those processes and tools helps other projects keep their users safe, that's great news for us."
"If we can act as a proof point in organizations that are resistant to some of the stronger steps we've taken, like broad automated testing coverage, or mandatory code review, then we're delighted to do that, too."
Article courtesy of InternetNews.com.