When businesses are plunged into chaos by service outages or security breaches, poor-quality software is often the culprit.

So two vital questions to ask are these: How can you gauge the overall code quality of the applications and mobile apps your organization develops in-house? And how can you ensure that, as the codebase changes over time, quality improves rather than deteriorates?

Getting the answers to these questions is not straightforward. While it's a truism that all applications contain code defects, some of which can lead to serious security vulnerabilities, those vulnerabilities are not easy to spot.

Often they are only discovered by chance, long after application development has taken place - as with the Goto Fail, Shellshock and Poodle bugs. In other cases they may be introduced in an update long after the original software was released, as was the case with the Heartbleed vulnerability, written into the OpenSSL encryption library unwittingly by an unfortunate German software developer.

Static Analysis Pros and Cons

One way of getting an idea of the quality of the software you develop in-house is to run the source code through a static analysis system. This can spot and highlight a range of defects such as lack of input validation. (Failure to validate input can lead to unexpected data being stored in variables, which can in turn lead to buffer overflow vulnerabilities.)
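To make that concrete, here is a minimal, hypothetical Python sketch - not taken from any particular tool's documentation - of the kind of unvalidated-input defect a static analyzer typically flags. (In Python the risk shows up as injection rather than a C-style buffer overflow; the names and queries are invented for the example.)

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Defect: the caller-supplied value is neither validated nor escaped,
    # so crafted input can alter the query (SQL injection). Static
    # analyzers routinely flag string-built queries like this one.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '%s'" % username
    ).fetchone()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Validate the input and let the driver bind it as a parameter,
    # so the value can never be interpreted as SQL.
    if not username.isalnum() or len(username) > 64:
        raise ValueError("invalid username")
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```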

You may find the results alarming, but a large number of reported defects is normal. Code testing firm Coverity's most recent annual report on its scans of open source and proprietary software found typical defect densities of around 0.65 defects per 1,000 lines of code.
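To put that figure in context, a quick back-of-the-envelope calculation - with an invented codebase size - shows what a density of 0.65 defects per 1,000 lines implies:

```python
defect_density = 0.65        # defects per 1,000 lines of code (Coverity's reported average)
lines_of_code = 2_000_000    # hypothetical two-million-line enterprise application

expected_defects = defect_density * lines_of_code / 1_000
print(f"Expected defects: {expected_defects:.0f}")  # roughly 1,300
```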

One problem with static analysis tools, though, is that they often yield too much information. That means your developers can be swamped with irrelevant defects or even false positives, making it hard to spot and fix the most important ones.

An alternative approach is to have your code audited by a suitably qualified team of outside experts. This sort of exercise can be valuable - it is the approach that was taken with TrueCrypt to establish whether its encryption software could be trusted.

A disadvantage of this approach is that code auditing is expensive and can take months or years to complete. And while a code audit may uncover critical defects that need to be addressed immediately, it only reflects the code as it stood at the time of the audit. Subsequent updates may introduce new and unforeseen security vulnerabilities into the code. Static analysis, by contrast, can be carried out every time changes are made to a codebase.

Measuring Software Code Quality

A possible solution is to use the results of static analysis to provide a general measurement of the quality of code produced in your application development initiatives. This can be done by producing a score related to the number of defects in an application - a grade point average, if you like - and then monitoring that score over time to ensure that updates improve the grade rather than making things worse.

That's the approach taken by CAST, a software analysis and measurement company with headquarters in New York and Paris. The company carries out static analysis on code, but concentrates on the statistical analysis of its findings to get an idea of code quality.

"What we do is make stuff measurable," said Lev Lesokhin, head of strategy at the company. "Release to release, if there is a change in the health factor (or grade point average) we can tie it back to a specific change in the software."

There's another way that automated code analysis can be used to improve quality and, specifically, security. Lesokhin said that government agencies such as the Department of Defense have concluded that most systems can be penetrated using novice hacking techniques, so companies need to build their applications on the assumption that attackers will get inside the network.

"The focus has to be around what you can do to protect yourself against someone in your network. Info sec and app sec are going to be key," he said.

How can code analysis help? It can enable companies to measure their code compliance to secure architectures, he explained.

"Any time that development happens, you need to make sure that, for example, you don't have direct calls to your data; you only go through a secure Web service," he said. "So you can set up a custom rule (in your code analysis system) and then check your code to make sure that nothing bypasses that."

Application Security and Quality

CAST offers security metrics and also measures four other quality factors in code: robustness, performance, changeability (how flexible and scalable the software is) and transferability (which looks at factors like naming conventions within the code, to give an idea of how easy it would be for another developer to take over the code).

These factors, together with security, can be used to build a total quality indication for a piece of software, which can be used in a variety of ways, Lesokhin explained. "You can use these to benchmark your entire portfolio of software, and you can put these measurements into contracts (with outside developers or software vendors)."

Doing so allows you to identify which applications in your portfolio need attention first, and also helps you ensure that any new software that you buy or have developed by contractors provides the level of quality that you specify. "That can be important if you beat down the rate card, because then you may not get the best quality coding guys working on your software," Lesokhin pointed out.
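The weighting CAST applies to its quality factors is its own. Purely to illustrate how per-factor scores might roll up into a single figure for ranking a portfolio, a naive weighted average could look like this (all numbers invented):

```python
# Illustrative roll-up of per-factor scores (0-4 scale) into one total
# quality indicator per application. The weights are made up for this
# example, not CAST's actual model.
WEIGHTS = {"security": 0.30, "robustness": 0.25, "performance": 0.15,
           "changeability": 0.15, "transferability": 0.15}

def total_quality(scores: dict) -> float:
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

portfolio = {
    "billing":  {"security": 3.1, "robustness": 3.4, "performance": 2.9,
                 "changeability": 2.5, "transferability": 2.2},
    "web_shop": {"security": 2.2, "robustness": 2.8, "performance": 3.3,
                 "changeability": 3.0, "transferability": 2.9},
}

# Rank the portfolio so the weakest application gets attention first.
for app, scores in sorted(portfolio.items(), key=lambda kv: total_quality(kv[1])):
    print(f"{app}: {total_quality(scores):.2f}")
```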

Using static analysis tools won't get you error-free code; they will inevitably find more defects than you can fix. But by using static analysis as a quality measurement tool, you can at least try to ensure that over time your software portfolio is getting more secure and the overall quality of your codebase is improving.

Paul Rubens has been covering enterprise technology for over 20 years. In that time he has written for leading UK and international publications including The Economist, The Times, Financial Times, the BBC, Computing and ServerWatch.