Software security - or application security, if you prefer - is (to no surprise, I'm sure) a significant ongoing research topic for us. Most recently I completed two documents on static software security analysis, which should be published within the next 90 days. In talking to users and vendors, many important aspects of static analysis practice came to light, and it is clear that the practice provides tremendous value in reducing software security flaws. Most important, of course, is that tools find those "holes you can drive a truck through" - that is, the ones that would be particularly embarrassing if they made it into production. And although one would expect these to be easy to find, they may not always be.
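To make that concrete, here is a minimal Java sketch - with entirely hypothetical class, method, and table names - of the kind of flaw virtually every static analysis tool is expected to flag: untrusted input concatenated into a SQL query, shown alongside the parameterized fix.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical lookup class illustrating the kind of flaw static analyzers flag.
public class UserLookup {

    // Vulnerable: untrusted input concatenated into SQL - classic injection,
    // the sort of "truck-sized hole" most tools should catch.
    public ResultSet findUserUnsafe(Connection conn, String username)
            throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Fixed: a parameterized query keeps the data separate from the SQL text.
    public ResultSet findUserSafe(Connection conn, String username)
            throws SQLException {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```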
There have been various studies of static analysis accuracy, in both the academic and commercial worlds. Not all are public, and for some of the public ones the detailed results are unfortunately not available. We also spoke with customers about their product evaluations, and even have one customer who performed an incredibly detailed test of several types of security testing tools and services. The results show non-trivial variability in accuracy, inconsistency in the relative performance between products, and a potential impact from code structure factors (such as complexity). One study pegs best-case tool performance at around 60% detection for critical flaws. And while this by no means diminishes the usefulness of static analysis, it may certainly complicate a customer's evaluation process. Many customers only perform a comparative analysis of tools run on enterprise code, and thus do not have a baseline to compare against - a method which may or may not actually provide the right decision-making information (but there is often no cost-effective alternative).
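As a rough illustration of what a baseline buys you, here is a small Java sketch - with made-up finding identifiers - that scores a tool's output against a benchmark whose flaws are known in advance, which is exactly the ground truth an enterprise-code-only comparison lacks.

```java
import java.util.Set;

// Minimal sketch (hypothetical data): scoring a tool's findings against a
// benchmark whose seeded flaws are known in advance.
public class ToolAccuracy {

    public static void main(String[] args) {
        // Flaw identifiers seeded in the benchmark (ground truth).
        Set<String> knownFlaws = Set.of("CWE-89:login.java:42",
                                        "CWE-79:search.jsp:17",
                                        "CWE-22:files.java:88");
        // Identifiers the tool actually reported.
        Set<String> reported = Set.of("CWE-89:login.java:42",
                                      "CWE-79:search.jsp:17",
                                      "CWE-476:cache.java:5"); // false positive

        long truePositives =
                reported.stream().filter(knownFlaws::contains).count();
        double detectionRate = (double) truePositives / knownFlaws.size();
        double precision = (double) truePositives / reported.size();

        System.out.printf("Detection rate: %.0f%%, precision: %.0f%%%n",
                detectionRate * 100, precision * 100);
    }
}
```

Without the known-flaw set, only the relative comparison between tools is possible - there is no way to say what fraction of real flaws any of them missed.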
A big question, of course, is how much accuracy really matters. After all, many who use static analysis and dynamic testing tools are finding large numbers of vulnerabilities already. Being able to work with the results to efficiently identify root causes and put effective fixes in place may well be more important to them than finding even more. Still, the results of the various studies should serve as a reminder that accuracy in the field may be less predictable than one would think, and that ongoing improvements (and more studies) are desirable.
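For illustration, here is a short Java sketch - using a hypothetical finding record, not any particular tool's output format - of that triage idea: grouping reported findings by a shared root-cause key, so one fix can close many results.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch (hypothetical record type): grouping findings by a
// shared root-cause key so one fix can close many reported instances.
public class FindingTriage {

    // A simplified finding: the flaw category and the code location where
    // the tainted data originates (often the real thing to fix).
    record Finding(String category, String sourceLocation) {}

    public static void main(String[] args) {
        List<Finding> findings = List.of(
                new Finding("SQL Injection",  "RequestParams.parse()"),
                new Finding("XSS",            "RequestParams.parse()"),
                new Finding("Path Traversal", "FileServlet.doGet()"));

        // Many findings, fewer root causes: group by the tainted source.
        Map<String, List<Finding>> byRootCause = findings.stream()
                .collect(Collectors.groupingBy(Finding::sourceLocation));

        byRootCause.forEach((source, list) ->
                System.out.println(source + " -> " + list.size() + " finding(s)"));
    }
}
```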
On a related note, the Catalyst workshop Application Security: Process, Tools, and Architecture is now available online at the Burton Group Institute web site. In this workshop I provide an overview of common application security failings and describe, at a mid-level of detail, the components of a software security program.