Two quarters ago, when I researched general SDLC security, and again this quarter while researching the static software analysis space, I found the lack of numbers disturbing. With so many organizations starting or running software security programs, and with the tools having been available for quite some time now, one would expect more than anecdotal evidence of their effectiveness. It’s not that nobody has run the numbers, but few results, if any, are both publicly available and sufficient to base decisions on. Part of the problem is that evaluating processes and tools is such a huge undertaking; another is that aggregated and averaged results only fit the average software portfolio and threat environment (if such a thing exists).
The good news, however, is that people are working on this. We are continuing to talk to customers about their programs, pain points, and future plans, and more research will likely follow. In the meantime, I will be following the next round of the NIST Static Analysis Tool Exposition and the ongoing work at OWASP and WASC, and I very much look forward to seeing results from Cigital’s BSIMM Begin survey project, which just opened for participation. With support from customers and vendors, we’ll surely get closer to figuring out how to create better software in better ways. If you know of, or are planning, other such evaluation projects and surveys, I would be very interested to hear about them.