Blogger: Pete Lindstrom
… or so says “Armchair Economist” and economics professor Steven Landsburg in his latest book with the same title. As a security professional, I can say unequivocally that I want to be as safe as possible!
In his book, Landsburg talks about the “communal stream” principle – the simple notion that everything we do affects everyone else in some way. In the sex example, the idea is to slow the spread of AIDS by increasing the incidence of sex among partners who are more likely to be uninfected. There are a number of other examples discussing leaf blowers, home security, and pollution that follow the same idea.
The communal stream principle is often expressed in another, more common way: we don’t live in a vacuum. It is up to security professionals, then, to determine the effects of various actions, both within our own sphere of influence and on the Internet at large.
And that brings us to the question: Does publishing more information about vulnerabilities make us safer? The impact of vulnerability discovery and disclosure takes on a whole new light when we consider the tradeoffs. Indeed, Landsburg’s subtitle is “The Unconventional Wisdom of Economics” because the results are often surprising to folks who have only superficially considered the consequences. And the Internet’s scalability only serves to amplify the effect. It’s like giving somebody in Melbourne, Australia strep throat from a loft in New York City.
The communal stream effects are not always apparent with vulnerability discovery and disclosure. If we keep in mind that risk is a function of threats, vulnerabilities, and consequences, then we can also see that if any of these is zero there is no risk. Let’s assume consequences are non-zero and that the vulnerabilities become greater than zero either when they are coded or when an enterprise deploys a vulnerable system. At this point, the only element keeping risk at bay would be a threat value of zero.
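This framing can be sketched in a few lines of code. A minimal toy model, assuming (as the paragraph above implies) that risk behaves like a simple product of the three elements; the function name and the numbers are illustrative, not a standard formula:

```python
# Toy model of risk as a function of threat, vulnerability, and
# consequence. Treating it as a product captures the key property
# described above: if any element is zero, risk is zero.

def risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Return aggregate risk; zero if any factor is zero."""
    return threat * vulnerability * consequence

# Vulnerable systems deployed, real consequences, but nobody knows
# about the flaw yet -- threat is zero, so risk is zero.
print(risk(threat=0, vulnerability=100, consequence=1.0))

# The moment threat becomes non-zero, the "risk engine" starts.
print(risk(threat=1, vulnerability=100, consequence=1.0))
```

Nothing here is meant to be a measurement; it simply makes the zero-factor argument concrete.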
Thus, the initial discovery of a new vulnerability does nothing to the vulnerable state of a system. Instead, it starts the risk engine by making threat non-zero. And since we have no infallible pre-cog system of determining motives (consider the similar notion of the insider threat), every new person that learns of the vulnerability increases the threat and corresponding risk. As much as we would like to know about new vulnerabilities, there is no escaping the impact of public disclosure – it is like throwing gasoline on the fire.
Although we know that discovery and disclosure of vulnerabilities increases the (communal stream) risk in the aggregate, it also creates a specific opportunity for protection – the patch. This is where things get interesting… and difficult. For every patch applied, the aggregate risk is reduced in proportion to the size of the threat – in this case, the number of actors who might exploit the vulnerability.
Since the goal of discovery/disclosure is to reduce risk but the immediate result is to increase it, the real question is: can you reduce risk to a point at or below where it was before the discovery and disclosure? Some would say “no,” because threat was zero by definition. However, it does seem reasonable to assign some level of threat-risk prior to public disclosure, simply because one can never be certain that somebody hasn’t already discovered the vulnerability. If an enterprise can patch 100% of systems, bringing the vulnerability state to zero, then it can reduce risk below the level prior to disclosure. In cases where 100% patching (or other protection) is not possible, the product of the remaining number of vulnerable systems multiplied by the number of threat actors must be less than it was prior to disclosure.
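The break-even condition above reduces to comparing two products. A hedged sketch, with purely hypothetical numbers for the pre-disclosure residual threat, the post-disclosure threat, and the patch rate:

```python
# Break-even check for disclosure: did (vulnerable systems x threat
# actors) end up lower than it was before disclosure? All figures
# below are made-up illustrations.

def aggregate_risk(vulnerable_systems: int, threat_actors: int) -> int:
    """Proxy for aggregate risk as the product of the two factors."""
    return vulnerable_systems * threat_actors

# Before disclosure: assume a small non-zero threat, since someone
# may already have found the flaw independently.
before = aggregate_risk(vulnerable_systems=1000, threat_actors=2)

# After disclosure: threat jumps sharply, but patching shrinks the
# vulnerable population.
after = aggregate_risk(vulnerable_systems=30, threat_actors=50)

# Disclosure "paid off" only if the post-disclosure product is smaller.
print(after < before)
```

With these particular numbers the patching wins, but a slightly lower patch rate (say, 100 unpatched systems) would flip the result – which is exactly the knife-edge the paragraph describes.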
There are two other factors that should weigh in on this equation. The first is the addition of risk associated with new software. For every patch that increases the complexity of software with new lines of code, it is reasonable to expect the vulnerability number to increase as well. So the expectation must be that the decrease in risk associated with plugging the known vulnerability is greater than the amount of new risk introduced by the patch.
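That tradeoff can also be written down directly. A minimal sketch, assuming a hypothetical per-thousand-lines defect rate and per-defect risk value (neither comes from the post; both are illustrative assumptions):

```python
# Sketch of the patch tradeoff: the risk removed by plugging the
# known vulnerability must exceed the risk introduced by the patch's
# new code. The defect rate and risk-per-defect are invented numbers.

def net_risk_change(risk_removed: float,
                    new_lines: int,
                    defects_per_kloc: float,
                    risk_per_defect: float) -> float:
    """Positive result means the patch reduced risk on net."""
    risk_added = (new_lines / 1000) * defects_per_kloc * risk_per_defect
    return risk_removed - risk_added

# A patch that removes 100 units of risk but ships 2,000 new lines
# at 5 defects/KLOC, 4 risk units each, still comes out ahead.
print(net_risk_change(risk_removed=100.0, new_lines=2000,
                      defects_per_kloc=5.0, risk_per_defect=4.0))
```

The point is not the numbers but the inequality: the expectation must be that `risk_removed` exceeds `risk_added`.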
Perhaps the most difficult factor to address is the risk foregone due to increased awareness of secure coding issues. Advocates of discovery and disclosure often cite this as an important point, perhaps even the most important reason for the process. But the comparison they make is always to an absolute, unchanged state, as if there were no other way for developers to learn secure coding practices. Given that the overriding assumption in this entire process is that risk is high to begin with, incidents would follow that provide the same information, just at a slower rate. This essentially negates the long-term effect.
The issue of discovery and disclosure is an ongoing one full of debate and rhetoric. It is only when we get to a fuller understanding of risk that we can fully assess the value proposition. In the end, discovery and disclosure is like junk food to the industry – it tastes really good and satisfies short-term desires, but it really isn’t healthy.