Blogger: Ramon Krikken
There comes a point in every security professional’s career where you think certain aspects of secure application design must by now be common sense – surely everyone’s doing it. But then one morning, reading whatever syndicated news feed, you see an item with a screen shot: an application’s administrative interface with a list of people’s names, addresses, phone numbers, email addresses, and … passwords … in plain text … for the whole world to see. Some of these passwords were reported to be the same as those used on PayPal accounts associated with the listed email addresses. Oops, our bad – luckily nothing more than a ‘minor hacking incident’, at least according to the site owner.
I’m talking, as some might already have guessed, about the recent hack of the administrative interface of Bill O'Reilly's 'premium members' web site, purportedly in response to his comments on the Sarah Palin email hack (Mark Diodati covered the Palin incident in this post over at the IdPS blog.) I certainly don’t think it’s very nice – or legal, for that matter – to perform such a hack and then post unsuspecting people’s contact information and passwords for all to see, but I’m also not very impressed with the poor application design that allowed the information, and the passwords in particular, to be exposed like this. Not to mention that the interface was apparently not protected at all, allowing it to be found by brute-force enumeration of web site URLs – but we’ll ignore that mistake for purposes of this post.
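The root flaw here is simple: if passwords are stored (and displayable) in plain text, any exposure of the data store exposes every credential. As a minimal sketch of the alternative – and to be clear, the specific choices here (PBKDF2 over SHA-256, a 16-byte random salt, an illustrative iteration count) are my assumptions for the example, not anything from the site in question – the application would store only a salted, slow hash and compare against it at login time:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune for your hardware

def hash_password(password):
    """Derive a salted hash; the plaintext is never written to storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither reveals the password

def verify_password(password, salt, digest):
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

With a scheme like this, an administrative screen simply has nothing readable to display – which is the point: the 'need-to-know' decision was made at design time, not left to whoever built the admin page.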
Now, I’m not writing this to pick on Bill and his design team, and I also wondered whether this is such a basic mistake that it wouldn’t even be interesting to write about. After all, in most enterprises a flaw like this wouldn’t pass QA, if it even got that far. However, it wasn’t too long ago that credit card companies printed full card numbers on statements, or that my whole Social Security number was displayed on some web page (and worse, used as the ‘subscriber identifier’ for an insurance company.) Most of those issues have since been fixed, but even now the ‘secret answers’ to the ‘security questions’ used for resetting a main password are displayed in plain text on many a ‘my account’ page.
What all of these issues have in common is that they are violations of least privilege or need-to-know principles (which in my opinion include ‘already-should-know-so-please-don’t-repeat’) in the application’s business logic, and this happens a lot more often than it needs to. It’s not that the problem has been ignored; it’s just that there were more pressing issues to deal with. As a result, business logic flaws like these are not the first thing that comes to mind when we think of ‘application security’ – the onus is still very much on the developer writing the code, less so on the business analyst developing the requirements, and even less on the business users themselves.
We’ve been very focused on the underlying technical vulnerabilities such as those in the OWASP top 10, which makes sense given the current threat environment. If we didn’t address these problems, there simply wouldn’t be such a thing as web application security. However, it’s definitely time to start working more and more on the business logic side of things. One of the advantages, if we manage to start early enough in the SDLC, is that we may not even have to worry so much about technical controls – we’ll simply have decided that a piece of information will not be stored, or that we’ll mask part of certain information in order to reduce exposure.
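To make the masking idea concrete, here is a small sketch – the show-last-four convention and the helper name are my own illustrative choices, not a prescribed standard – of how a statement or account page might render a card number once the business requirement says ‘never display the full value’:

```python
def mask_card_number(pan):
    """Render a card number for display, keeping only the last four digits."""
    digits = pan.replace(" ", "").replace("-", "")  # normalize common formats
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1111"))  # ************1111
print(mask_card_number("4111-1111-1111-1111"))  # ************1111
```

The design point is that the decision lives in the requirement, not in the rendering code: once ‘display only the last four’ is a stated rule, no downstream screen – administrative or otherwise – has the full value to leak.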
Least privilege and need-to-know are powerful security principles we can apply to reduce the amount of information that is exposed to begin with – let’s make sure we advocate their use to our business customers.