This is another from my LiveJournal, written October 14th, 2006:
While reading my new "Netscreen Firewalls" book for work, I chanced upon the following sentence (paraphrased):
"ScreenOS is more secure than open source operating systems, because it's source is unable to be searched for vulnerabilities"
Normally I would ignore such tripe as the rantings of a demented mind, but tonight I reflected on it, and on the general outlook of the security through obscurity camp.
As some of you may know, and others may not care, there are (surprise!) differing opinions among computer security professionals.
One camp, the one to which I most strongly adhere, holds that open source code, that is, code which anyone can read, is more secure because of the many eyes reading it and searching it for weaknesses. Eventually the bugs work themselves out, and the code becomes more secure.
The other camp argues that closed source code is more secure, because Evil Hackers(tm) can't go through the code looking for vulnerabilities to exploit (which they, presumably, wouldn't tell the authors about). This is generally known as security through obscurity.
The perpetual fighting between the two camps has shown no signs of slacking off, and I doubt that it ever will. I suspect that the vastly different ideologies of the "share everything" and the "hide everything" individuals will prevent that from occurring.
Here's what I'll tell you, though. There are a couple of things that you won't often hear the open source professionals say...such as that security through obscurity does have its uses, and sometimes it can be handy. Most won't tell you this because they will argue that Evil Hackers(tm) will eventually beat down your security, reverse engineer it, take it apart, and write exploits for it, without your ever knowing it's taking place, and Good Lord when that happens you'll be screwed because you can't even modify the source to the program and fix the hole and you'll probably get fired and have to move to Des Moines and work IT for a pig slaughtering factory to pay for your sins.
Here's the deal. That may, or may not, happen. Here's another thing that security professionals won't always admit: securing a computer system is a gamble, in exactly the same way that insurance is a gamble. Sure, any time you take a measured risk, you must analyze the situation, hedge your bets, and diversify, but how does that apply to security? It's time to be frank with ourselves here.
A secure government installation will probably have multiple layers of security. Historically, all good security systems involved something you have (typically an ID card) and something you know (such as a passcode). Technology is at the point where we can include "something you are", such as a retinal scan or a fingerprint scan, or maybe in a couple of decades, a DNA scan. Someone trying to compromise your account would need to steal your card, find out your PIN, and maybe scoop out an eyeball to get into your system. It's possible, if implausible (and a bit messy), but I'm sure some foreign agents wouldn't hesitate a moment for the right information, and it's that last sentence that seals the deal.
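To make the "have / know / are" layering concrete, here's a minimal sketch of what a three-factor check might look like. This isn't from ScreenOS or any real product; the names (LoginAttempt, authenticate, the badge directory) are made up for illustration, and a real system would store hashed credentials and compare them in constant time rather than with plain equality.

```python
# Hypothetical three-factor check: something you have, something you know,
# something you are. All names and data here are invented for illustration.
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    card_id: str       # something you have: the ID card presented
    passcode: str      # something you know: the passcode entered
    retina_hash: str   # something you are: digest of the retinal scan


def authenticate(attempt: LoginAttempt, directory: dict) -> bool:
    """Grant access only if all three factors match the enrolled record."""
    record = directory.get(attempt.card_id)
    if record is None:
        return False  # unknown card: the "have" factor already fails
    knows = record["passcode"] == attempt.passcode            # "know" factor
    is_match = record["retina_hash"] == attempt.retina_hash   # "are" factor
    return knows and is_match


# Enrolled records keyed by card ID, plus one good and one bad attempt.
directory = {"badge-042": {"passcode": "7291", "retina_hash": "ab3f9c"}}
print(authenticate(LoginAttempt("badge-042", "7291", "ab3f9c"), directory))  # True
print(authenticate(LoginAttempt("badge-042", "0000", "ab3f9c"), directory))  # False
```

The point of stacking the factors is that an attacker has to defeat every one of them at once, which is exactly why the scenario above gets so messy.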
Government secrets are fine, but your shopping list probably doesn't warrant a retinal scanner. My bike used to have a little chain and a Master Lock on it. My bank's vault has a door a foot thick. What I'm getting at is that you secure something relative to the severity of its being compromised. A vulnerability in a recipe database might mean a loss of your data, but the exposure of a security issue in my firewall's ScreenOS could mean dire consequences for all of the people who run that particular hardware.
A friend of mine had this as his signature for a while:
"Half of learning any secret is knowing that there is a secret"
Security through obscurity. If we fly under the radar, then the Evil Hackers(tm) won't beat down our door, reverse engineer our software, and write exploits. Probably because they don't really care about our recipes.
Obscurity is a great hiding place for things that people don't care much about finding. Our recipe database, for one. Unless a really bored hacker chances upon it, gets interested, and spends the massive amount of time it would take to reverse engineer and exploit it, we're probably going to be safe. There would be no return on the time invested.
Obscurity is a really crappy way to hide something that many people are attempting to exploit. Yes, you make it more difficult for the Evil Hackers(tm) because they don't get the source code, but you lose the benefit of having Good Hackers(tm) look at the code too and fix it. You are left with your (probably understaffed and overworked) internal development team fixing the holes that they find. Sure, you can license your code out for review, but whoever you send it to for checking, chances are really good they're not going to be as creative as the community at large, nor as dedicated as a curious hacker bent on understanding your system.
What security through obscurity produces for the interesting target is a loosely tied together community of people cracking away at your security design, testing the perimeters like the raptors in Jurassic Park, and we all remember how that turned out. You might be secure for a while, because you designed your system well, but if there is a vulnerability, and trust me, there most likely is, then once it is found (and it will be), you will be at the mercy of the people who have uncovered the key.
In conclusion (and to review), security through obscurity is acceptable for targets which nearly no one has any interest in. It is not acceptable for interesting targets. If at all possible, you should secure your targets with community-produced, time-tested solutions, configured correctly and with the proper amount of paranoia, then heap some obscurity on top of that and hope no one notices you.