Security Through Obscurity: Why we should give up on it right now

Security through obscurity is the practice of hiding how a system is implemented in the hope of making it more secure, whether the thing you are protecting is software or hardware. Intuitively this seems sensible: if we had a series of security doors, one of which hid the thing we were protecting, then surely not telling anyone which door it was behind would help. However, Kerckhoffs's principle says we should never rely on a system like this: a system should remain secure even if everything about it, apart from the key, is public knowledge.

The door analogy breaks down as we look into it further. It only really holds up if the lock on the door is largely untested and works differently from every other lock (we would expect the manufacturer to have given it at least some light testing at some point!). That is not to say it would necessarily fail under a serious attack, but nobody has ever tried, so how can we know it wouldn't simply give way under pressure? It becomes a weak point in our system, wrapped in false confidence. We could stand back and say the door is surely further secured by the fact that very few people know how the lock works, but really all that approach does is create a new secret for our system: one that few people understand and most merely assume is secure.
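
To make the untested-lock point concrete, here is a minimal Python sketch (the cipher, key, and message are all invented for illustration) of a classic homebrew "lock": repeating-key XOR. However secret we keep its design, it gives way the moment an attacker can guess a single stretch of plaintext:

```python
# A homebrew "secret" cipher: repeating-key XOR. Nobody outside the
# team knows how it works -- security through obscurity in action.
def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

key = b"hush"  # the hidden design detail we are relying on

# An attacker who guesses one plaintext/ciphertext pair recovers the
# key stream directly: key_byte = cipher_byte XOR plain_byte.
known_plain = b"Invoice #1"  # predictable message content
ciphertext = xor_encrypt(known_plain, key)
recovered = bytes(c ^ p for c, p in zip(ciphertext, known_plain))
print(recovered)  # b'hushhushhu' -- the repeating key falls out
```

Nobody ever had to see the source to break this; the untested lock failed the first time someone leaned on it.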

So the problem becomes that both the key and the way the lock works are now part of what secures the system. Moving back to the realm of computer encryption, the parallels are the algorithm used to produce a ciphertext and the encryption key. A system secured like this relies on two separate items being kept secret. If we keep just one secret (i.e. the encryption key), we make it easier for people to understand how the system works. Far from being less secure, this not only allows people to test and validate the method of encryption, it also means that regular people who use the system can more easily understand how it works.
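
By contrast, here is a minimal sketch of the single-secret approach using Python's `cryptography` package (an assumption on my part; any well-reviewed library makes the same point). The Fernet scheme it provides is publicly documented and widely analysed; the only secret left in the system is the key:

```python
# pip install cryptography  (assumed available)
from cryptography.fernet import Fernet

# The algorithm (Fernet: AES in CBC mode plus an HMAC) is public and
# well tested. The ONLY secret in this system is the key.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"the payroll file")
print(f.decrypt(token))  # b'the payroll file'

# Publishing this code costs us nothing; publishing `key` costs us
# everything. That is the single secret Kerckhoffs asks for.
```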

If people understand how a system is secured then they are more likely to take responsibility for securing it, and getting users to take that responsibility is one of the biggest challenges in security right now. With obscure, poorly understood methods of securing networks and data, people are tempted to say "well, how can it be my problem? I don't even know how it's supposed to work". But if people are educated about how a system works, then they can actually help (through observation and by maintaining good security practices) to keep the system secure.

By way of illustration, consider a guy travelling by train with an encryption key on a data stick, which he needs to decrypt something at his destination. He loses concentration and leaves the stick on the train, but shrugs it off: anyone who finds it would also need access to the secret algorithm to decrypt the data, so it's fine. That is obscurity-thinking. What we should really focus on is making keys easy to replace when this happens, because replacing a secret method of encryption is a huge amount of work once a large enterprise architecture has become reliant on it.
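
Sticking with the same assumed `cryptography` package, here is a sketch of how cheap key replacement can be when the key is the only secret. MultiFernet lets you add a fresh key and re-encrypt existing data, so a key lost on a train becomes an operational chore rather than a redesign:

```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())  # the key left on the train
token = old_key.encrypt(b"customer records")

# Rotation: generate a new key, put it first in the list, and
# re-encrypt the existing token. The algorithm never changes.
new_key = Fernet(Fernet.generate_key())
rotated = MultiFernet([new_key, old_key]).rotate(token)

# Once every token has been rotated, the lost key can be retired.
print(MultiFernet([new_key]).decrypt(rotated))  # b'customer records'
```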

Another problem (as with most security problems these days) is that relying on people to watch out for security issues and report them means relying on people, who will always be the weakest link in the chain. However, using open security systems of the kind Kerckhoffs describes, where the points of failure are easy to understand, doesn't make things less secure: it actually makes a system more secure, because people are able to help fix or mitigate the risks of their own flaws (namely their humanity).

So my rallying cry is that "Security through Obscurity" has got to go: it does no good for anyone in the end.
