NSA, Target, Heartbleed and Ethics
It’s no surprise that the NSA may have used the Heartbleed exploit to tap into sensitive encrypted communication, including that of Google. If you understand how the bug works, it goes hand in hand with undercover espionage.
Heartbleed, the name given to a flaw in OpenSSL (a widely used implementation of the Secure Sockets Layer and TLS protocols), allows sensitive information to leak (or bleed) from a server to any client connected to it. What makes this even more interesting is that no connection hijacking is required: any client that connects to a vulnerable server can trigger the leak directly.
Here’s a simple scenario:
- NSA connects to server with the Heartbleed flaw.
- NSA stays connected, gathering leaked information until it receives the private SSL key of the server.
- NSA stores the private key and uses it to decrypt past and future communication with the server’s entire domain, e.g., google.com.
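The flaw behind the scenario above is a missing bounds check in OpenSSL’s handling of the TLS heartbeat extension (RFC 6520): the server echoes back as many bytes as the client *claims* to have sent, even when the actual payload is shorter, copying adjacent process memory into the response. A minimal Python sketch of the message layout, not working exploit code (the function names here are hypothetical, and the patched behavior is paraphrased):

```python
import struct

def build_heartbeat_request(payload: bytes, claimed_length: int) -> bytes:
    """Build a TLS heartbeat request per the RFC 6520 layout.

    A benign client sets claimed_length == len(payload). The Heartbleed
    attack lies: it claims a large length while sending little or no
    payload, and an unpatched server echoes claimed_length bytes back,
    leaking whatever sat in memory next to the request buffer.
    """
    # type = 1 (heartbeat_request), then a 16-bit big-endian payload length
    return struct.pack(">BH", 1, claimed_length) + payload

def server_should_respond(message: bytes) -> bool:
    """The fix in OpenSSL 1.0.1g amounts to this bounds check."""
    msg_type, claimed = struct.unpack(">BH", message[:3])
    actual_payload = message[3:]
    # Patched servers silently discard requests whose claimed length
    # exceeds the payload actually received.
    return msg_type == 1 and claimed <= len(actual_payload)
```

A malicious request such as `build_heartbeat_request(b"", 0xFFFF)` carries zero bytes of payload but claims the maximum 65,535, which is why a single connection can bleed roughly 64 KB of server memory per request.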
The worst part is that this vulnerability can be exploited without any detection and without leaving traces behind. That absence of traces makes the bug even worse, because administrators cannot go back and determine what was lost.
Now, this could have been used by the NSA, or it could have been used by a hacker. The end result is the same. Snooping and data loss are possible.
I would take this a step further and question whether breaches like Target’s data loss were the result of this flaw. Given that the Target breach happened just a few months before the bug was announced, a hacker could have used it to tap into encrypted financial transactions from payment terminals or servers. And since the flaw leaves no traces, administrators would be left baffled as to how the data was lost.
Recovering from this is not as simple as it might appear. Data that has already leaked cannot be recovered. But there’s much more to it than that. After applying the necessary upgrades to prevent future leaks, an administrator must also replace the certificates and private keys responsible for encrypting and protecting data. If they don’t, encrypted transactions that happen in the future can also be cracked: once an attacker holds a private key that has already leaked, all bets are off, and they can decrypt data at will.
My fear is that a majority of administrators will apply the patch without completing the final step of replacing the server’s certificates. Test and compliance tools will report a status of compliant without accounting for the possible loss of the private encryption key, because they have no way to determine whether it was leaked earlier. As the news about Heartbleed dies down, hackers could take advantage of this. A full recovery means following the recovery steps all the way through, including the replacement of the certificates.
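One crude but useful signal of whether that final step happened is the certificate’s issue date: a certificate issued before Heartbleed’s public disclosure (April 7, 2014) was plausibly served from a vulnerable box, so its key should be considered suspect until reissued. A minimal sketch of that heuristic, using the `notBefore` date format that Python’s `ssl.getpeercert()` returns (the function name and the heuristic itself are mine, not from any compliance tool):

```python
from datetime import datetime, timezone

# Heartbleed was publicly disclosed on April 7, 2014. Any private key
# that lived on a vulnerable server before patching may have leaked.
HEARTBLEED_DISCLOSURE = datetime(2014, 4, 7, tzinfo=timezone.utc)

def cert_needs_replacement(not_before: str) -> bool:
    """Flag certificates issued before the disclosure date.

    `not_before` uses the format ssl.getpeercert() reports,
    e.g. 'Mar 10 00:00:00 2014 GMT'. Issued-before-disclosure does
    not prove compromise; it only means the key was never rotated
    after the window in which it could have bled.
    """
    issued = datetime.strptime(not_before, "%b %d %H:%M:%S %Y %Z")
    return issued.replace(tzinfo=timezone.utc) < HEARTBLEED_DISCLOSURE
```

The point of the sketch is that the check is purely about dates, which is exactly why automated compliance scans miss the problem: a pre-disclosure certificate on a patched server looks perfectly healthy unless someone asks when its key was last rotated.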
On the paranoid end of the spectrum, some might argue that the bug was planted on purpose. However, given how far the implications of a bug like this extend beyond snooping, I think that’s very doubtful.
The real question that should be asked is, if the NSA was exploiting a bug with such worldwide implications, was it not their responsibility to report it as soon as it was discovered? For example, a terrorist or foreign hacker could use this to cause severe damage across a range of industries. It could have been used to exploit consumers’ financial information, as may have been the case with Target. Would that risk be worth the ability to tap into Google’s data?
How can this be prevented in the future? It can’t. Nothing is ever 100 percent secure, and this bug is just another example of it. Code is, after all, written by humans, and humans are not perfect. What we should continue to do is move from being reactive to proactive about the way we think about security.