In a statement that’s certain to convince no one, the U.S. National Security Agency denied that it used the Heartbleed vulnerability to further its spying efforts, following a news report saying it did exactly that for a few years.
The denial, issued via Twitter and in a statement posted by the National Security Council, says the agency first learned of the vulnerability when the rest of the public did.
Statement: NSA was not aware of the recently identified Heartbleed vulnerability until it was made public. — NSA/CSS (@NSA_PAO) April 11, 2014
Update: The Office of the Director of National Intelligence, James Clapper, has now issued a statement echoing that of the NSA. It reads, in part: “NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.”
The denial followed a Bloomberg News report citing two anonymous sources who said the agency did indeed exploit the bug for its own efforts. Rather than protect the privacy and security of U.S. citizens and companies who are now forced to spend billions on mitigation efforts, the NSA kept the bug secret and used it.
It’s worth remembering what the vulnerability allows: an attacker can use it to read data stored in a server’s main memory, grabbing random scoops of up to 64 KB per request that might contain passwords, encryption keys, classified data, or private messages.
In choosing to use the flaw and keep it secret, the NSA would have left Internet users the world over, including in the U.S., vulnerable. Who’s to say that intelligence agencies in Russia or China or anywhere else didn’t also learn about Heartbleed before its existence was a matter of public record?
Apparently the NSA has some pretty sophisticated software analysis tools. The vulnerability itself was introduced innocently enough: a simple error (a missing bounds check in OpenSSL’s TLS heartbeat extension) by a German software developer named Robin Seggelmann.
I asked Chris Wysopal, the chief technology officer at Veracode, a Boston-based firm that specializes in analyzing software code for security problems, about this.
“If you had enough funding you would have a staff monitor critical open source projects and code review the new code in each release,” he told me. “This flaw is something that could be found through a code audit. [The NSA] probably knew about it a few days after its release and started scanning for sites that deployed the vulnerable release. … Think of all the passwords they must have collected.”