Imagine trying to throw a dart at a bullseye that’s 200 feet away with only your bare hands. Now, add a blindfold to the equation. Theoretically, it might be possible. But practically, it’s pretty much impossible—about the same odds as trying to break a new form of software protection called indistinguishability obfuscation.
Indistinguishability obfuscation—similar in nature to a technology IBM has patented called homomorphic encryption—is one of the possible futures of cryptography. The main idea behind it is relatively straightforward: through some arcane math, a team of six cryptography experts has figured out that it's possible to mask code, making it nearly impenetrable to hacks while still leaving it runnable as useful software.
“Traditional encryption is about securing data,” said Dr. Dan Boneh, an encryption expert at Stanford. “Obfuscation is about securing software—a very different notion than encryption.”
The obfuscator works like this. A developer would code a piece of software and then, for added security, run it through a typical garbler, as is done with most commercial software these days. After that, the developer would run the garbled software through the indistinguishability obfuscator—think of it as something like an encryption application.
The obfuscator takes apart the garbled code and mixes in random elements in such a way that when the obfuscated program is run in its intended fashion, the randomness cancels out and the intended output is generated. If the program is told to do anything other than what its designer intended, it won't work.
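To get a feel for the idea of randomness that cancels only along the intended path, here's a drastically simplified Python toy (my own illustration, not the researchers' actual construction, which rests on far deeper math): a secret constant is split into two values that each look random on their own, and only running the program as intended recombines them.

```python
import secrets

def make_obfuscated_adder(k):
    """Toy sketch: hide the constant k inside two shares that look
    random individually but cancel when combined as intended."""
    r = secrets.randbelow(2**32)       # fresh randomness per build
    share_a = (k + r) % 2**32          # meaningless without r
    share_b = (-r) % 2**32             # meaningless without share_a

    def program(x):
        # Running the program the intended way cancels the randomness:
        # x + (k + r) + (-r) == x + k  (mod 2**32)
        return (x + share_a + share_b) % 2**32
    return program

add5 = make_obfuscated_adder(5)
print(add5(10))  # 15
```

Inspecting either share alone reveals nothing about the hidden constant; real indistinguishability obfuscation aims for an analogous guarantee over an entire program, not a single number.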
“It means that I will be able to give you the code, but you won’t be able to learn how it’s constructed, or about anything I wish to keep a secret within it,” Boneh said.
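Cryptographers' classic baby example of that property is a "point function"—a program that recognizes one secret input. A hash-based password checker, sketched below, already has the flavor: you can hand someone the runnable code, but (assuming the hash is hard to invert) they can't read the secret out of it. The secret string here is just a made-up example.

```python
import hashlib

def obfuscate_point_function(secret: str):
    """Toy sketch: publish only a hash of the secret. The returned
    checker can be shared as code, yet the secret itself cannot be
    recovered from it (assuming SHA-256 is hard to invert)."""
    digest = hashlib.sha256(secret.encode()).hexdigest()

    def check(guess: str) -> bool:
        # Hash the guess and compare; the plaintext secret is never stored.
        return hashlib.sha256(guess.encode()).hexdigest() == digest
    return check

check = obfuscate_point_function("hunter2")
print(check("hunter2"))   # True
print(check("password"))  # False
```

Indistinguishability obfuscation is the vastly harder generalization: doing this not just for password checks but for arbitrary programs.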
Today most closed-source code is obfuscated to some degree—not directly accessible to prying eyes. But such code is nearly always hackable given enough resources, cryptography expert Dr. Zachary Peterson told me over the phone. And online, most code is openly embedded in web pages, where it can be easily studied. That’s why being able to use indistinguishability obfuscation to create “black box” software is a breakthrough.
Dr. Amit Sahai—one of the researchers who made the breakthrough, and who has been working on obfuscation for 17 years—took pains to explain that despite catchy headlines, the claim that obfuscation would make software “unhackable” isn’t accurate. The reality is that it still would be possible to crack; it’s just that, with current technology, it would be damn hard.
“You might get lucky and hit the dart board, it’s just a hell of a lot harder,” he said. (The dart board thing was my metaphor, but Sahai said it was a good one so we sort of talked around it.)
It’s important to note that right now this new form of software obfuscation is completely impractical to use, Sahai said. It would take weeks or months to receive a meaningful result from querying an obfuscated program. That’s mostly because of the computing power required, and the incredibly complex calculations necessary to run obfuscated code.
“We have the first mathematical approach,” he said. “This isn’t coming in the next couple of years, to say the least.”
Still, having the ability to develop black box software would allow tight controls on what it’s used for, and could potentially ensure greater privacy for people using cloud services such as Gmail. For example, if the email service’s ad-serving software package were obfuscated, it would be impossible for either Google or third parties to do anything other than check email keywords and select which ads were served, Sahai explained.
(Of course, for that particular example to succeed, Sahai said you’d have to be willing to trust that Google programmed the black box to do what they claimed it did.)
Indistinguishability obfuscation could also be used to place controls on digital and real-world surveillance technologies. For example, if you’re worried that red light cameras can also double as license plate readers—a controversial technology that the ACLU says may infringe on Americans’ liberties—obfuscated software scanning the images could ensure they were used only for their apparent purpose: doling out red light tickets.
Another example of its use, also having to do with government accountability, is this: were the NSA to build an obfuscated black box into a surveillance software package such as XKeyscore, it could theoretically force agents to use XKeyscore for specific purposes—whether that means preventing XKeyscore searches on American citizens, or enforcing whatever other predefined limitations were deemed necessary.
Also, inevitably, obfuscation would end up making its way into future generations of malicious programs such as malware, remote access trojans, and other hacker tools. While it wouldn’t make malware any tougher to capture, once a researcher does get their hands on a sample, they will have a much harder time cracking the black box open, Sahai said—which may make it harder to build defenses against such malware.
Those are only a few of the possible uses for obfuscation that I discussed with the security researchers I interviewed for this report. There are numerous others, including email encryption (by handing a friend a black box that can decrypt messages), shorter and more efficient digital signatures—used to sign software patches, for example, which sounds simple but is actually tough and important—and solutions to dozens of what cryptographers call open problems.
Despite the challenges ahead, Sahai pointed out that when the first RSA encryption schemes came out in the 1970s, they were totally impractical as well. Today, they’re seamlessly integrated into machines. Peterson said that while the concept still needs to be proven, the potential is intriguing.
“It’s worth approaching with a healthy sense of skepticism,” Peterson said. “But as far as I can tell cryptographers are excited about its possibilities.”