
real world crypto talks

Real World Crypto 2020 was last week. It’s a conference I like because the talks are usually pretty interesting. The crypto talks have real world applications and the real world application talks have crypto. Afterwards, there’s usually not just something to be learned, but something to be done. I didn’t actually attend every talk, but here’s some notes.

era of tls 1.3

TLS 1.3 uptake is progressing much faster than 1.2. (Or 1.1, which was pretty much ignored entirely.) Much of this is being pushed by a few large companies, CloudFlare, Google, Facebook. Of note, the Facebook app will use TLS 1.3 even on older platforms where the system libraries don’t support it. So this is rather different than the “XP effect” where nobody would do anything if it wouldn’t work on Windows XP.

The researchers’ measurements lagged slightly because as draft versions of TLS 1.3 evolved, they needed to update their software. When they did, they found the network operators had already updated. This is kinda unusual.

bleichenbacher’s cat

Take something old, the Bleichenbacher RSA sidechannel, take something new(ish), cache side channels, and break stuff. One might propose we should avoid RSA PKCS #1 v1.5 because while the math suggests it’s secure, it has proven quite problematic for implementations. But this is real world crypto; we’re going to be using it for another 40 years.

Of course, it’s not enough to find a vulnerability, one must demonstrate the attack. This takes a bit of extra work, to downgrade the browser connection from TLS 1.3 (which doesn’t support RSA key exchange), and then to attack many servers in parallel. In the end, what they recover is not the secret key for one connection, but the session cookie, although this is equivalently game over.

A few final notes. We should add “death by downgrade” to the lexicon. The attack can be parallelized because many services use the same RSA key on dozens (or hundreds (or thousands (or ...))) of servers. Easy to deploy, easy to attack. While not impossible, the attack may become slightly less practical against a single server.
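For concreteness, the oracle at the heart of a Bleichenbacher attack is just the PKCS #1 v1.5 padding check. A minimal sketch of what that check looks like (illustrative names, not from any real TLS stack):

```python
# Sketch of the PKCS #1 v1.5 padding check that creates a Bleichenbacher
# oracle. Hypothetical code, not taken from any real implementation.

def pkcs1_v15_unpad(em: bytes) -> bytes:
    # em is the decrypted RSA block: 0x00 0x02 <nonzero pad> 0x00 <message>
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        raise ValueError("bad padding")
    sep = em.find(b"\x00", 2)
    if sep < 10:  # require at least 8 bytes of nonzero padding
        raise ValueError("bad padding")
    return em[sep + 1:]

# The attacker submits chosen ciphertexts; whether the server accepts or
# rejects the padding leaks whether the corresponding plaintext starts
# with 0x00 0x02. Each accept/reject answer narrows the range the real
# plaintext can lie in, until it's recovered entirely.
```

If the accept/reject signal is observable at all, whether as an error message, a timing difference, or here a cache access pattern, the attack goes through.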


One of the properties of TLS is that responses are not independently signed. As in, the bank server says “you have $100” but there’s no way to forward that answer to your landlord, except via screenshot etc., which obviously has no verification. This is by design. You can know the server isn’t lying, and you’re not talking to the wrong server, etc., but that knowledge isn’t transferable.

But what if? So the shortcut here is I tell you my bank login, you login, and you see my balance, proof enough. But I’d maybe prefer you not see all my transactions? Or make any new ones?

If we know ahead of time that I want to prove something about the server response, we can work together to complete the TLS handshake, turning it into a three party handshake with some multiparty computation, even though the server is unaware of this. And then using a bit more math that went over my head (zero knowledge joke haha), I can prove that a number greater than 100 appears on the page, but without revealing exactly what that number is, or anything else in the response.

This is interesting work. On the bright side, all those shady “let us login to take a peek” sites will have no excuse to continue doing so. There may be some repercussions though, for anyone depending on TLS lacking attestation. Someone can prove that your Twitter DMs came from you, for instance, without Twitter’s assistance. Not sure how big a deal that is.

this is a robbery

Breaking a hardware security module (HSM). One technique is to pull it apart, carefully, without triggering the tampering countermeasures, etc., and apply funny voltages to it and see what it does. However, devices are certified against such attacks, so this could be tough. On the other hand, HSMs are just little computers that run software, and the software is not certified to not have stack overflows. So you grep the firmware for memcpy and now you’ve got root on the HSM. The keys which must not be accessed are just sitting in memory.

tpm fail

HSMs can also give up their keys via sidechannels. Nonce attacks against ECDSA are old hat, but I picked up a few interesting points from this talk. The Intel fTPM bug had already been fixed in Intel’s crypto library, but the ME team hadn’t updated it. So I think one lesson is if you’re building a supposedly secure highly trusted component, maybe watch your dependencies for fixes. And despite having a ready fix, it took them six months to release an update.

The other vendor affected, STMicro, seemed to have never heard of such timing attacks? But they did respond quite well once the situation was explained. Apparently this was the first time anybody had reported a security vulnerability to them.

Finally, the processors on these devices are rather slow, which magnifies the effect of timing leaks, making them easy to detect over the network. So the irony is using such a device may have made it possible to leak the key to remote attackers even when there’s no software vulnerability in the host system. Every component added to a system is a potential fault point.

encrypted memory

A quick but thorough overview of approaches to encrypting memory. This is a defense against various side channels where perhaps one VM reads the memory of another VM, but finds it’s encrypted and thus meaningless. Alas, people like their memory to be fast, and encrypting it can slow things down. And thus a series of tradeoffs and shortcuts.

too much crypto

JP argues for reduced round counts. He makes a few points. Attacks always get better, but what does that mean? It’d be pretty weird for attacks to get worse, right? And yet sometimes they do, where the time complexity shoots up as a new attack reduces the space required. But for the most part, progress, especially for symmetric ciphers, has been fairly uneventful. Cryptanalysis does not reduce secure rounds over time like a stream weathering a boulder, eventually reducing it to nothing.

So how many rounds do we need? If k is secure, do we need k+s for a safety margin? But then if k+s is secure, do we need k+s+s for a safety margin? I think he could have pressed this point a little more. We need chacha20 because what if an attack breaks chacha8. But what if an attack breaks chacha20? Doesn’t that mean we need chacha40? Maybe he didn’t want to suggest this for fear somebody would take it seriously. Ideally there would be some framework we could plug a cipher and round count into and it would spit out enough or not enough.
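It helps to remember the round count really is just a parameter. A sketch of the ChaCha core with rounds as a knob, assuming the IETF 96-bit nonce layout (use a real library for anything serious):

```python
# ChaCha block function with the round count as a parameter, to make the
# chacha8 / chacha20 tradeoff concrete. Sketch only, not production code.
import struct

def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xffffffff

def quarter(s, a, b, c, d):
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl(s[b] ^ s[c], 7)

def chacha_block(key, nonce, counter, rounds=20):
    # 16-word state: constant, 32-byte key, 32-bit counter, 96-bit nonce
    state = list(struct.unpack("<16I",
        b"expand 32-byte k" + key + struct.pack("<I", counter) + nonce))
    w = state[:]
    for _ in range(rounds // 2):  # one double round: columns then diagonals
        quarter(w, 0, 4, 8, 12); quarter(w, 1, 5, 9, 13)
        quarter(w, 2, 6, 10, 14); quarter(w, 3, 7, 11, 15)
        quarter(w, 0, 5, 10, 15); quarter(w, 1, 6, 11, 12)
        quarter(w, 2, 7, 8, 13); quarter(w, 3, 4, 9, 14)
    return struct.pack("<16I", *((a + b) & 0xffffffff
                                 for a, b in zip(w, state)))
```

chacha8 and chacha20 are the same function with a different loop bound; the whole debate is about what number goes in that one spot.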


Next up we have an attack that got better. Building on the SHA1 shattered collision, we have a chosen-prefix collision. This is more troublesome, in that various real world systems can be practically attacked.

This makes for an interesting contrast with the previous talk, although I think it’s fairly complementary. The shambles attack is a refinement of a technique developed 15 years ago. The main thing that’s changed though is that it’s now feasible to rent 2^60 or more worth of computing power. DES still offers close to 2^56 security; the problem is that 2^56 security isn’t enough. SHA1 offers a maximum of 2^80 security, which also probably isn’t enough. It’s not clear to me that a SHA1x2, SHA1 with doubled round count but the same structure, wouldn’t be similarly vulnerable to shattered or shambles. The problem with SHA1 seems to be a combination of a design flaw and small block size, not necessarily an inferior round count. SHA256 (or BLAKE2) has wider blocks, which is more important than round count. Hardware has been getting better much faster than attacks.

secret sharing

Everybody loves secret sharing. I take my secret, divvy it up, hand it out to my friends, and then when my body washes up on the shore, put them back together to discover the butler did it. Alas, it lacks many properties that we’d like in a real world crypto system. For instance, any of the parties to the recombination can alter the message to say anything they like. So you really need to hope that none of your friends can be turned by the butler, or else the poor poolboy is in for a bad time. Also, if anybody loses a share, it’s kinda annoying to call everybody back together for another crypto party. And then if anybody mixes up their shares, the police are going to be out searching for blawusdc.
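The textbook construction here is Shamir’s scheme: the secret is the constant term of a random polynomial, and the shares are points on it. A sketch, which also shows the integrity complaint, since a tampered share silently recombines to garbage:

```python
# Textbook Shamir secret sharing over a prime field. Any k of n shares
# recover the secret by Lagrange interpolation at x = 0. Note there is
# no integrity: a corrupted share produces a bogus secret with no
# warning, which is one of the missing properties discussed above.
import secrets

P = 2**127 - 1  # a convenient Mersenne prime

def split(secret, n, k):
    # random polynomial of degree k-1 with the secret as constant term
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    # Lagrange interpolation evaluated at x = 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

Any three of five friends can reconstruct, two learn nothing, and a butler who flips one bit of one share turns the answer into noise rather than getting caught.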

And so, math stuff, we have adept secret sharing. Which takes several of the properties journalists and butler employers need and adds them to secret sharing. Would you like to build an implementation?


Firefox wants to know if tracking prevention is working. To do that, they need to know how many trackers are blocked, but sending such data to the server would reveal a profile of your browsing habits. Instead of sending back 0 or 1 (per tracker) we send something like (15, -4, -10) to three servers. At the end of the day, the servers can add up all their numbers to get accurate counts, but no single server can tell where you’ve been or what you’ve seen. However, this is vulnerable to forgery. I can send (-50, -50, -50) to every server and destroy the results. So we include a tiny proof of correctness with each upload, and then the servers all compare notes and can verify that your sum is either 0 or 1, but without learning exactly which.
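The splitting step is plain additive secret sharing; the proof of correctness is the clever part (this is Prio), which I won’t attempt here. A sketch of just the share-and-aggregate part, assuming three servers:

```python
# Additive secret sharing as used in Prio-style aggregation: split each
# 0/1 observation into random shares that sum to the value mod a prime.
# Each server sums the shares it receives; the servers' totals sum to
# the true count. The zero-knowledge range proof ("this really was 0 or
# 1") is omitted here. Sketch only.
import secrets

P = 2**61 - 1

def share(value, servers=3):
    shares = [secrets.randbelow(P) for _ in range(servers - 1)]
    shares.append((value - sum(shares)) % P)  # force the sum to value
    return shares

def aggregate(per_server_totals):
    # each server reports only the sum of the shares it received
    return sum(per_server_totals) % P
```

Any single server sees only uniformly random numbers; it takes all of them together to learn even the aggregate, and no subset learns anything about an individual client.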

This looks kinda cool, but comes with a number of caveats. You still really need to trust the servers not to collude. I think it’s more a safeguard against accidental exposure (a database dump of any server reveals little). Or maybe a rogue admin, since the idea is each server is operated independently. But there’s still a strong non-crypto component here, which is just a privacy policy promising that the servers are operated independently. Case in point, all the servers are currently run by Mozilla. Are you trustworthy? Would you like to run one?

pseudorandom black swans

A cache attack against a random number generator. Yet again, a standardized random number generator contains a flaw that’s outside the FIPS specification. (This was recently updated, though. As of September, you’re not allowed to include side channels.) The flaw is that while T-table AES is known to be vulnerable, it’s fast, and so it remains in use, especially in scenarios where people think side channels are irrelevant.
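The reason T-table AES leaks: the table index in the first round is plaintext XOR key, so which cache line gets touched depends directly on key bytes. A toy model of the leak (not real AES):

```python
# Why T-table AES leaks: in the first round the lookup index is
# plaintext_byte ^ key_byte, so the cache line touched by T[index]
# depends on the key. An attacker who can tell which lines were
# accessed (e.g. via Flush+Reload) learns the high bits of the index.
# Toy model; real AES uses four 1KB tables of 32-bit words.

LINE = 64   # bytes per cache line
ENTRY = 4   # bytes per table entry -> 16 entries share a line

def touched_line(pt_byte, key_byte):
    index = pt_byte ^ key_byte
    return (index * ENTRY) // LINE  # which cache line the lookup hits

# Observing the line for a known plaintext byte narrows the key byte to
# the 16 values sharing that line; more plaintexts pin it down further.
```

The low four bits of each index stay hidden below cache line granularity, which is why these attacks need many observations and some statistics rather than one clean read.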

An interesting part of the talk was developing a threat model where they could practically exploit this. You need a TLS server that can convince a client to authenticate to it in a certain way, which isn’t always the case, but it’s certainly more realistic than imagine 2^118 spherical messages, etc.

post spectre crypto

There’s a lot of noise about various variants of spectre, the umbrella term for speculative execution attacks, and related CPU problems. And now CPUs have mitigations for this or that variant. But when was the last time you heard about spectre v1? Over a year ago? Surely that means it’s all fixed up, right? Nope. You haven’t heard anything because there are no fixes coming. Spectre v1 might be more accurately described as the existence of speculative execution. And it’s a tricky one.

To recap, this is where a branch is mispredicted and reads some data from a cache line it shouldn’t. Introducing a side channel. However, everything occurs in the same process, with no crossing of security or privilege boundaries. Javascript in a browser reading cookies for other sites. BPF in the kernel reading who knows what. The CPU has no way of knowing it’s about to read something that’s internally privileged. Fixing it requires identifying all the possible branches that can be attacker controlled, and fixing them up with lfence or so, which can be really slow. There’s been some progress on better approaches, but they’re not perfect.

This was pretty sobering. When will it be over?


See you in Amsterdam.

Posted 14 Jan 2020 18:13 by tedu Updated: 15 Jan 2020 04:46
Tagged: event software thoughts