flak

heartbleed in rust

More ghostly followup. There was a thread on Hacker News wherein it was claimed that using rust would have prevented Heartbleed. Specifically, it would not have even compiled. That sounds like a challenge!

The thread starts about here. I don’t mean to single out the participants, but the claim about preventing Heartbleed is nicely specific. Unlike vaguer claims about memory safety in general, this is a particular claim which we can test.

Now, I don’t intend to write a full-blown TLS stack in rust, so I will have to take some shortcuts and reduce the scope of the problem slightly. Hopefully the simulacrum retains the essence of the problem. Simply stated, our objective is to write a program which reads a file (packet) from the filesystem (network), and then echoes it back. The length of the echo request will be encoded as a single byte with data to follow. This is equivalent to the TLS heartbeat functionality. Our program will operate on two such packets, yourping and myping, and write out yourecho and myecho. If any data from your packet bleeds into my packet, we have a problem: heartbleed.

We’ll begin with a basic rust program.

use std::old_io::File;

fn pingback(path : Path, outpath : Path, buffer : &mut[u8]) {
        let mut fd = File::open(&path);
        match fd.read(buffer) {
                Err(what) => panic!("say {}", what),
                Ok(x) => if x < 1 { return; }
        }
        let len = buffer[0] as usize;
        let mut outfd = File::create(&outpath);
        match outfd.write_all(&buffer[0 .. len]) {
                Err(what) => panic!("say {}", what),
                Ok(_) => ()
        }
}

fn main() {
        let buffer = &mut[0u8; 256];
        pingback(Path::new("yourping"), Path::new("yourecho"), buffer);
        pingback(Path::new("myping"), Path::new("myecho"), buffer);
}

The above program does compile, albeit with some warnings because I’m a lamer and used std::old_io. (The custom allocator in use here is called “array on stack”.) It’s not great code, but it’s not especially tortured either. I didn’t resort to unsafe FFI calls to C memcpy, for instance.

Let’s see what it does with some sample inputs.

$ echo \#i have many secrets. this is one. > yourping
$ echo \#i know your > myping
$ ./bleed
$ cat yourecho
#i have many secrets. this is one.
$ cat myecho
#i know your
secrets. this is one.

That’s a bingo. Your secrets bled into my echo.

Of course, no true rust programmer would ever write a program like that, so perhaps we haven’t yet demonstrated heartbleed in rust.

Let’s take a break from rust to consider the equivalent program written in C.

#include <fcntl.h>
#include <unistd.h>
#include <assert.h>

void
pingback(char *path, char *outpath, unsigned char *buffer)
{
        int fd;
        if ((fd = open(path, O_RDONLY)) == -1)
                assert(0);
        if (read(fd, buffer, 256) < 1)
                assert(0);
        close(fd);
        size_t len = buffer[0];
        if ((fd = creat(outpath, 0644)) == -1)
                assert(0);
        if (write(fd, buffer, len) != (ssize_t)len)
                assert(0);
        close(fd);
}

int
main(int argc, char **argv)
{
        unsigned char buffer[256];
        pingback("yourping", "yourecho", buffer);
        pingback("myping", "myecho", buffer);
        return 0;
}

Survey says no true C programmer would ever write a program like that, either. Now where does that leave us?

code no true C programmer would write : heartbleed :: code no true rust programmer would write : (exercise for the reader)

The point here isn’t to pick on rust. I could have written the same program with the same flaw in go, or even haskell if I were smart enough to understand burritos. The point is that if we don’t actually understand what vulnerabilities like Heartbleed are, we are unlikely to eliminate them simply by switching to a magic vulnerability-proof language. Everyone may have heard about Heartbleed, but that doesn’t necessarily make it a good exemplar.

Perhaps Heartbleed is just a stand-in term, referring not to Heartbleed itself but rather to any number of other bugaboos. I’m not sure that’s better. “Vulnerabilities like Heartbleed, but not too much like Heartbleed” is a poorly defined class. It’s hard to assess any claims about such a class.

When speaking about vulnerabilities and how they can be resolved, we should try to be precise and accurate. The hype around Heartbleed (and ShellShock, etc.) makes them attractive targets to latch an argument on to, but in doing so we must be careful that our chosen example fits the argument. Misidentified problems lead to misapplied solutions.

Opinions seem to differ about the defining characteristics of Heartbleed. The third paragraph describes a rough heartbeat equivalent functionality, but could have been clearer in outlining the basics of the flaw. Reused buffers containing sensitive data leak information due to a missing length validation check. The flaw doesn’t require us to read past the end of the buffer, only the most recently initialized portion of it. It does require reusing the same buffer, accomplished here by using a single stack buffer, although a collection of recyclable buffers would also work. Typically, one would not expect secret data in a heartbeat ping packet, but typically the same buffers would be reused for normal data traffic as well. Will rust programmers ever resort to such tricks? Apparently even malloc was too slow for OpenSSL. Defining Heartbleed as “the bug that leaked private keys” is rather too narrow, focusing on one consequence and not the mechanism of the flaw. How many server keys were actually compromised (the one test server?), versus how many people were dumping Yahoo passwords and tokens within hours of the announcement?
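In code, the missing validation amounts to a single comparison. A sketch of the check the programs above skip (echo_len is my name for it, nothing from OpenSSL): the claimed length must be tested against the bytes actually read, not against the capacity of the reused buffer.

```rust
// the claimed length counts the length byte itself, as in the
// pingback programs above, where buffer[0..len] gets echoed back
fn echo_len(buffer: &[u8], bytes_read: usize) -> Option<usize> {
    if bytes_read < 1 {
        return None;
    }
    let len = buffer[0] as usize;
    // the Heartbleed check: len must fit within THIS packet's
    // payload, not merely within the (recycled) buffer behind it
    if len > bytes_read {
        return None;
    }
    Some(len)
}

fn main() {
    // a well-formed 5-byte ping claiming 5 bytes: fine
    assert_eq!(echo_len(&[5, b'p', b'i', b'n', b'g'], 5), Some(5));
    // a 3-byte ping claiming 34 bytes: rejected instead of bled
    assert_eq!(echo_len(&[34, b'h', b'i'], 3), None);
}
```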

As for the private keys, I’m surprised how many people focused on them to the exclusion of everything else. Even with Yahoo’s private key, I wasn’t in a position to intercept their traffic. But usernames and session cookies? Those I could use from anywhere. Or SMTP. Many connections are upgraded with STARTTLS, but without authentication. Anyone in a position to execute a MITM with a stolen key could simply strip TLS. Heartbleed, however, allowed people from around the world to read any email I had recently received.

Interestingly, despite the obvious parallels to Heartbleed, the recent X server XkbSetGeometry info leak is probably a better example of a bug that rust would have prevented.

For further reading, the JetLeak vuln in Jetty is practically identical to Heartbleed, except it occurred in Java, a nominally memory safe language.

One might also consider one of the bugs CloudFlare found in their Go DNS code. “The catch was that our ‘pack and send’ code pools []byte buffers to reduce GC and allocation churn, so buffers passed to dns.msg.PackBuffer(buf []byte) can be ‘dirty’ from previous uses.” Oops.
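The same trap is easy to spring in entirely safe rust with a hand-rolled pool. (Pool, get, and put below are my invention, a sketch of the pattern, not CloudFlare’s actual code.)

```rust
// a minimal recycling pool: get() may hand back a previously used Vec
struct Pool {
    free: Vec<Vec<u8>>,
}

impl Pool {
    fn get(&mut self, len: usize) -> Vec<u8> {
        let mut buf = self.free.pop().unwrap_or_default();
        // resize() zero-fills newly grown space, but any bytes already
        // within len keep whatever the previous user left there
        buf.resize(len, 0);
        buf
    }
    fn put(&mut self, buf: Vec<u8>) {
        self.free.push(buf);
    }
}

fn main() {
    let mut pool = Pool { free: Vec::new() };

    // the first "packet" fills its buffer with something sensitive
    let mut a = pool.get(16);
    a[..6].copy_from_slice(b"secret");
    pool.put(a);

    // the second "packet" writes 2 bytes but sends all 16: a dirty buffer
    let mut b = pool.get(16);
    b[..2].copy_from_slice(b"hi");
    assert_eq!(&b[2..6], b"cret"); // stale bytes from the first packet
}
```

No unsafe anywhere, no reads past the end of anything, and yet the second buffer ships four bytes of the first one’s secret.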

Tony took another look at Would Rust have prevented Heartbleed?. I think it’s a good post, summarizing the issue and clearly breaking down the difference between Heartbleed and “Tedbleed”. But again with the private key fixation. Worst case scenario for Tedbleed is “An attacker can recover arbitrary plaintexts from encrypted traffic”. I don’t think it gets much worse than that. I certainly don’t agree that Heartbleed is “a lot worse” than that. (I’ll also quibble with Heartbleed being out of bounds pointer arithmetic, but that’s a lesser point.)

So can this ever occur in real rust code? Start with this change to claxon. Introduce buffer reuse. “Fortunately this is safe to do.” Alas, not entirely. claxon: Malicious input could cause uninitialized memory to be exposed.

Posted 02 Feb 2015 06:37 by tedu Updated: 19 Jun 2019 22:56
Tagged: c programming rust security