warning: implicit backdoor
One way to slip malicious code into a project is to hack into its build server and just drop it in. Messy. Another way is to hack a trusted developer’s machine and alter the code there so that they commit it, but that might get spotted during code review. A third way is to become a developer, then yourself commit a seemingly innocuous patch containing an obfuscated backdoor. This is sneaky. Even better is to have somebody else unwittingly commit the backdoor for you.
code
Consider this code to allocate some buffers.
#include <stdlib.h>

/* return a block big enough for num 64-byte buffers, within a limit */
void *
allocatebufs(int num)
{
	size_t limit = 256;
	if (num > limit)
		return NULL;
	return malloc(num * 64);
}
This isn’t top quality code, but it’s totally safe and secure. The comparison converts num to size_t, so a negative num becomes a huge unsigned value and is rejected along with anything over the limit. It does, however, trigger a warning about signed vs. unsigned comparison. Many developers don’t like to see those. Some will even try to fix it.
void *
allocatebufs(int num)
{
	size_t limit = 256;
	/* cast added to silence the warning */
	if (num > (int)limit)
		return NULL;
	return malloc(num * 64);
}
Now the warning is gone. And they’ve introduced a serious security hole. The comparison is now done as signed int, so a negative num sails past the check, and num * 64 can overflow to a small value before malloc ever sees it.
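To make that concrete, here is a minimal driver of my own (not from the original exchange) that abuses the “fixed” version. Signed overflow is technically undefined behavior, but on typical compilers with 32-bit int it wraps, which is exactly what an attacker counts on.

#include <stdio.h>
#include <stdlib.h>

/* the warning-silenced version from above */
void *
allocatebufs(int num)
{
	size_t limit = 256;
	if (num > (int)limit)
		return NULL;
	return malloc(num * 64);
}

int
main(void)
{
	/* -2147483644 is negative, so it passes the (int) check.
	   -2147483644 * 64 wraps to 256 in 32-bit int arithmetic,
	   so malloc gets a 256 byte request instead of a rejection. */
	void *p = allocatebufs(-2147483644);
	printf("got buffer at %p\n", p);
	free(p);
	return 0;
}

A caller that later trusts num as a count of usable buffers is now writing far past a 256 byte allocation.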
If you’re a sneaky bastard, you might write the first version and submit it, knowing that a trusted developer somewhere down the line will alter it. And you’ve got perfectly plausible deniability. Your code was secure. They introduced the bug.
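For contrast, here’s a sketch of a fix that silences the warning without opening the hole, assuming the intent is simply to reject bad counts: handle the sign explicitly, then compare like with like.

#include <stdlib.h>

void *
allocatebufs(int num)
{
	size_t limit = 256;
	/* reject negative counts first, then compare as unsigned */
	if (num < 0 || (size_t)num > limit)
		return NULL;
	return malloc((size_t)num * 64);
}

Same behavior as the original for every input, no warning, no hole. The correct fix requires thinking about why the warning fired, not just making it stop.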
thoughts
This is just a thought experiment, and you can dissect it with the razor of your choosing, but what I think is interesting is the paradox of plausibility. What happened? The most likely explanation is the mundane one: it was just an accident. People introduce bugs like this with alarming regularity. No reason to suspect foul play. But it’s the dependable regularity of such errors that makes the attack possible. If people didn’t introduce bugs fixing harmless warnings, the attack would never succeed.
(There was a concrete incident somewhat similar to this, although this is not meant to be a comment on any particular patch or fix.)