code integrity vs data security
Adding or improving support for TPM isn’t a bad idea. It does improve security by some metrics. However, there are various caveats and limitations, and it does not obviate the need for the other kinds of exploit mitigation. TPM has negatives as well, but I’m mostly interested in seeing where even the ideal TPM solution falls short.
TPM can be used to implement DRM. There are plenty of people who will be more than happy to explain how evil this is, but that’s not today’s topic.
TPM can be implemented poorly. If your secure enclave is really an insecure enclave, it’s not of much use. Keys can be extracted, misused, broken, etc. But let’s consider a good implementation, one that actually does what we want. What does it do for us?
When I boot my TPM-enabled computer (or other device), I know that all the code it’s running is code it’s supposed to be running. There are no hidden trojans or backdoors or other malware lurking about, because that code isn’t signed and won’t run. I clicked on a link and my computer started acting all wonky, so I reboot and now everything is good and clean. Until I click that link again, of course. Maybe someday I’ll learn.
Or maybe we’ve done the impossible and created a secure browser. All the links are safe. The problem is I download a pdf and double click it, but it’s really an exe. If it’s not signed, I won’t be able to run it. So that’s one attack vector sealed off. But let’s be real. 2019 is not the year of the secure browser.
So mostly we’re talking about persistence resistance. The attacker can pwn my computer, run their code, and generally cause mayhem, but they can’t apt get persistence. I can always reboot back into a state where the attacker is not on my system. When you buy a new PC, it comes with Secure Boot, which guarantees that Windows is Windows, but not that Windows is secure. I think there’s still room in the world for systems, and improvements to systems, that try to avoid exploitation in the first place.
We should also consider the distinction between code integrity and data security. Security isn’t a single thing; there are many dimensions: integrity, confidentiality, availability, authentication, etc. If I keep all my secret files on a CD-ROM, and you pwn my computer, you can’t modify or delete my files, but you can still read them, and I am still sad.
The code integrity offered by TPM isn’t much use here. Once the attacker compromises my system, they get to read and copy all my data. They get to hijack my webcam, and by the time I find out I’m a viral youtube sensation, it’s too late for a reboot to solve anything.
Consider the nuclear catastrophe that was the Equifax breach. As far as I’m aware, nothing about that required any persistence. Nothing a TPM would have prevented. There was a bug in struts, attackers used it to run some code, ran some code to copy some data, and that was game over. (There may have been some persistence involved, but it wasn’t essential to the task.) The fact that Equifax might be able to reboot to an uncompromised state is not much comfort.
Another point George made was that software should have a sell-by date, like milk. It expires and then you have to stop using it. Would that have helped? There’s a bug in struts, so the code authorities revoke its license and everybody’s server stops running until they patch.
Thus far, I’ve been implicitly assuming that the computer owner controls the keys. I can install anything I want, even with TPM, provided I sign it. But now we’re going to enforce expiry of outdated insecure software? That doesn’t actually work if I control the keys, because I’ll just keep resigning old versions until it’s convenient to upgrade. Code expiration only works if we cede signing operations to the Department of Information Control and Key Security.
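The resigning loophole can be sketched concretely. This is a toy model, not any real signing scheme: an HMAC stands in for a code signature, and the expiry policy and names are made up for illustration. The point is just that whoever holds the key decides what’s “current.”

```python
import hmac
import hashlib

# Toy model: owner-held signing key. HMAC stands in for real code signing.
OWNER_KEY = b"owner-held signing key"

def sign(binary: bytes, not_after: int) -> dict:
    """The owner signs a binary with an expiry timestamp baked in."""
    msg = binary + not_after.to_bytes(8, "big")
    return {"binary": binary, "not_after": not_after,
            "sig": hmac.new(OWNER_KEY, msg, hashlib.sha256).digest()}

def loader_allows(pkg: dict, now: int) -> bool:
    """The loader refuses unsigned or expired code."""
    msg = pkg["binary"] + pkg["not_after"].to_bytes(8, "big")
    expected = hmac.new(OWNER_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(pkg["sig"], expected) and now < pkg["not_after"]

old_version = b"struts-with-known-bug"
pkg = sign(old_version, not_after=1000)
assert not loader_allows(pkg, now=2000)  # expired: loader says no...
pkg = sign(old_version, not_after=3000)  # ...but I hold the key, so I re-sign
assert loader_allows(pkg, now=2000)      # same old insecure code runs again
```

Expiry enforced by a signature I can mint myself is just a reminder, not a policy.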
Now maybe if all those hospitals had been forced to patch before the release of WannaCry, it wouldn’t have been such a disaster, so there’s something to be said for do-or-die updates. I suspect this won’t be a popular initiative: nobody is even close to being able to provide updates with the required reliability, and the potential unintended consequences are pretty far reaching. Oculus tried it, and people were sad. Also, you’ll be prohibited from changing your computer’s time.
TPM does offer a feature that’s related to data security. You can store your encryption keys in the TPM, and then when your device is stolen, all the data is encrypted. Of course, you don’t need TPM to encrypt your data. It’s a little more convenient, but it’s not impregnable either. More importantly, in the context of this post and network attacks, it doesn’t do much. I have a Surface with BitLocker and TPM, etc., and you don’t need to saw it open to exfiltrate my data. Just send me a link to your browser exploit and read all the already decrypted data.
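The sealing idea can be modeled in a few lines. This is a conceptual sketch, not the real TPM API: the “PCR” measurement and seal/unseal names are simplified stand-ins. The disk key only comes back when the boot measurements match what it was sealed against.

```python
import hashlib

def measure(boot_chain: list) -> bytes:
    """Extend-style measurement: hash each boot component into a running digest."""
    pcr = b"\x00" * 32
    for component in boot_chain:
        pcr = hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()
    return pcr

def seal(key: bytes, expected_pcr: bytes) -> dict:
    return {"key": key, "pcr": expected_pcr}

def unseal(sealed: dict, current_pcr: bytes):
    """The 'TPM' releases the key only if the measurements match."""
    return sealed["key"] if current_pcr == sealed["pcr"] else None

good_boot = [b"firmware", b"bootloader", b"kernel"]
sealed = seal(b"disk encryption key", measure(good_boot))

assert unseal(sealed, measure(good_boot)) == b"disk encryption key"
assert unseal(sealed, measure([b"firmware", b"evil loader", b"kernel"])) is None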
I’m running out of room, so I’ll note in passing that other hardware devices exist. On many client devices, the most valuable data is credentials to other services. Moving that out of the filesystem, using U2F, FIDO, etc., can be a serious improvement. With some effort, you can use a hardware key for file encryption as well. There’s some overlap between TPM and HSM. The key requirement to add security here is that your hardware key (which might be integrated into the motherboard TPM) has an out of band interface to confirm user intention. If it’s completely automatic and transparent for the user, it will be the same for the attacker.
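That last requirement is worth making concrete. Below is a toy model, not a real U2F/FIDO API: the touch flag stands in for a physical button, and all the names are made up. It shows why a fully transparent key serves the attacker as well as the user.

```python
# Hypothetical hardware key. "touch_required" models an out-of-band
# physical confirmation the host software cannot fake.
class HardwareKey:
    def __init__(self, touch_required: bool):
        self.touch_required = touch_required
        self.user_touched = False  # set only by a physical button press

    def press_button(self):
        self.user_touched = True

    def sign(self, challenge: bytes):
        # Without confirmation, any code running on the host can request
        # signatures, including the attacker's code.
        if self.touch_required and not self.user_touched:
            return None
        self.user_touched = False  # each touch authorizes one operation
        return b"signed:" + challenge

transparent = HardwareKey(touch_required=False)
assert transparent.sign(b"attacker login") is not None  # malware wins too

confirmed = HardwareKey(touch_required=True)
assert confirmed.sign(b"attacker login") is None  # no touch, no signature
confirmed.press_button()
assert confirmed.sign(b"my login") == b"signed:my login"
```

The secrets still can’t be copied off the device either way; the button is what stops a compromised host from quietly using them on the attacker’s behalf.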
In short, TPM can help us achieve code integrity, but so long as people remain concerned about data security too, we’ll probably have to keep working on systems that try to achieve that as well.