near match fast lockout
My phone decided it didn’t like my face and wouldn’t let me log in. Unusually, instead of giving me some retries, it immediately locked me out, requiring a passcode. At first I thought this might be a security measure, but I’m pretty sure it was just a glitch. However, it’s an interesting possibility for an authorization system. Fast lockout after a near match.
Most authorization systems employ some combination of rate limiting and lockouts to thwart guessing attacks. The assumption is that there are so many possible guesses that each one has a negligible chance of success. Dictionary attacks against passwords probably undermine this assumption to some extent, but it generally holds. And there are weak passwords, like repeating the username. So some caveats. However, let’s assume we have a well behaved user. I roll some dice and get a totally random password, clamchowdertastesawful. You’re not going to guess that in three or ten tries, but I’m a sloppy typist, so maybe it takes a few retries to get it right. Allowing retries is user friendly without degrading security.
The second half of the assumption is that the attacker is guessing blindly. For passwords, we’ll let this slide. But what about biometrics? (Ignoring whether we think biometrics should be usernames or passwords, the reality is they’re used for authorization.) An attacker unlocking a phone probably won’t generate random faces and fingerprints. They’re going to start with some security cam video, social media photos, partial prints from the phone case, etc. This may not be accurate enough to succeed on the first try, but they can iterate.
Notably, the attacker can build their own (less accurate, more forgiving) pass/fail detector to screen attempts so that only viable guesses are attempted on the actual device. Can’t tell from a photo how pronounced the cheekbones are? Make ten masks in 1mm increments, but still verifying that the distance between the eyes is correct and the nose is in the right spot.
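Roughly what the attacker’s prefilter looks like, in python. Everything here is invented for the example, the feature names, the numbers, all of it:

    import itertools

    def candidate_masks(eye_mm, nose_mm, cheek_mm):
        # Sweep the uncertain measurements in 1mm steps.
        for eye, nose, cheek in itertools.product(eye_mm, nose_mm, cheek_mm):
            yield {"eye_distance": eye, "nose_offset": nose, "cheekbone_depth": cheek}

    def plausible(mask, known, tolerance_mm=1.0):
        # Forgiving local check: only the features we trust from the photos.
        return (abs(mask["eye_distance"] - known["eye_distance"]) <= tolerance_mm
                and abs(mask["nose_offset"] - known["nose_offset"]) <= tolerance_mm)

    known = {"eye_distance": 62.0, "nose_offset": 31.0}   # estimated from photos
    candidates = candidate_masks(range(58, 67), range(27, 36), range(8, 18))
    to_try = [m for m in candidates if plausible(m, known)]
    # Only the few masks that pass the cheap local filter get tried on the device.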
To thwart such an attack, we need to recognize a near match as an attack, at which point we do a fast lockout. Assume we unlock the phone if the face matches with 99.9% confidence. A stranger’s face will match with about 1% confidence. That’s ok, we’ll allow retries, so as to avoid annoying the real user when a friend picks up their phone to hand it over. But if the face matches with 99%, that’s bad. The real user matches better than that, and random faces match much less than that. This is an imperfect clone of the real user. Reject the login and immediately disable retries so that the attacker can’t improve the mask and try again. Most of the time when I get a reject, it’s because I’m holding the phone too close or it catches me looking sideways. I assume the match confidence in such cases is rather lower than 99%.
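In rough python, with the thresholds above, the decision looks something like this:

    # The three way decision. Thresholds as in the example above.
    UNLOCK = 0.999   # confident match, the real user
    ATTACK = 0.99    # too close to be a stranger, not close enough to be the user

    def check_face(confidence):
        if confidence >= UNLOCK:
            return "unlock"
        if confidence >= ATTACK:
            return "fast lockout"   # near match, likely a refined fake, no retries
        return "retry"              # stranger, bad angle, phone held too close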
Now, back to passwords and passcodes. I can very carefully enter my passcode with my phone held against my chest so nobody can see the screen, but somebody standing in front of me can still observe the extension of my thumbs to infer which part of the screen I’m aiming at. Or an attacker might use cool scifi techniques like spying on my keystroke timing with a microphone or mincore. This too allows them to build their own pass/fail filter. Their simulation might spit out clamchowdwersdtesawful as a first guess, but they’re closing in quickly. Or maybe it looks like I entered 123769 but the real code is 123469. Do we want to do a fast lockout? This is probably too user hostile. Real users are also likely to mistakenly enter near matches of their password. We can’t be sure it’s an attack.
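For a passcode, a near match check could be as simple as counting how many digit positions differ. A tiny sketch:

    # One way to define near match for a fixed length passcode: count the digit
    # positions that differ. 123769 vs 123469 differ in exactly one.
    def digits_off(attempt, real):
        return sum(a != b for a, b in zip(attempt, real))

    digits_off("123769", "123469")   # 1: suspiciously close, but also a plausible typo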
Checking for near matches requires we store the original secret data in a format that allows such comparisons. My understanding is that most biometric systems today already do fuzzy matching with some confidence level. So that’s not a problem.
Typical password hashing schemes do not support fuzzy matching. It’s cost prohibitive to generate and store all the possible variations. In some circumstances, though, near match detection may still be useful. I signed up for hotmail with password clamchowder1. I signed up for yahoo with clamchowder2. I signed up for gmail with clamchowder3. Can’t be repeating passwords. Old sites get breached, credentials get stuffed, crap.
Some sites are already pretty proactive about detecting guessing attacks. The security team at gmail can download the yahoo passwords and add them to a blacklist. When they see a login with password clamchowder2, lockout. But we can be even more proactive than that. A password like clamchowder3 plainly looks like an iteration of a base password. We can run the iteration backwards to generate preimages and add them to our blacklist preemptively, before we even learn about the yahoo breach. And forwards, to catch that time I totally did not sign up for addison mashley using clamchowder4.
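A rough sketch of running the iteration backwards and forwards at signup time, assuming the counter is a trailing number. The fast hash is just for brevity; a real deployment would want something slower:

    import hashlib
    import re

    def iteration_blacklist(password, spread=5):
        # If the password ends in a counter, hash the neighboring iterations.
        m = re.fullmatch(r"(.*?)(\d+)", password)
        if not m:
            return set()
        base, n = m.group(1), int(m.group(2))
        variants = {f"{base}{i}" for i in range(max(0, n - spread), n + spread + 1)}
        variants.discard(password)
        return {hashlib.sha256(v.encode()).hexdigest() for v in variants}

    blacklist = iteration_blacklist("clamchowder3")

    # Later, at login time:
    attempt = "clamchowder2"
    if hashlib.sha256(attempt.encode()).hexdigest() in blacklist:
        print("near match, lockout")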
Another possibility is to have a learning phase where we observe which typos and mistakes the user makes in practice. They are rejected but do not result in a lockout. I’m pretty sure about 90% of the times I fail to login to my laptop are the result of transposing the same two letters. (The other 10% are when I enter a password from another machine, not a near match at all.) If you were to generate guessed passwords from a tick-tack-tick-tock pattern, you’d likely generate some near matches that I’ve never actually typed. That’s one possible mitigation for the unfriendliness of this approach.
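A sketch of what the learning phase might look like, again assuming the secret is stored in a form we can compare against, and treating an adjacent transposition as the only kind of near match:

    import hashlib

    def transpositions(real):
        # Every string obtained by swapping two adjacent characters.
        return {real[:i] + real[i+1] + real[i] + real[i+2:] for i in range(len(real) - 1)}

    def h(s):
        return hashlib.sha256(s.encode()).hexdigest()

    learned_typos = set()   # hashes of near misses the real user has actually typed

    def handle_failure(attempt, real, learning):
        if attempt not in transpositions(real):
            return "retry"                    # nowhere close, ordinary failure
        if learning:
            learned_typos.add(h(attempt))     # record the user's habitual typo
            return "retry"
        if h(attempt) in learned_typos:
            return "retry"                    # familiar mistake, don't punish the user
        return "lockout"                      # a near match we've never seen: suspicious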
That might be something interesting to study. How often do sidechannel-derived candidate passwords resemble actual typos? I’m guessing that these are two different sets, but that’s intuition. I have no evidence. There is some prior work in the form of typo correction for passwords, but that’s the opposite of what we want.
Of course, I doubt any of this is worth the additional complexity except in very rare cases.