
polarizing parsers

The web as we know it will soon crash and burn in a fiery death. 12 days. There’s even a countdown. This is apparently a redux of request smuggling reborn. Request research reborn redux.

I was a little concerned, because I’ve got some HTTP/1.1 servers here, and oh no, what should I do? To very briefly summarize the original research (which is very interesting, especially some of the techniques for detecting and exploiting the flaw), they send an HTTP request with two Content-Length headers, and then the front proxy reads a certain amount of body while the backend server reads a different amount, so the leftover bytes get interpreted as the start of a new request. Chaos and madness ensue.
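
Roughly the shape of such a request, with made-up paths and lengths (the byte counts assume CRLF line endings). A front end that believes the 39 treats everything after the blank line as the body of one POST; a backend that believes the 5 stops after “hello” and treats the rest as the prefix of a second, smuggled request:

```
POST /update HTTP/1.1
Host: example.com
Content-Length: 39
Content-Length: 5

helloGET /admin HTTP/1.1
X: smuggled
```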

Can this happen here? My proxy is written in go, as are most of my servers. So the good news is they will probably parse all the requests the same way, but even so, I believe the design of my proxy makes this attack impossible. It reads a request with ReadRequest, takes a look at a few fields like Host and URL to make important decisions, and then writes it to the appropriate backend with Request.Write. Reviewing the source code, I see that special headers like Content-Length and Transfer-Encoding don’t even come from the normal header map.
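
Roughly the shape of that read/route/write pattern (a sketch, not the real proxy; the addresses and hostnames are made up, error handling is minimal, and it does one request per connection with no keep-alive):

```go
package main

import (
	"bufio"
	"log"
	"net"
	"net/http"
)

// handle parses one request from the client, picks a backend based on the
// parsed Host, and writes the parsed request to that backend.
func handle(conn net.Conn) {
	defer conn.Close()
	// Parse once. Whatever ReadRequest decides about lengths and hosts is
	// the only interpretation that exists from here on.
	req, err := http.ReadRequest(bufio.NewReader(conn))
	if err != nil {
		return
	}
	// Routing looks at the same parsed Host string that will later be
	// written back out on the wire.
	backend := "127.0.0.1:8080" // made-up default
	if req.Host == "blog.example.com" {
		backend = "127.0.0.1:8081" // made-up second backend
	}
	upstream, err := net.Dial("tcp", backend)
	if err != nil {
		return
	}
	defer upstream.Close()
	// Request.Write regenerates the request line, Host, Content-Length, and
	// Transfer-Encoding from the parsed Request fields rather than copying
	// raw header bytes from the client.
	if err := req.Write(upstream); err != nil {
		return
	}
	// Relay the backend's response to the client.
	resp, err := http.ReadResponse(bufio.NewReader(upstream), req)
	if err != nil {
		return
	}
	defer resp.Body.Close()
	resp.Write(conn)
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handle(conn)
	}
}
```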

Whatever the proxy request parser decides the request is, that’s what it becomes and that’s what the backend sees. PortSwigger also found a bypass of some poorly implemented access controls using two Host headers, which shouldn’t be possible. What I want to emphasize is that there’s no requirement that go follow any particular standard correctly. It can use the first header or the last header, but once the request has been parsed, it’s fixed. The logic that looks at the host to determine where this request is going next looks at exactly the same string that later gets put into the host header. That’s the only header that will appear on the wire. The parser has effectively polarized the input, and now everyone will see it the same way. The polarizer may be installed the wrong way around, but even so, the output is always coherent.
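
One way to watch the polarizer at work (a standalone experiment, not part of the proxy; the duplicate lengths are made up): feed Go’s parser an ambiguous request and print the single interpretation that comes out, or the rejection if it refuses to pick one.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// A request with two conflicting Content-Length headers.
	raw := "POST /submit HTTP/1.1\r\n" +
		"Host: example.com\r\n" +
		"Content-Length: 5\r\n" +
		"Content-Length: 11\r\n" +
		"\r\n" +
		"hello world"
	req, err := http.ReadRequest(bufio.NewReader(strings.NewReader(raw)))
	if err != nil {
		// The parser refused to pick a meaning at all.
		fmt.Println("rejected:", err)
		return
	}
	// Whatever it decided, this is the only view that exists now.
	fmt.Println("parsed ContentLength:", req.ContentLength)
	var out strings.Builder
	if err := req.Write(&out); err != nil {
		fmt.Println("write error:", err)
		return
	}
	// The re-serialized request is what a backend would actually see.
	fmt.Print(out.String())
}
```

Either outcome supports the point: the ambiguity dies at the parser, and only one coherent request can come back out.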

So what is Akamai (it’s always Akamai) doing that their proxy is putting invalid requests on the wire? Why are we blaming the protocol here, when it’s clear (to me) that the error is the proxy that sends invalid requests? If you put crap on the wire, that’s bad. If your supposed web firewall is the one putting crap on the wire, that’s really bad. Yes, someone somewhere has to deal with the crap input, but why is anything in your stack generating crap? That’s just deranged.

Anyway, 12 days to go, spooky countdown, and then I’ll find out if my proxy is irredeemably broken and needs to die, or if this is just Akamai screwing up again.

Posted 25 Jul 2025 19:40 by tedu Updated: 26 Jul 2025 00:31
Tagged: security web