
404 Found

When a web page (or other resource) cannot be found, a web server is supposed to return code 404, Not Found. Additionally, it can return some other content for a human viewer. And so, if you visit https://mastodon.social/honktime with a browser, you can watch a tooter tantrum, but requesting the same URL with curl displays < HTTP/2 404.
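To see just the status, you can ask curl for it, or reimplement the check in a few lines. A minimal sketch in Go, standing in for curl (not anything from the tools above):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	resp, err := http.Get(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	// the status code travels separately from the body; a browser
	// renders whatever body arrives, no matter what the status says
	fmt.Println(resp.Status)
	n, _ := io.Copy(io.Discard, resp.Body)
	fmt.Println(n, "bytes of body anyway")
}

Run against the mastodon URL above, it should print 404 Not Found followed by the size of the error page a browser would happily render.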

Normally, humans will recognize this as an error page, possibly by reading the “The page you are looking for isn’t here.” text at the bottom. But maybe they won’t. Consider https://firstlook.firefox.com/proxy/v4/. If you visit that page in a browser, you will see, after assorted scripts load, a page talking about something, which does not look like an error page. If you download it with curl, however, you can see a 404. The same result occurs with the OpenBSD ftp client.

Requesting https://firstlook.firefox.com/proxy/v4/
ftp: Error retrieving file: 404 Not Found

Is this an error page? (Some poking around leads me to believe the correct URL is https://firstlook.firefox.com/proxy/v4 without the trailing slash.)

effects

A few consequences I’ve noticed.

Normally browsers do not cache 404 results, so visiting the page twice downloads it again in full. When visiting the correct URL, the browser sends a conditional request and the server returns a 304 Not Modified status, skipping the data transfer.
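For the curious, the revalidation dance looks roughly like this. A sketch, again in Go, of what a browser does with the validators a cacheable response carries; a 404 page typically carries none, so there is nothing to send back:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	url := os.Args[1]
	// first fetch records the validators the server offers
	first, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	first.Body.Close()

	// second fetch echoes them back as conditional headers
	req, _ := http.NewRequest("GET", url, nil)
	if etag := first.Header.Get("ETag"); etag != "" {
		req.Header.Set("If-None-Match", etag)
	}
	if lm := first.Header.Get("Last-Modified"); lm != "" {
		req.Header.Set("If-Modified-Since", lm)
	}
	second, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	second.Body.Close()
	fmt.Println(first.Status, "then", second.Status)
}

On the correct URL this should print 200 OK then 304 Not Modified; on the 404 page, presumably 404 twice, with the whole body transferred both times.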

The link was fairly popular, being passed around by humans, which is how I found it. However, it’s probably invisible to search engines and other indexing software, which see the 404 error code.

I’m not sure how or why one would configure their web server to do this. Probably a bug, though it seems kinda bizarre. Web stuff is complicated.

It demonstrates the difficulty of making communication between computer and human meaningful to both. A lot like programming, in fact. There’s code, which the computer sees, and a comment explaining the code, which the human sees. What happens when they disagree?

We have introduced so many layers of abstract friendliness that even when something goes wrong, we fail to recognize it and treat it like a perfectly normal result. If browsers failed harder, 404 errors would be less friendly, but links like this would fail to propagate. The error would be noticed and corrected.

Posted 04 Jul 2019 18:55 by tedu Updated: 04 Jul 2019 23:21
Tagged: web