
the three line single binary compiler free blog

A silly experiment that quickly ended up deep in the rabbit hole.


The new hotness is a single binary blog, with all the posts embedded inside. There are a few approaches people have used to get the posts included in the final executable, but they all involve recompiling. Who has a compiler? That's so insecure. Hackers use compilers. Let's build a blog that doesn't require recompiling everything just because you noticed it's misspelled.

package main

import (
        "archive/zip"
        "net/http"
        "os"
)

func main() {
        z, _ := zip.OpenReader(os.Args[0])
        http.Handle("/", http.FileServer(http.FS(z)))
        http.ListenAndServe(":8080", nil)
}

Discounting boilerplate, that's only three lines. We re-open ourselves as a zip file, then serve that as a file system. This works because zip files keep their "header" (the central directory) at the end of the file. Essentially, we're building a self-extracting zip archive, except the extractor happens to have an http interface instead of a command line.

Assembling the final result requires only a few more commands.

go build selfie.go
cat selfie blog.zip > server
chmod u+x server

After that, we never need to recompile selfie. Whenever we update blog.zip, using the zip tools of our choice, we cat them together again, and now we have a new blog server with all our posts. (Fun tip: the > redirection doesn’t change file mode, so chmod is only necessary once.)



Or it would be, if it worked. If you try it, you’ll quickly discover all you get are “not a valid zip file” errors (or would, if I had included error checking) because go’s zip support is broken. It uses the offsets found in the directory header as offsets from the beginning of the file. This is embarrassing. Like the big deal with zip files is you can find them anywhere, even at the end of other files. Come on, go, get with the 90s.

This is not insurmountable. You have to calculate the difference between the actual zip header offset and where it thinks it is. Something like this, then add filestart to a few places.

d.filestart = directoryEndOffset - int64(d.directorySize+d.directoryOffset)

Now, we’re set, right? Haha, nope, we can open the zip file and successfully get 404 for files that don’t exist, but attempting to fetch a real post only returns “seeker can’t seek”. What does that even mean?

no seek for you

First, this can be worked around by writing our own http handler function instead of the builtin file server. So the idea works, and you can stop here without reading more about the horrors of mismatched interfaces.

        r, err := z.Open(req.URL.Path[1:])
        if err != nil {
                http.NotFound(w, req)
                return
        }
        defer r.Close()
        io.Copy(w, r)

But I want the elegance of a three line solution, dammit. So what’s the problem? Well, compressed data is not trivially seekable, so the files returned by our zip archive don’t implement the Seek method. Fair enough. But why do they need to?

The documentation for http.FileServer says that with the http.FS converter you can use fs.FS, which we’ve done, and that returns fs.File, which we implement: Stat, Read, and Close. zip.Reader is a fs.FS and returns fs.File from Open. fs.File just says “A file may implement io.ReaderAt or io.Seeker as optimizations.” So why aren’t we good? Because http.File includes io.Seeker. If you start going down the doc path towards http.FS, you may not ever read the documentation for this interface.

All there in the manual somewhere.

But wait, we open the http/fs.go source to see what’s really going on. That error message comes from http.ServeContent, which includes a much more thorough explanation of what Seek is used for and what happens when it doesn’t work, but there’s no indication that this is the function actually called by http.FileServer. Ugh.

But still, why this error message? We don’t implement Seek for zip files. Reading through more of http/fs.go, I would expect to hit the errMissingSeek case in ioFile.Seek. The error I expect us to be seeing is “io.File missing Seek method” not the one about seek failing. What Seek method is actually being called? I have no idea. I got lost at this point. Lost in a maze of wobbly types, all alike.

The good news is adding a simple Seek method is enough to get us going.

func (r *checksumReader) Seek(offset int64, whence int) (int64, error) {
        if whence == io.SeekEnd {
                return int64(r.f.FileHeader.UncompressedSize), nil
        }
        r2, _ := r.f.Open()
        rr := r2.(*checksumReader)
        *r = *rr
        return 0, nil
}

Did I say simple? I meant gross. Anyway, it works well enough for demo purposes. There's probably a better way to rewind, but this gets it done. And you do need to support Seek(0, 0) because http.ServeContent will read some of the file to sniff the content type, then rewind to read it again.

Anyway, with sufficient hacks in place, we finally have it. The three line single binary compiler free blog.


Go should fix support for embedded zip files.

Sometimes go’s wobbly types are convenient. You implement the methods you implement, and then consumers can turn that into interfaces if they desire. And then you pile on a few adapters to make everything match up. But man, when it goes wrong, it leaves a real mess. Trying to determine a priori whether you can use zip files as the file system for http.FileServer is a challenge worthy of a bad technical interview. A type system that actually enforced required methods instead of using introspection to find Seek methods in objects would have identified the problem at compile time. Not even a better type system, just an accurate type signature.

The use of Seek doesn't even seem necessary. The fs.File interface includes Stat, which can be used to obtain the size. And other code paths, the ones that simply write the http response, already buffer to sniff the content type. Range requests might fail without Seek, but they're rare. Better some support for unseekable files than none.

I think this code was written to use os.File, with all the required methods, but the requirements should have been relaxed when making it more generic. Unfortunately, instead of using the abstract interface that it claims to, the code still relies on something closer to the original concrete types. Using Stat would be better for real files anyway, one system call vs two to seek and rewind to get the size, and then one read syscall vs two reads and seek in the write path if you buffer the sniff.

These adapters seem to be the result of go's backwards compatibility promise. The http.FileSystem interface came first, but wasn't what they wanted the fs.FS interface to be, so now there are two interfaces. And a gap in between.

Posted 21 Apr 2022 08:17 by tedu Updated: 21 Apr 2022 08:17
Tagged: go programming