
books chapter nine

From card loaders to virtual servers.


Guy Steele can program in all the languages, but has no intuition whatsoever for infinite dimensional Banach spaces. Having learned to program on machines with only thousands of words of memory in the complete system, he too laments that you can no longer understand everything that’s happening in a computer. He mentions studying the disk routines, and while I can read the source, there’s quite a bit of it (even in OpenBSD, which has a really freaking small SCSI stack compared to other systems) and even below that, my SSD has more firmware than his first computer had memory. On the other hand, I don’t miss having to write a card loading routine that fits onto a single card and can only use instructions with 12 bit opcodes because that’s all that the card has room for, even though the computer has 16 bit instructions.

Speaking of ancient history, Guy learned by reading all the back issues of CACM, back when, as he puts it, it had “real technical content and was well worth reading.” Lots of issues had short articles about better hash functions or bit flipping tricks. Today we’d use blogs, but there are a great many more of them, and the curation leaves something to be desired. He read 15 years of journal by reading about an hour a day; imagine reading 15 years of blogs one hour at a time.

Guy was given the task of adding a cross platform filesystem API to Maclisp, which he did on a sort of working vacation at the summer cabin. Take six operating system manuals and a notebook out into the woods, come back a week later with the necessary code for each system. “It’s not as if I could Google something.” I think there’s a lesson here for developers, about getting free of the distractions of sitting by a terminal all the time, but also a test for system documentation. Given a task description, can I find and collect all the needed documentation and disappear, and then complete the task using only what I brought with me? We seem to be drifting away from a world where offline programming is even possible.

For a short while, Guy was the TECO wrangler. Everybody in the lab had a custom set of macros and keybindings, and nobody could use anybody else’s terminal. So he went around and asked everybody what features they needed, then came up with a single set of macros that would work for everyone.

For reference, Guy gave a talk at OOPSLA called “Growing a Language” about language design and evolution. Paper version. The short version is that you should be prepared for evolution. He contrasts the evolution of Scheme and Lisp: including only what everyone approves vs including anything that most people approve. When you sit on the admissions council, do you veto everything you don’t like, or just ignore it with the understanding that you also have features to include that aren’t universally liked? Guy seems to prefer the all inclusive approach, though I’m not sure I do.

On program correctness, be careful what you wish for. Guy gives an example of a function which sorts an address book. There’s a test which verifies the output is sorted to prove it works. Unfortunately, it works by deleting every entry except the first. The result is certainly sorted! A true test would specify that the output contains all the same entries as the input, etc., and that’s quite a bit more work. Come to think of it, I’ve seen exactly this in the wild. Not the data deleting sort function, but the weak test function. Scanning an array to verify entries are in order is insufficient. Something to think about. A lot of testing seems to verify trivial properties instead of correct operation.
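The weak test is easy to sketch. Here’s a hypothetical version in Python (the function names are mine, not the book’s): a test that only checks ordering happily blesses a “sort” that throws data away, while adding a permutation check catches it.

```python
from collections import Counter

def buggy_sort(entries):
    # "sorts" by deleting everything but the first entry
    return entries[:1]

def is_sorted(entries):
    return all(a <= b for a, b in zip(entries, entries[1:]))

def real_test(sort_fn, entries):
    # output must be ordered AND a permutation of the input
    out = sort_fn(list(entries))
    return is_sorted(out) and Counter(out) == Counter(entries)

book = ["carol", "alice", "bob"]

assert is_sorted(buggy_sort(book))      # the weak test passes the buggy sort
assert not real_test(buggy_sort, book)  # the permutation check catches it
assert real_test(sorted, book)          # a real sort passes both
```

The permutation check is the part people skip, because comparing multisets is more work than scanning for order, and that’s exactly the point.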

Guy was debugging some numerical routines. They were widely used, and yet there was a bug. He correctly concluded that if the bug was rare, it must be in the rarely executed parts of the code. He fixed it, only to get another report of incorrect results a week later. There was another bug in the same spot.

If Guy had a time machine, he’d go back and teach people not to use their thumbs for counting.


It’s mailing list week.

Mark Fletcher founded ONElist, an email list service, and then Bloglines, a news aggregator. I guess they were a big deal, but I can’t recall having heard of either. When they were acquired (by Yahoo and Ask, respectively), ONElist had a million users and no press while Bloglines had few users and tons of press. Apparently Bloglines was in the WSJ on a weekly basis? Like other startups, both grew out of personal projects. Mark was tired of manually clicking a hundred links on his bookmarks page.

Observation regarding acquisition: the price you get for a web company has no connection to any assets. It’s all about the eyeballs. And so in a way, spending money to buy physical goods like servers is a waste because that doesn’t have any effect on your exit value. Kinda sad I think. Despite Fletcher’s advice to use virtual servers for everything, he relates a story where ONElist was growing too fast for their host, who was struggling to keep the database up. It’s easy for a vendor to promise the world, but it’s just as easy for a customer to outstrip their ability to deliver.

Next up is that other email list guy, Craig Newmark. Here’s a man who values simplicity. “I was, in fact, using Pine as my database tool until late ‘99, at which point we switched to MySQL.” I think more people should investigate the possibility of using pine as a nosql database.

Craigslist stands in remarkable contrast to every other startup, but the interview with Craig isn’t too exciting by itself. Anybody who’s been to the site knows how it works. You can get a lot done, and serve a lot of users, by keeping things simple. And if you have a small staff, you don’t need to sell tons of banner ads.


The documentary hypothesis is that every project requires some important documents, so you’d better have them. Estimates and forecasts sound like boring MBA work, but the point is to force technical decisions to happen early instead of delaying them. Brooks relates a story about a computer where every six months they reversed a decision about a cost vs performance tradeoff. (Whether the instruction counter lived in transistors or memory, back when that was a decision one could make.) So documentation can’t by itself save you from changing direction, but hopefully at least you’re aware of what’s happening.

Documentation also serves to identify miscommunication and hidden assumptions. I say “let’s get burritos” and you say “best burritos for sure” and then an hour later I’m at Bobby’s Burritos and you’re at Billy’s Burritos because we both agree that we obviously want the best burritos, but didn’t realize we don’t agree where that is. That’s a short chapter, let’s throw it away.

The next chapter up is about planning to throw one away. When chemical engineers build large factories, they start by building a smaller factory to verify that the industrial processes work outside the lab, but even so the pilot plant isn’t designed for full scale production. You learn what works and what doesn’t by building it, then you take that knowledge and use it to build the real thing. And so it goes for software. Brooks’s advice: plan to throw one away; you will, anyhow.

We return to the subject of corporate management, and the importance of keeping talented people on the technical track instead of moving into management. A big part of this is maintaining prestige among the technical positions. Moving to management should not be considered a promotion. We also revisit the surgical team idea. By organizing and attaching support staff to the head programmer, we make clear that this is an important role. I thought this was a really great point. Typically the support staff all report to the manager, and thus power collects there.

Nothing is ever done. After a program ships, it goes into maintenance mode where bugs are fixed and missing features added. Typically this is done by less experienced programmers, however, leading to the phenomenon of two steps forward and one step back. Every bug fixed has a chance of introducing a new bug. There’s rarely anybody in charge of the whole operation, and so these little repairs slowly wear down the overall structure of the design. Eventually we are reduced to a holding pattern of one step forward and one step back. Time to throw it away. As Pascal (the thinker, not the language) put it, “Things are always at their best in the beginning.”


Don’t just code; think.

31. Programming by coincidence leads to inevitable failure. It’s pretty easy to get a program to work for certain inputs and stop there, without actually understanding how it works. Of course, this means that it probably doesn’t work. Not always. If we keep building on top of this unstable foundation, we’re in for some bad times.

32. Estimate your algorithms. Learn the big O and use it. Curiously, they mention that quicksort is “technically” n^2, but mostly ignore the consequences of this, although the study of adversarial inputs is somewhat more recent. I’d add that estimating how well things should work isn’t done until we’ve estimated how poorly things can work.
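That “technically” is easy to demonstrate. A small Python sketch (my own toy, assuming the naive first-element pivot, not anything from the book): on already sorted input every partition is maximally lopsided, so the comparison count hits n(n-1)/2, while a shuffled input stays near n log n.

```python
import random

def qsort(xs, count):
    # quicksort with the naive first-element pivot, counting comparisons
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    count[0] += len(rest)  # pivot is weighed against every remaining element
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return qsort(left, count) + [pivot] + qsort(right, count)

def comparisons(xs):
    count = [0]
    qsort(xs, count)
    return count[0]

n = 500
worst = comparisons(list(range(n)))   # sorted input: the adversarial case
shuffled = list(range(n))
random.shuffle(shuffled)
typical = comparisons(shuffled)

print(worst)    # n*(n-1)/2 = 124750
print(typical)  # roughly n log n, a few thousand
```

Sorted input is hardly exotic, which is why real implementations pick pivots more carefully.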


In previous chapters, we’ve been working with 8 bit adders and such. Why 8? Well, if you’ve been around computers before, you may know that bytes are 8 bits (at least today). So our little toys are relevant to real computers. But why 8? Well, 256 values is enough to store the letters of most alphabets, or create enough gray scale for images to look nice. Also because it’s handy for BCD, whatever that is, which we’re going to ignore until a much later chapter. But really, why 8?

This kind of crosses into a discussion of half bytes, nibbles. And we revisit octal, that funny number system that doesn’t have thumbs. Alas, if you try to express a 16 bit number in octal, you can’t use string concatenation. How do we turn 0263 and 0305 into 0131705? Unfortunately octal doesn’t split a byte into uniform groups of bits. And thus, the nibble.
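Checking the book’s arithmetic in Python: an octal digit covers 3 bits, so a byte’s 8 bits split unevenly and the octal strings for two bytes don’t glue together, while hex digits cover 4 bits (one nibble each) and do.

```python
hi, lo = 0o263, 0o305        # two byte values, written in octal
word = hi * 256 + lo         # glue them into a 16 bit value

print(oct(word))             # 0o131705, not 263 stuck onto 305
print(hex(hi), hex(lo))      # 0xb3 0xc5
print(hex(word))             # 0xb3c5: the hex digits simply concatenate
```

Two hex digits per byte, every time, which is the whole sales pitch for hexadecimal.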

To express a nibble, we need base 16 numbers. Enter the hexadecimal. The only trouble is we have just ten digits, so we’re going to have to invent some more: hat, football, donut, cat, moon, knife. Most of the time we’ll just use shorthand, however: A, B, C, D, E, F. Now we can write numbers like 9A48Ch, using the blasphemous h suffix.

At last we have a convenient means of expressing byte values. We can string together any number of hex digits, where each byte is two hex digits. To return to the question of why 8 bits, this works out quite well, but doesn’t really answer the question. We could have used 9 bit bytes and expressed every byte as three octal digits. What a world that would be.

We end the chapter with another hint that 99h could actually be 99 in decimal. But this madness will wait until a later chapter.


Thumbs, man. What are they even good for? Base ten numbers are just the worst.

Posted 18 Aug 2017 17:13 by tedu Updated: 18 Aug 2017 17:13
Tagged: bookreview