
rebound relationship with pledge

There are a few simple-seeming functions in libc which turn out to be quite complicated under the hood. You call a function expecting a simple result, but then libc runs off and opens a bunch of sockets to remote servers and chats them up. This makes setting up a robust sandbox difficult and error prone. (Wait, what? Why is ls creating sockets???)

files

When everything is a file, then everything you do requires a file. And from the outside, at a certain distance, everything you do looks about the same. One obvious, but somewhat unorthodox, solution is to stop using files.

An early problem addressed was syslog. When something bad happens, it’s important to log it. We’d like this event to be logged even when /dev/log may not be available, such as in a chroot or when a program has run out of file descriptors. Kind of sucks not to be able to log “can’t open any more files” because you can’t open the log. Sometimes a program can compensate by calling openlog, but some programs don’t know that they are syslog consumers. Every program compiled with stack smashing protection (every program) will use syslog to record the stack smash. Modifying every program to ensure this works reliably is clearly untenable. Instead, a new system call, sendsyslog, was introduced which does not require opening any files.
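A minimal sketch of the idea. This uses the three-argument form that sendsyslog later grew; the original call took only the message and its length.

```c
#include <sys/syslog.h>

#include <string.h>

/*
 * Log a message without opening /dev/log or consuming a file
 * descriptor. The kernel relays the message to syslogd directly,
 * so this works in a chroot or with the descriptor table full.
 */
int
logit(const char *msg)
{
	return sendsyslog(msg, strlen(msg), LOG_CONS);
}
```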

A little farther back, getentropy was introduced for similar reasons, improving the reliability of arc4random.
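Same pattern: the kernel hands over seed material directly, no file descriptor needed. A sketch:

```c
#include <unistd.h>

#include <err.h>
#include <stdint.h>

/*
 * getentropy(2) fills the buffer straight from the kernel, so it
 * cannot fail the way opening /dev/urandom can in a chroot or an
 * fd-exhausted process. Requests are capped at 256 bytes.
 */
void
seedrng(void)
{
	uint8_t seed[40];

	if (getentropy(seed, sizeof(seed)) == -1)
		err(1, "getentropy");
	/* feed seed into the generator's rekey step... */
}
```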

Returning to the subject at hand, getaddrinfo and all its many friends that implement DNS resolution are among those functions which open sockets. This may seem inconsequential at first, but there are a number of programs which we may not want talking back to the outside world, exfiltrating our data. tcpdump is a solid example. It’s had a history of parsing errors, so it’s a clear target for attack. Especially when reviewing previously recorded packet dumps, however, it should have no need for sockets. Except for DNS.
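For reference, the call in question. Nothing about its interface suggests that network traffic results, which is exactly the problem for a sandbox:

```c
#include <sys/types.h>
#include <sys/socket.h>

#include <netdb.h>
#include <string.h>

/*
 * Looks like a pure name-to-address conversion, but libc may open
 * UDP sockets and exchange packets with the nameservers listed in
 * resolv.conf before this returns.
 */
struct addrinfo *
lookup(const char *host)
{
	struct addrinfo hints, *res;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family = AF_UNSPEC;
	hints.ai_socktype = SOCK_STREAM;
	if (getaddrinfo(host, NULL, &hints, &res) != 0)
		return NULL;
	return res;
}
```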

rebound

I’ve been wishing for a local DNS cache for some time. As have others. There are even patches to integrate unbound with dhclient. But this has never panned out. In part, unbound is a large daemon, and it does a lot of things. It’s included in base, but not enabled by default. Promoting it to not just default enabled, but default required, would mean suddenly a lot more users would have to acquaint themselves with all of its many options. Stall.

Approaching this problem from the other side, pledge (née tame) offers a “dns” request which was initially more or less an alias for “inet”. But if the kernel were to have a little more insight into what a DNS socket looked like, it could lock down things a bit more. And now we have another reason to always run a local recursive server. The end user stub resolvers would talk to this server via known channels, and then only this one server would be granted outbound socket access.
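In a program like tcpdump, that might look like the sketch below, using the two-argument prototype pledge eventually settled on (the second argument is unused here):

```c
#include <unistd.h>

#include <err.h>

int
main(void)
{
	/* open the savefile and finish setup while unrestricted... */

	/*
	 * From here on, only stdio and the DNS channel are allowed.
	 * With a local resolver on a known socket, "dns" can be far
	 * narrower than granting full "inet" access.
	 */
	if (pledge("stdio dns", NULL) == -1)
		err(1, "pledge");

	/* ...then parse packets, resolving addresses as needed */
	return 0;
}
```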

Enter rebound, a simple DNS proxy. rebound (currently) listens on localhost:53 just like any other DNS server. Instead of answering queries, however, it simply forwards them to another server. rebound’s understanding of DNS packets is actually quite limited.

Following the game plan started with sendsyslog, two new syscalls dnssocket and dnsconnect were added and then libc converted to use them. So far, everything works more or less the way it used to, in large part to make transitioning easy. In the future, further refinement and restriction of the interface may be possible. And of course, as my punishment for describing a work in progress, it changes again.

internals

A few notes about how rebound works internally. Unlike a true recursive DNS cache, rebound is only a proxy. It is not capable of answering requests itself. On the bright side, not parsing DNS packets means there cannot be bugs in the parsing code. Har.

There’s a parent process which mostly hangs around, and a forked worker process. The worker process drops as many privileges as possible and then loops handling requests. The parent process really only exists to handle HUP signals, which require reopening the config file. rebound doesn’t use the same privsep framework as several other OpenBSD daemons because there’s no need for privileged operations later.
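In outline, something like the following sketch. This is not the actual rebound source; the structure and names (workerloop, the restart logic) are illustrative.

```c
#include <sys/wait.h>

#include <err.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t gothup;

static void
huphandler(int sig)
{
	gothup = 1;
}

static void
workerloop(void)
{
	/* chroot, drop privileges, then serve requests forever */
	for (;;)
		pause();
}

int
main(void)
{
	struct sigaction sa;
	pid_t child;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = huphandler;	/* no SA_RESTART: let waitpid fail */
	sigaction(SIGHUP, &sa, NULL);

	for (;;) {
		/* (re)read the config file while still privileged */
		switch ((child = fork())) {
		case -1:
			err(1, "fork");
		case 0:
			workerloop();
			_exit(1);
		}
		/* parent hangs around until the worker dies or HUP */
		while (waitpid(child, NULL, 0) == -1 && !gothup)
			;
		if (gothup) {
			gothup = 0;
			kill(child, SIGTERM);
			waitpid(child, NULL, 0);
		}
	}
}
```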

Most DNS requests are single UDP packets. This makes proxying them very simple. Read one packet with recv from the local socket. Write one packet with send on the upstream socket. Read one reply. Write one reply. We don’t even need to look at the packet contents unless we’re interested. The TCP case is only slightly more complicated because we may need to make multiple read/write calls per request. Fortunately, socket splicing pushes all of that work into the kernel; we only need to handle initiating the connection.
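Stripped to its essence, that looks like the sketch below (the real daemon multiplexes many clients with an event loop rather than blocking on one query at a time):

```c
#include <sys/types.h>
#include <sys/socket.h>

/*
 * One UDP query, proxied. 'local' is bound to 127.0.0.1:53;
 * 'upstream' is a socket connected to the real nameserver.
 */
static void
proxyquery(int local, int upstream)
{
	unsigned char buf[65536];
	struct sockaddr_storage from;
	socklen_t fromlen = sizeof(from);
	ssize_t n;

	n = recvfrom(local, buf, sizeof(buf), 0,
	    (struct sockaddr *)&from, &fromlen);
	if (n <= 0)
		return;
	send(upstream, buf, n, 0);
	n = recv(upstream, buf, sizeof(buf), 0);
	if (n <= 0)
		return;
	sendto(local, buf, n, 0, (struct sockaddr *)&from, fromlen);
}

/*
 * The TCP case: once both connections exist, SO_SPLICE tells the
 * kernel to shovel bytes between them until EOF. Userland never
 * touches the data again.
 */
static void
splicetcp(int client, int server)
{
	setsockopt(client, SOL_SOCKET, SO_SPLICE, &server, sizeof(server));
	setsockopt(server, SOL_SOCKET, SO_SPLICE, &client, sizeof(client));
}
```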

Simple caching is very easy to add. Memorize the request, then remember the response. If we see an identical request repeated, send the matching response. We don’t want to serve stale responses, so a fairly short 10 second timeout is used. Technically, the response’s TTL field should be consulted, but in practice 10 seconds is already much shorter than the minimum TTL imposed by many other servers. TCP isn’t cached at all. Many of the heaviest resolver users (including tcpdump, as it happens) have their own caching layer, too, so the rebound cache probably won’t see much utilization, but it can always get better over time.
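A sketch of what such a cache might look like (hypothetical; the real implementation differs). One detail matters: the two-byte ID at the front of the packet is random per query, so matching has to skip it, and the cached response must have its ID rewritten before being sent back.

```c
#include <string.h>
#include <time.h>

#define CACHELIFE	10	/* seconds; deliberately short */
#define NCACHE		256

struct cacheent {
	unsigned char req[512];		/* question, minus the 2 byte ID */
	size_t reqlen;
	unsigned char resp[512];
	size_t resplen;
	time_t ts;
};

static struct cacheent cache[NCACHE];

/* find a cached response for an identical, recent request */
static struct cacheent *
cachelookup(const unsigned char *pkt, size_t len)
{
	time_t now = time(NULL);
	size_t i;

	if (len < 2 || len - 2 > sizeof(cache[0].req))
		return NULL;
	for (i = 0; i < NCACHE; i++) {
		struct cacheent *e = &cache[i];

		if (e->resplen == 0 || now - e->ts > CACHELIFE)
			continue;
		if (e->reqlen == len - 2 &&
		    memcmp(e->req, pkt + 2, len - 2) == 0)
			return e;	/* caller rewrites the ID */
	}
	return NULL;
}
```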

The one wrinkle rebound adds is randomizing ID numbers using a sliding shuffle technique. In theory, this should work just a wee bit better than the partitioning technique used by libc. There’s only one rebound daemon running, so the higher setup cost is justified. Mostly it’s a proof of concept that we are capable of inspecting and modifying packets if necessary.
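One way to implement such a scheme, sketched below: deal out all 65536 IDs like a deck of cards, doing the Fisher-Yates swaps incrementally as IDs are consumed, so each value appears exactly once per pass. (The actual sliding shuffle in rebound may differ in its details.)

```c
#include <stdint.h>
#include <stdlib.h>	/* arc4random_uniform */

#define NIDS	65536

static uint16_t ids[NIDS];
static uint32_t idpos = NIDS;	/* empty deck forces initial fill */

static uint16_t
nextid(void)
{
	uint32_t j;
	uint16_t t;

	if (idpos == NIDS) {
		for (j = 0; j < NIDS; j++)
			ids[j] = j;
		idpos = 0;
	}
	/* incremental Fisher-Yates: swap a random remaining ID forward */
	j = idpos + arc4random_uniform(NIDS - idpos);
	t = ids[j];
	ids[j] = ids[idpos];
	ids[idpos++] = t;
	return t;
}
```

The 128KB table is the higher setup cost; the payoff is that no ID repeats within a pass, unlike independently random draws which collide after a few hundred queries.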

future

rebound is more than a placeholder, but the current architecture was explicitly designed so that it could be a drop-in replacement for people using unbound. Or vice versa, so that people using unbound can continue to do so. Which is why it speaks DNS over UDP sockets. But perhaps eventually more of libc will move into rebound so that individual programs carry less of the responsibility.

As alluded to in the first paragraph, YP systems have many of the same issues. Maybe rebound, or something like it, will learn a bit about that too.

Solaris had a concept of doors, which programs used to communicate with system daemons. It solves some of the same issues, but as a generic mechanism, is itself pretty complex.

Posted 19 Oct 2015 15:36 by tedu Updated: 20 Oct 2015 18:56
Tagged: openbsd software