
where do the bytes go?

Or perhaps more precisely, how do they get there? What happens when you call write?

trap

The write function in libc sets things up and makes the system call, which probably enters somewhere in locore.s like Xsyscall_meltdown, but eventually we end up in syscall in arch/amd64/amd64/trap.c. Or we might be in svc_handler in arch/arm64/arm64/syscall.c. But at this point, the code should look pretty similar. The tortured mechanics of the syscall ABI are more tedious than interesting.

    uvmexp.syscalls++;

    code = frame->tf_rax;
    args = (register_t *)&frame->tf_rdi;

    if (code <= 0 || code >= SYS_MAXSYSCALL)
        goto bad;
    callp = sysent + code;

    rval[0] = 0;
    rval[1] = 0;

    error = mi_syscall(p, code, callp, args, rval);

We check the syscall code to make sure it’s within bounds, do some minor accounting, and finally dive into the machine independent syscall handler, mi_syscall from sys/syscall_mi.h.

There’s a full page of code just checking for debug flags and trace points and stack validity and also pledge and pins. The bytes aren’t really going anywhere while this is happening.
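As an aside, the pledge_syscall check below is the enforcement half of pledge(2). The userland half, for anyone who hasn’t seen it, looks something like this minimal example (not part of the write path itself):

    #include <unistd.h>
    #include <err.h>

    int
    main(void)
    {
        /* promise to use only basic stdio facilities */
        if (pledge("stdio", NULL) == -1)
            err(1, "pledge");

        /* write is covered by "stdio", so this is allowed */
        write(STDOUT_FILENO, "hello\n", 6);

        /* a syscall outside the promises would kill the process */
        return 0;
    }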

mi_syscall prolog
static inline int
mi_syscall(struct proc *p, register_t code, const struct sysent *callp,
    register_t *argp, register_t retval[2])
{
    uint64_t tval;
    int lock = !(callp->sy_flags & SY_NOLOCK);
    int error, pledged;

    /* refresh the thread's cache of the process's creds */
    refreshcreds(p);

#ifdef SYSCALL_DEBUG
    KERNEL_LOCK();
    scdebug_call(p, code, argp);
    KERNEL_UNLOCK();
#endif
    TRACEPOINT(raw_syscalls, sys_enter, code, NULL);
#if NDT > 0
    DT_ENTER(syscall, code, callp->sy_argsize, argp);
#endif
#ifdef KTRACE
    if (KTRPOINT(p, KTR_SYSCALL)) {
        /* convert to mask, then include with code */
        ktrsyscall(p, code, callp->sy_argsize, argp);
    }
#endif

    /* SP must be within MAP_STACK space */
    if (!uvm_map_inentry(p, &p->p_spinentry, PROC_STACK(p),
        "[%s]%d/%d sp=%lx inside %lx-%lx: not MAP_STACK\n",
        uvm_map_inentry_sp, p->p_vmspace->vm_map.sserial))
        return (EPERM);

    if ((error = pin_check(p, code)))
        return (error);

    pledged = (p->p_p->ps_flags & PS_PLEDGE);
    if (pledged && (error = pledge_syscall(p, code, &tval))) {
        KERNEL_LOCK();
        error = pledge_fail(p, error, tval);
        KERNEL_UNLOCK();
        return (error);
    }


Finally, we’re ready to get down to business.

    error = (*callp->sy_call)(p, argp, retval);

See you in sys_write.

sys_write

Several “generic” syscalls live in kern/sys_generic.c. read, write, select, poll, ioctl. Not really generic when you think about it, but nevertheless.

sys_write doesn’t do a whole lot by itself. Just repacking the arguments to share code with sys_writev. Every system call has an args struct, which formats the register array. It’s accessed via the SCARG macro for hysterical reasons.
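For the curious, the machinery behind SCARG lives in sys/syscallargs.h. Each argument gets padded out to register width, and SCARG plucks the datum from the correct end depending on endianness. Abridged, roughly:

    #define syscallarg(x)                                   \
        union {                                             \
            register_t pad;                                 \
            struct { x datum; } le;                         \
            struct { /* padding on the left, elided */      \
                x datum;                                    \
            } be;                                           \
        }

    /* on little endian machines */
    #define SCARG(p, k)     ((p)->k.le.datum)

Anyway, here’s sys_write itself: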

int
sys_write(struct proc *p, void *v, register_t *retval)
{
    struct sys_write_args /* {
        syscallarg(int) fd;
        syscallarg(const void *) buf;
        syscallarg(size_t) nbyte;
    } */ *uap = v;
    struct iovec iov;
    struct uio auio;
    
    iov.iov_base = (void *)SCARG(uap, buf);
    iov.iov_len = SCARG(uap, nbyte);
    if (iov.iov_len > SSIZE_MAX)
        return (EINVAL);

    auio.uio_iov = &iov;
    auio.uio_iovcnt = 1;
    auio.uio_resid = iov.iov_len;
    
    return (dofilewritev(p, SCARG(uap, fd), &auio, 0, retval));
}
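
The containers we just filled are worth a quick look, abridged from sys/uio.h:

    struct iovec {
        void    *iov_base;
        size_t   iov_len;
    };

    struct uio {
        struct iovec *uio_iov;      /* array of buffers */
        int     uio_iovcnt;         /* how many of them */
        off_t   uio_offset;         /* position in the file */
        size_t  uio_resid;          /* bytes remaining */
        enum    uio_seg uio_segflg; /* userspace or kernel? */
        enum    uio_rw uio_rw;      /* which direction? */
        struct  proc *uio_procp;
    };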

Now that we’ve repacked the userland arguments into an io vector, and tucked that inside a user io request, we’re ready to proceed. Much like the functions before it, dofilewritev consists of more tracing and accounting code than “real” work. The interesting bits are probably getting the file for the file descriptor and making sure it’s writable, and jumping into the actual file op.

    if ((fp = fd_getfile_mode(fdp, fd, FWRITE)) == NULL)
        return (EBADF);

    error = (*fp->f_ops->fo_write)(fp, uio, flags);

We could go a number of places from here, but for a regular file, it’s going to be vn_write.

vn_write

There are lots of fileops structures in the kernel, but only one vnops, living in kern/vfs_vnops.c. It’s used not just for regular files, but for anything that may be accessed via the filesystem; sockets and such have their own fileops.
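For comparison’s sake, here’s roughly what that table looks like (abridged; consult the source for the current lineup):

    const struct fileops vnops = {
        .fo_read        = vn_read,
        .fo_write       = vn_write,
        .fo_ioctl       = vn_ioctl,
        .fo_kqfilter    = vn_kqfilter,
        .fo_stat        = vn_statfile,
        .fo_close       = vn_closefile,
        .fo_seek        = vn_seek,
    };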

vn_write is going to check a bunch of flags and translate between regimes. We still haven’t done anything with the bytes. They’re in the pointer in the iov in the uio. Technically, still in userland, even.

    /* note: pwrite/pwritev are unaffected by O_APPEND */
    if (vp->v_type == VREG && (fp->f_flag & O_APPEND) &&
        (fflags & FO_POSITION) == 0)
        ioflag |= IO_APPEND;
    if (fp->f_flag & FNONBLOCK)
        ioflag |= IO_NDELAY;
    if ((fp->f_flag & FFSYNC) ||
        (vp->v_mount && (vp->v_mount->mnt_flag & MNT_SYNCHRONOUS)))
        ioflag |= IO_SYNC;

And, another indirection.

    error = VOP_WRITE(vp, uio, ioflag, cred);

VOP_WRITE is a fancy argument packer and wrapper for one more line living in kern/vfs_vops.c. The ancients whisper of a time when there was even more going on here.
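The packing, roughly:

    struct vop_write_args a;
    a.a_vp = vp;
    a.a_uio = uio;
    a.a_ioflag = ioflag;
    a.a_cred = cred;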

    return ((vp->v_op->vop_write)(&a));

There are many vops, but we’re headed for ffs_write.

ffs_write

In contrast to a fileops struct, there are lots of functions packed into vops. ffs_vops in ufs/ffs/ffs_vnops.c provides some idea. We confirm our next stop is ffs_write.

ffs_vops
const struct vops ffs_vops = {
    .vop_lookup = ufs_lookup,
    .vop_create = ufs_create,
    .vop_mknod  = ufs_mknod,
    .vop_open   = ufs_open,
    .vop_close  = ufs_close,
    .vop_access = ufs_access,
    .vop_getattr    = ufs_getattr,
    .vop_setattr    = ufs_setattr,
    .vop_read   = ffs_read,
    .vop_write  = ffs_write,
    .vop_ioctl  = ufs_ioctl,
    .vop_kqfilter   = ufs_kqfilter,
    .vop_revoke = vop_generic_revoke,
    .vop_fsync  = ffs_fsync,
    .vop_remove = ufs_remove,
    .vop_link   = ufs_link,
    .vop_rename = ufs_rename,
    .vop_mkdir  = ufs_mkdir,
    .vop_rmdir  = ufs_rmdir,
    .vop_symlink    = ufs_symlink,
    .vop_readdir    = ufs_readdir,
    .vop_readlink   = ufs_readlink,
    .vop_abortop    = vop_generic_abortop,
    .vop_inactive   = ufs_inactive,
    .vop_reclaim    = ffs_reclaim,
    .vop_lock   = ufs_lock,
    .vop_unlock = ufs_unlock,
    .vop_bmap   = ufs_bmap,
    .vop_strategy   = ufs_strategy,
    .vop_print  = ufs_print,
    .vop_islocked   = ufs_islocked,
    .vop_pathconf   = ufs_pathconf,
    .vop_advlock    = ufs_advlock,
    .vop_bwrite = vop_generic_bwrite
};


It’s taken a while to get here, and there hasn’t been much to see apart from some error checking and mechanical translations, but that’s about to change. We’re going to make some decisions, consequential decisions, and even more exciting, the bytes are going to move. The declarations block alone is longer than some of the functions we’ve gone through. This is serious.

    struct vop_write_args *ap = v;
    struct vnode *vp;
    struct uio *uio;
    struct inode *ip;
    struct fs *fs;
    struct buf *bp;
    daddr_t lbn;
    off_t osize;
    int blkoffset, error, extended, flags, ioflag, size, xfersize;
    size_t resid;
    ssize_t overrun;

There’s some more checking of flags and sizes (elided), and now we’re at the work loop.

    for (error = 0; uio->uio_resid > 0;) {
        lbn = lblkno(fs, uio->uio_offset);
        blkoffset = blkoff(fs, uio->uio_offset);
        xfersize = fs->fs_bsize - blkoffset;
        if (uio->uio_resid < xfersize)
            xfersize = uio->uio_resid;
        if (fs->fs_bsize > xfersize)
            flags |= B_CLRBUF;
        else
            flags &= ~B_CLRBUF;

        if ((error = UFS_BUF_ALLOC(ip, uio->uio_offset, xfersize,
             ap->a_cred, flags, &bp)) != 0)
            break;
        if (uio->uio_offset + xfersize > DIP(ip, size)) {
            DIP_ASSIGN(ip, size, uio->uio_offset + xfersize);
            uvm_vnp_setsize(vp, DIP(ip, size));
            extended = 1;
        }
        (void)uvm_vnp_uncache(vp);

        size = blksize(fs, ip, lbn) - bp->b_resid;
        if (size < xfersize)
            xfersize = size;

        error = uiomove(bp->b_data + blkoffset, xfersize, uio);

There’s quite a bit going on here, but the short version is we’re setting up a reasonable transfer size, and getting a buffer from the cache for the correct disk location. That’s a pretty substantial side quest. The curious can review the allocation algorithms in ufs/ffs/ffs_balloc.c. For now, we’re trying to follow where the bytes go, but not how they know where to go.
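To put numbers on it, here’s a userland rehearsal of the first trip through the loop, with made-up values: a filesystem with 16k blocks, and a 50000 byte write starting at offset 20000.

    #include <stdio.h>

    int
    main(void)
    {
        long bsize = 16384;     /* hypothetical fs_bsize */
        long offset = 20000;    /* uio_offset */
        long resid = 50000;     /* uio_resid */

        long lbn = offset / bsize;          /* lblkno: logical block 1 */
        long blkoffset = offset % bsize;    /* blkoff: 3616 bytes in */
        long xfersize = bsize - blkoffset;  /* 12768 bytes this pass */
        if (resid < xfersize)
            xfersize = resid;

        /* a partial block, so B_CLRBUF would be set this time */
        printf("lbn=%ld blkoffset=%ld xfersize=%ld\n",
            lbn, blkoffset, xfersize);
        return 0;
    }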

Once the setup is done, it’s time for the long promised move. uiomove will copy the bytes from userland to the buffer.

uiomove

Back in kern/kern_subr.c we’re going to work through uiomove. This is a fancy enterprise grade memcpy function that’s used in many places. We started in write with only a single data pointer, but many other uses will include a more fully populated array of iovecs.
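To see it from a consumer’s perspective, a toy character device read routine might use it like so. This is a hypothetical driver, not anything in the tree:

    int
    toyread(dev_t dev, struct uio *uio, int ioflag)
    {
        static char msg[] = "bytes, going places\n";

        /* everything past the end has been read already */
        if (uio->uio_offset >= (off_t)sizeof(msg))
            return (0);

        /* uiomove advances uio_offset and uio_resid as it copies */
        return (uiomove(msg + uio->uio_offset,
            sizeof(msg) - uio->uio_offset, uio));
    }

Here’s the heart of uiomove itself: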

    while (n > 0) {
        iov = uio->uio_iov;
        cnt = iov->iov_len;
        if (cnt == 0) {
            KASSERT(uio->uio_iovcnt > 0);
            uio->uio_iov++;
            uio->uio_iovcnt--;
            continue;
        }
        if (cnt > n)
            cnt = n;
        switch (uio->uio_segflg) {

        case UIO_USERSPACE:
            sched_pause(preempt);
            if (uio->uio_rw == UIO_READ)
                error = copyout(cp, iov->iov_base, cnt);
            else
                error = copyin(iov->iov_base, cp, cnt);

One of the tricks supported by uiomove is copying from either userspace or kernel space. We’re copying from userspace this time, so we also do a little preemption check, so that enormous copies don’t jam up the system for too long. We are performing a write operation, so we’re going to call copyin to read the data. Read to write, write to read.

copyin lives down in the arch directory somewhere and does magic things with fault handlers. Maybe the page we’re trying to copy in has been swapped out, or maybe some joker gave us an invalid pointer. We may also need to make sure the userland address space is mapped, or SMAP is disabled, etc.

But today, all has gone well, and we’ve moved all the bytes. Hurray. Back up to ffs_write.

bdwrite

Picking up where we left off, the bytes are in the buffer, and it’s time to start the next phase of the write operation, writing.

        if (ioflag & IO_SYNC)
            (void)bwrite(bp);
        else if (xfersize + blkoffset == fs->fs_bsize) {
            bawrite(bp);
        } else
            bdwrite(bp);

We’ll assume it’s going to be bdwrite, a delayed write. Queue it up, but don’t wait for completion. The meat of bdwrite is this first block, from kern/vfs_bio.c. The comments clearly don’t match the actual order of operations here, but close enough.

    /*
     * If the block hasn't been seen before:
     *  (1) Mark it as having been seen,
     *  (2) Charge for the write.
     *  (3) Make sure it's on its vnode's correct block list,
     *  (4) If a buffer is rewritten, move it to end of dirty list
     */
    if (!ISSET(bp->b_flags, B_DELWRI)) {
        SET(bp->b_flags, B_DELWRI);
        s = splbio();
        buf_flip_dma(bp);
        reassignbuf(bp);
        splx(s);
        curproc->p_ru.ru_oublock++;     /* XXX */
    }

reassignbuf will call vn_syncer_add_to_worklist to get things lined up with a little pointer swizzling and swashbuckling. Back in bdwrite:

    /* The "write" is done, so mark and release the buffer. */
    CLR(bp->b_flags, B_NEEDCOMMIT);
    CLR(bp->b_flags, B_NOCACHE); /* Must cache delayed writes */
    SET(bp->b_flags, B_DONE);
    brelse(bp);

And that’s it. We’re done, per the comment. Well, not quite. Releasing the buffer in this context means giving up exclusive access, making it available for other processes, etc. The write has not yet taken place in the way that anyone expecting data permanence would be happy with. But after this, it’s a quick trip back up the call stack. We’ll be back in userland before you know it.

The bytes are now in the kernel.

syncer

Somewhere else, somewhen else, the syncer will run. We may also arrive at the same destination by calling fsync, but we’ll take the lazy road. The syncer_thread in kern/vfs_sync.c is a neverending loop, working its way through lists of dirty vnodes. The vnode for our file, and its associated buf containing our bytes, was queued up above.
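Heavily condensed, and with all the locking and error handling elided, the loop has this shape:

    for (;;) {
        /* this second's slot on the wheel of dirty vnodes */
        slp = &syncer_workitem_pending[syncer_delayno];
        syncer_delayno += 1;
        if (syncer_delayno == syncer_maxdelay)
            syncer_delayno = 0;

        while ((vp = LIST_FIRST(slp)) != NULL) {
            /* lock the vnode, flush its dirty bufs, requeue if needed */
        }

        /* sleep until the next second's work comes due */
    }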

The operative line here:

    (void) VOP_FSYNC(vp, p->p_ucred, MNT_LAZY, p);

So it really is fsync, after all. Back in kern/vfs_vops.c we see this is just another wrapper, and it’s back into ufs/ffs/ffs_vnops.c to look at ffs_fsync. There’s a loop here, and eventually we’ll come across the buf with the bytes, and then the operative lines become:

        if (passes > 0 || ap->a_waitfor != MNT_WAIT)
            (void) bawrite(bp);
        else if ((error = bwrite(bp)) != 0)
            return (error);

Well, this looks familiar. Spoiler alert, bawrite and bwrite are cousins, one function pointer removed, so we’ll just go down the bwrite rabbit hole.
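For reference, here’s bawrite, more or less in its entirety:

    void
    bawrite(struct buf *bp)
    {
        SET(bp->b_flags, B_ASYNC);
        VOP_BWRITE(bp);
    }

VOP_BWRITE goes through vop_bwrite, which for ffs is vop_generic_bwrite (see the table above), which calls bwrite. One function pointer removed, as promised.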

bwrite

After all of the above, we are now in kern/vfs_bio.c in the bwrite function. Surely we are getting very close to seeing the bytes get written to wherever they’re going.

Hey, look, some more fixup code. We started off with a delayed write that became a real write, but maybe somebody else wants their real write to become a delayed write. Choices are endless in Kafka’s bufcache.

    async = ISSET(bp->b_flags, B_ASYNC);
    if (!async && mp && ISSET(mp->mnt_flag, MNT_ASYNC)) {
        /*
         * Don't convert writes from VND on async filesystems
         * that already have delayed writes in the upper layer.
         */
        if (!ISSET(bp->b_flags, B_NOCACHE)) {
            bdwrite(bp);
            return (0);
        }
    }

We skip over a fair bit more accounting++ code, and now for the serious business.

    VOP_STRATEGY(bp->b_vp, bp);

You can guess what this does.

    return ((vp->v_op->vop_strategy)(&a));

Time to visit a new file, ufs/ufs/ufs_vnops.c to have a look at ufs_strategy.

    vp = ip->i_devvp;
    bp->b_dev = vp->v_rdev;
    VOP_STRATEGY(vp, bp);

That’s right, it’s strategy all the way down. The top level strategy sends the buf to the filesystem, and the bottom level strategy is going to send the buf to the disk. The important thing to note here is that we have switched from the file vnode to the device vnode. And so now, on our next trip through VOP_STRATEGY we will go to spec_strategy.

spec_strategy

In kern/spec_vnops.c we see something we haven’t seen before.

int
spec_strategy(void *v)
{
    struct vop_strategy_args *ap = v;
    struct buf *bp = ap->a_bp;
    int maj = major(bp->b_dev);

    (*bdevsw[maj].d_strategy)(bp);
    return (0);
}

Instead of having the vtable pointer live in the object, it’s a global vtable accessed by index. Not really that different, just a little variety.
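Each entry is a bundle of function pointers describing a block device. Abridged from sys/conf.h:

    struct bdevsw {
        int      (*d_open)(dev_t, int, int, struct proc *);
        int      (*d_close)(dev_t, int, int, struct proc *);
        void     (*d_strategy)(struct buf *);
        int      (*d_ioctl)(dev_t, u_long, caddr_t, int, struct proc *);
        int      (*d_dump)(dev_t, daddr_t, caddr_t, size_t);
        daddr_t  (*d_psize)(dev_t);
        u_int    d_type;
    };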

There are many devices, but this is OpenBSD, so our next stop can only be one place: SCSI.

sdstrategy

Prepare yourself for a new level of abstraction.

Over in scsi/sd.c we enter sdstrategy. After the usual argument inspection, we perform two important operations.

    /* Place it in the queue of disk activities for this disk. */
    bufq_queue(&sc->sc_bufq, bp);

    /*
     * Tell the device to get going on the transfer if it's
     * not doing anything, otherwise just wait for completion
     */
    scsi_xsh_add(&sc->sc_xsh);

We’re going to put this buf on a queue for the disk, but not much will happen until we tell the device to get going.

Let’s take a look at some functions in scsi/scsi_base.c. scsi_xsh_add is going to put the disk on the link queue.

    mtx_enter(&link->pool->mtx);
    if (xsh->ioh.q_state == RUNQ_IDLE) {
        TAILQ_INSERT_TAIL(&link->queue, &xsh->ioh, q_entry);
        xsh->ioh.q_state = RUNQ_LINKQ;
        rv = 1;
    }
    mtx_leave(&link->pool->mtx);

    /* lets get some io up in the air */
    scsi_xsh_runqueue(link);

scsi_xsh_runqueue is a do while loop.

    do {
        runq = 0;

        mtx_enter(&link->pool->mtx);
        while (!ISSET(link->state, SDEV_S_DYING) &&
            link->pending < link->openings &&
            ((ioh = TAILQ_FIRST(&link->queue)) != NULL)) {
            link->pending++;

            TAILQ_REMOVE(&link->queue, ioh, q_entry);
            TAILQ_INSERT_TAIL(&link->pool->queue, ioh, q_entry);
            ioh->q_state = RUNQ_POOLQ;

            runq = 1;
        }
        mtx_leave(&link->pool->mtx);

        if (runq)
            scsi_iopool_run(link->pool);
    } while (!scsi_pending_finish(&link->pool->mtx, &link->running));

We’re just moving things from the link queue over to the pool queue, so we can call scsi_iopool_run and enter another do while loop.

    do {
        while (scsi_ioh_pending(iopl)) {
            io = scsi_iopool_get(iopl);
            if (io == NULL)
                break;
    
            ioh = scsi_ioh_deq(iopl);
            if (ioh == NULL) {
                scsi_iopool_put(iopl, io);
                break;
            }

            ioh->handler(ioh->cookie, io);
        }
    } while (!scsi_pending_finish(&iopl->mtx, &iopl->running));

It’s the ioh->handler call here that’s important. This is going to be scsi_xsh_ioh, which itself has one important line.

    xsh->handler(xs);

This finally resolves to sdstart. It’s somewhat more difficult to track down all these pointers. Unlike the vnops tables, you won’t find sdstart and friends all grouped together in one place.

It’s also important to mention that at this point, we may or may not be in the syncer thread. Some other thread may be processing the queue, and the syncer will simply drop off some work. It’s a team effort.

sdstart

Why the focus on SCSI? Because on OpenBSD, that’s all there is. We’re working our way down to the nvme driver, I promise, but have to go through SCSI to get there. If you encrypt your disk with softraid, that’s SCSI. If you’re using a USB drive, that’s SCSI. The SATA drive in your somewhat older laptop? Unless it’s positively ancient, that’s ahci, and yup, it’s going to show up as SCSI. The MMC on that weird octeon gizmo in the closet? SCSI. We might be able to bypass this layer if we assume an ISA floppy, but I’d rather not.

Now that we’re back in scsi/sd.c things should be a little easier to follow. We’re going to give some consideration to the bytes again, instead of endlessly passing them around. sdstart is going to grab the buf off the bufq where we stashed it so long ago.

    bp = bufq_dequeue(&sc->sc_bufq);
    if (bp == NULL) {
        scsi_xs_put(xs);
        return;
    }
    read = ISSET(bp->b_flags, B_READ);

    SET(xs->flags, (read ? SCSI_DATA_IN : SCSI_DATA_OUT));
    xs->timeout = 60000;
    xs->data = bp->b_data;
    xs->datalen = bp->b_bcount;
    xs->done = sd_buf_done;
    xs->cookie = bp;
    xs->bp = bp;

    p = &sc->sc_dk.dk_label->d_partitions[DISKPART(bp->b_dev)];
    secno = DL_GETPOFFSET(p) + DL_BLKTOSEC(sc->sc_dk.dk_label, bp->b_blkno);
    nsecs = howmany(bp->b_bcount, sc->sc_dk.dk_label->d_secsize);

And now we’re going to set up a transfer by copying lots of values, but also perform some math. We’re getting closer to the hardware and need to know which sectors and how many. There’s some code to pick the correct command for the transfer (large sector numbers require bigger commands).
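The math deserves a quick illustration. Here’s a userland rehearsal with made-up numbers: a 4096 byte sector disk, and a 16k buf at disklabel block 1024 (the partition offset from DL_GETPOFFSET is elided):

    #include <stdio.h>

    #define DEV_BSIZE       512     /* disklabel blocks are 512 bytes */
    #define howmany(x, y)   (((x) + ((y) - 1)) / (y))

    int
    main(void)
    {
        long secsize = 4096;    /* hypothetical device sector size */
        long b_blkno = 1024;    /* buf position, in DEV_BSIZE blocks */
        long b_bcount = 16384;  /* buf size in bytes */

        /* DL_BLKTOSEC: disklabel blocks to device sectors */
        long secno = b_blkno / (secsize / DEV_BSIZE);
        long nsecs = howmany(b_bcount, secsize);

        printf("secno=%ld nsecs=%ld\n", secno, nsecs);  /* 128 and 4 */
        return 0;
    }

And then we go.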

    scsi_xs_exec(xs);

Don’t worry, scsi_xs_exec is easy to follow. Just chase the pointers.

    xs->sc_link->bus->sb_adapter->scsi_cmd(xs);

We are now leaving the world of abstraction. Next stop: nvme_scsi_cmd.

nvme

Time for a new file in a new subdirectory: dev/ic/nvme.c. The PCI attachment code is separate in dev/pci/nvme_pci.c but we are interested in the bus independent device operation. There will be no discussion of Apple devices today. We enter in nvme_scsi_cmd.

    switch (xs->cmd.opcode) {
    case READ_COMMAND:
    case READ_10:
    case READ_12:
    case READ_16:
        nvme_scsi_io(xs, SCSI_DATA_IN);
        return;
    case WRITE_COMMAND:
    case WRITE_10:
    case WRITE_12:
    case WRITE_16:
        nvme_scsi_io(xs, SCSI_DATA_OUT);
        return;

One more step down into nvme_scsi_io.

    struct scsi_link *link = xs->sc_link;
    struct nvme_softc *sc = link->bus->sb_adapter_softc;
    struct nvme_ccb *ccb = xs->io;
    bus_dmamap_t dmap = ccb->ccb_dmamap;
    int i;

    if ((xs->flags & (SCSI_DATA_IN|SCSI_DATA_OUT)) != dir)
        goto stuffup;

    ccb->ccb_done = nvme_scsi_io_done;
    ccb->ccb_cookie = xs;
    
    if (bus_dmamap_load(sc->sc_dmat, dmap,
        xs->data, xs->datalen, NULL, ISSET(xs->flags, SCSI_NOSLEEP) ?
        BUS_DMA_NOWAIT : BUS_DMA_WAITOK) != 0)
        goto stuffup;
        
    bus_dmamap_sync(sc->sc_dmat, dmap, 0, dmap->dm_mapsize,
        ISSET(xs->flags, SCSI_DATA_IN) ?
        BUS_DMASYNC_PREREAD : BUS_DMASYNC_PREWRITE);

If we study the bus_dmamap_load arguments carefully, we’ll see a reference to xs->data. There’s the bytes again. We haven’t gotten totally lost. We don’t need to copy the bytes, but we need to make sure there’s an appropriate IOMMU mapping for DMA to succeed.

The bytes are ready for the device, but it still needs a little push. Continuing on we see it.

    nvme_q_submit(sc, sc->sc_q, ccb, nvme_scsi_io_fill);

Here’s the body of nvme_q_submit.

    tail = sc->sc_ops->op_sq_enter(sc, q, ccb);

    sqe += tail;

    bus_dmamap_sync(sc->sc_dmat, NVME_DMA_MAP(q->q_sq_dmamem),
        sizeof(*sqe) * tail, sizeof(*sqe), BUS_DMASYNC_POSTWRITE);
    memset(sqe, 0, sizeof(*sqe));
    (*fill)(sc, ccb, sqe);
    sqe->cid = ccb->ccb_id;
    bus_dmamap_sync(sc->sc_dmat, NVME_DMA_MAP(q->q_sq_dmamem),
        sizeof(*sqe) * tail, sizeof(*sqe), BUS_DMASYNC_PREWRITE);

    sc->sc_ops->op_sq_leave(sc, q, ccb);

More DMA mapping, this time of the NVME command ring. I’m skipping over the implementation of the bus_dmamap functions because we really might get lost. We’re very nearly done, but we’ve come this far, so let’s see what that fill function is.

nvme_scsi_io_fill is going to convert the block addresses for the device.

    scsi_cmd_rw_decode(&xs->cmd, &lba, &blocks);

    sqe->opcode = ISSET(xs->flags, SCSI_DATA_IN) ?
        NVM_CMD_READ : NVM_CMD_WRITE;
    htolem32(&sqe->nsid, link->target);

    htolem64(&sqe->entry.prp[0], dmap->dm_segs[0].ds_addr);

    htolem64(&sqe->slba, lba);
    htolem16(&sqe->nlb, blocks - 1);

With that, everything is in place. op_sq_leave above will eventually call nvme_write4 which will update the on device register to include the new queue entry.
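The gist of that, sketched rather than quoted (the exact split between sq_enter and sq_leave is in dev/ic/nvme.c):

    /* advance the software tail index, wrapping at the end of the ring */
    tail = ++q->q_sq_tail;
    if (tail >= q->q_entries)
        tail = 0;
    q->q_sq_tail = tail;

    /* tell the device: new entries are available up to here */
    nvme_write4(sc, q->q_sqtdbl, tail);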

And now we wait. The drive will write the bytes when it’s ready and in accordance with the secret desires of its firmware. We have once again reached the bottom of the call stack and it’s nothing but returns back up.

aftermath

The bytes have been written, but how do we know this? I’ll just sketch out the callbacks that indicate completion.

nvme_intr is called when the device signals an interrupt; nvme_q_complete calls nvme_scsi_io_done, which calls scsi_done, until we end up in sd_buf_done. That calls biodone, which calls wakeup. If somebody was waiting for this write to finish, such as in the case of calling fsync, they will be over at the tsleep_nsec in biowait, which returns into bwrite and back up from there.

recap

We started in the write system call. After passing through some function pointers specific to the type of file and file system, we copied the bytes into the buffer cache. Later, the syncer will push the buf down into the SCSI layer, which will translate the buf into a SCSI cmd before it reaches the NVME driver, setting up the actual DMA transfer.
