As mentioned before, I use sbuild and gbp to build packages for Debian. Since I first started out, this setup has evolved somewhat - it now runs on a different machine and uses ZFS!

I’ve been building a private repo for our packet radio project to ensure that users on Raspberry Pi and other platforms have the most recent code backported. It’s very frustrating as a user (and as one of the more experienced heads in the community) to see people approaching a task with software that’s 3+ years out of date. I believe the Raspi Foundation are, at the time of writing, about to release a new revision of their OS. This’ll be great for users, but it makes things a moving target for me!

The repo supports Raspi OS Stable (11/bullseye, armhf, arm64) (note: I shall be referring to both raspbian armhf and Raspi OS arm64 as RaspiOS going forward), Debian Stable (12/bookworm) & Unstable (amd64) and Ubuntu 22.04 LTS (amd64). Luckily, the differences between releases aren’t too great - Raspi OS is the outlier, as it uses debhelper 12 where the other systems use 13 across the board.

I’ve not had to consider ‘build systems’ much before, as all my Debian work has been for a common architecture. I began to poke into this in the sbuild post, but since then everything has been thrown out and redesigned.

Build Machine


The hardware is:

hibby@fennec ~> neofetch
       _,met$$$$$gg.          hibby@fennec
    ,g$$$$$$$$$$$$$$$P.       ------------
  ,g$$P"     """Y$$.".        OS: Debian GNU/Linux trixie/sid x86_64
 ,$$P'              `$$$.     Host: MS-7C37 1.0
',$$P       ,ggs.     `$$b:   Kernel: 6.4.0-3-amd64
`d$$'     ,$P"'   .    $$$    Uptime: 4 mins
 $$P      d$'     ,    $$P    Packages: 4628 (dpkg)
 $$:      $$.   -    ,d$$'    Shell: fish 3.6.1
 $$;      Y$b._   _,d$P'      Resolution: 1920x1080
 Y$$.    `.`"Y$$$$P"'         Terminal: /dev/pts/0
 `$$b      "-.__              CPU: AMD Ryzen 7 3700X (16) @ 3.600GHz
  `Y$$                        GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/
   `Y$$.                      Memory: 1145MiB / 32039MiB

I run 2 NVMe drives - one is ext4, where / lives, and my second storage device is zfs, where all the magic happens!

I know the Ryzen 7 3700X is a little old hat now, but it still feels fast. A future upgrade will be to the last Socket AM4 CPU when the second-hand market is a little more favourable - a 5950X or a 5800X3D. I’ll improve cooling too; I fancy one of those AIO liquid coolers for no reason other than it looks kinda cool.

Remote Access


I don’t actually sit at my desk much, so everything happens over SSH. This is code for “I like to do all my work on my MacBook in the hackerspace or at the dining room table”.

I remotely power the desktop on/off using a Tapo smart plug - the BIOS in my desktop is set to boot when power is applied.

I had to stop gdm3 suspending the machine on inactivity, as this makes the desktop frustratingly unavailable:

hibby@fennec ~> cat /etc/gdm3/greeter.dconf-defaults
[snipped out other stuff]
# Automatic suspend
# =================
# - Time inactive in seconds before suspending with AC power
#   1200=20 minutes, 0=never
# sleep-inactive-ac-timeout=1200
# - What to do after sleep-inactive-ac-timeout
#   'blank', 'suspend', 'shutdown', 'hibernate', 'interactive' or 'nothing'
# sleep-inactive-ac-type='suspend'
# - As above but when on battery
# sleep-inactive-battery-timeout=1200
# sleep-inactive-battery-type='suspend'


I use Tailscale for remote access to all my machines and I really can’t say enough good things about this software.

It has simplified my remote access life significantly and I recommend everyone tries it.

On my Mac I use iTerm2, and its tmux integration is incredible. Again, it’s made my life better and I recommend everyone who can tries it.


I forward GPG from my YubiKey per the drduh guide so I can sign code and commits.

This is finicky and fiddly to set up - at the moment, for reasons unknown to me, every time I reboot and SSH in I need to rm /run/user/1000/gnupg/S.gpg-agent, then kill tmux and log in again. Then GPG forwarding works fine.
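For reference, the workaround boils down to something like this - a sketch, where clean_gpg_socket is a name I’ve invented here, and the default path matches my single-user uid-1000 setup:

```shell
# Hypothetical helper (not a real tool): remove the stale local
# gpg-agent socket so the forwarded one can bind in its place.
clean_gpg_socket() {
    sock="${1:-/run/user/$(id -u)/gnupg/S.gpg-agent}"
    if [ -e "$sock" ]; then
        rm -f "$sock" && echo "removed $sock"
    else
        echo "nothing to do at $sock"
    fi
}
```

After that it’s `tmux kill-server`, log out, log back in.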

Similarly, for forwarding the SSH socket (so I can git push/pull) I need to

hibby@fennec ~> set -x SSH_AUTH_SOCK /tmp/ssh-4Mh8aMtE1k/agent.3859

or similar. I use set -x as I run Fish as my primary interactive shell. It’s very usable and has the best website.

This part works for the moment, but I think I need to reassess it. I probably need to replace my gpg key anyway, it’s quite old now and I could do with more bits. Answers on a postcard or via Mastodon please.

Breaking Debian

Some of the software, as it’s packaged, breaks Debian standards - the things that aren’t in Debian proper go to /opt/oarc/*: specifically linbpq, QtSoundModem and QtTCPTerm. The latter two are going to move soon as I’ll package them properly for Debian, but watch out! I made this decision a while ago and I’m not sure it was the right one. For linbpq it makes sense, as the binary expects its config file adjacent to it and stores all its data beside it too.

Build System


My git repos for the project follow the DEP-14 standard, more or less - see github for an example of how I lay this out.

I have deviated slightly for my own sanity, and branches are named after codenames: debian/bookworm, ubuntu/jammy etc. As these branches won’t make it back to Debian proper, I’m okay with this.

My ~/.gitconfig is:

hibby@fennec ~> cat .gitconfig
[user]
	email =
	name = Dave Hibberd
	signingkey = [GPG KEY]
[core]
	editor = vim
[commit]
	gpgsign = true
[pull]
	rebase = false

I think I’m beginning to regret that rebase = false.


sbuild supports ZFS Snapshots now, which suits me perfectly. I know zfs better than the btrfs setup I was using previously!

I have made only one very slight change to the default ~/.sbuildrc, and that is to disable lintian by default - it doubles build time every run, and when doing a build for everything I don’t feel I need it.

I can still run it during development.

hibby@fennec ~> cat .sbuildrc | grep lintian
$run_lintian = 0;

I could probably be doing a lot more clever stuff around sbuild, but I’m not, and everything seems stable.


I run all my work in a zfs filesystem, tank/chroot mounted at /chroot. I’m probably breaking Linux Filesystem Law but I’m ok with that, this is quick to access and works nicely with my brain.

I named my zpool tank as that’s probably what the first-time documentation said to do and I can be an idiot sometimes!

Each OS/distro is held in a distinct ZFS filesystem under tank/chroot with a descriptive name - for example, tank/chroot/raspbian-bullseye-arm64, which is presented to the OS as /chroot/raspbian-bullseye-arm64/. These names mirror the expectation that sbuild has around chroots - the man page defines a naming scheme of $distribution-$arch-sbuild or $distribution-$arch. I deviate from that, but the config file is what’s important, as it defines the names; I just like the directories to be named neatly in line with it.
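The naming convention is just string concatenation; as a sketch (these helper names are mine, not anything sbuild provides):

```shell
# Hypothetical helpers illustrating my naming convention:
# distro + release + arch -> dataset name and mount point.
chroot_dataset() { echo "tank/chroot/${1}-${2}-${3}"; }
chroot_mount()   { echo "/chroot/${1}-${2}-${3}"; }

chroot_mount raspbian bullseye arm64   # prints /chroot/raspbian-bullseye-arm64
```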

Before I do any build work in a new OS, I create then mount the filesystem with

zfs create tank/chroot/raspbian-bullseye-arm64

zfs mount tank/chroot/raspbian-bullseye-arm64

What I find compelling about running this with zfs (and btrfs) is that sbuild/schroot will take a temporary ‘snapshot’ of the chroot, perform the build in the snapshot and then get rid of it straight away. It’s a little faster than doing things directly on the live chroot, and keeps the source chroot somewhat ‘immutable’.
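The lifecycle looks roughly like this - a sketch of the sequence schroot drives, printed rather than executed since the real thing needs root and an actual pool, and the session/clone names here are made up:

```shell
# Approximate lifecycle of one zfs-snapshot build session.
# run() echoes the commands instead of executing them.
DATASET="tank/chroot/debian-bookworm-amd64"
SESSION="example-session"                 # illustrative name
run() { echo "+ $*"; }

run zfs snapshot "${DATASET}@${SESSION}"                 # freeze the source chroot
run zfs clone "${DATASET}@${SESSION}" "tank/${SESSION}"  # writable working copy
# ... the build runs inside the clone ...
run zfs destroy "tank/${SESSION}"                        # bin the working copy
run zfs destroy "${DATASET}@${SESSION}"                  # and the snapshot
```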

Local apt cache and eatmydata

I use apt-cacher-ng to act as a local package cache for my builds. This is a proxy that holds a cache of the latest packages from repos and cuts down build time - while I’ve got a gigabit internet connection, nothing is faster than pulling from local files!

You’ll see in all my chroots I have localhost:3142/<distro> - this is the apt-cacher proxy.
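Concretely, a chroot’s apt source ends up looking something like this (illustrative - /debian here is the backend path apt-cacher-ng remaps to the Debian mirrors):

```
deb http://localhost:3142/debian bookworm main
```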

I also use eatmydata, which transparently disables fsync() and other data-to-disk synchronisation calls - again, this makes for faster builds. Nobody’s got time to wait or waste in a build that takes 15 seconds!


I create a new sbuild chroot using

sbuild-createchroot --arch=amd64 --include=debhelper,eatmydata --components=main bookworm /chroot/debian-bookworm-amd64/ http://localhost:3142/debian

This runs debootstrap, fills my folder with elements of the distro and architecture, and installs both debhelper (so it’s not installed every time) and eatmydata. It also defines my repo proxy, apt-cacher-ng. It treats the zfs filesystem as just a normal folder, so I need to modify the config file created by sbuild-createchroot to make it work with zfs snapshots as follows:

hibby@fennec ~> cat /etc/schroot/chroot.d/bookworm-amd64-sbuild-V0C4cQ
description=Debian bookworm/amd64 autobuilder

Key entries to insert are zfs-snapshot and zfs-dataset. Subtly important too are command-prefix=eatmydata, ensuring that everything is prefixed with that, and the aliases, which - when either sbuild or gbp buildpackage is run - match the distro declared in debian/changelog and automatically select the correct chroot.
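For reference, a full zfs-flavoured entry ends up looking something like this (values illustrative of my setup - the keys are the important bit):

```ini
[bookworm-amd64-sbuild]
description=Debian bookworm/amd64 autobuilder
type=zfs-snapshot
zfs-dataset=tank/chroot/debian-bookworm-amd64
command-prefix=eatmydata
groups=root,sbuild
root-groups=root,sbuild
profile=sbuild
aliases=stable-amd64-sbuild
```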

Raspberry Pi OS 64bit

Unfortunately, I hit a curious snag in Raspberry Pi OS - it uses two repositories instead of one. It appears that the Raspi Foundation are shipping Debian arm64 as their primary source of packages, with only a few customised ones from their own repository (notably, not raspbian).

cat /etc/apt/sources.list
deb bullseye main contrib non-free
deb bullseye-security main contrib non-free
deb bullseye-updates main contrib non-free
# Uncomment deb-src lines below then 'apt-get update' to enable 'apt-get source'
#deb-src bullseye main contrib non-free
#deb-src bullseye-security main contrib non-free
#deb-src bullseye-updates main contrib non-free

cat /etc/apt/sources.list.d/raspi.list 
deb bullseye main
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
#deb-src bullseye main

To the best of my knowledge, debootstrap doesn’t like multiple repositories, so I had to get creative building this chroot. I found mmdebstrap, which is similar to debootstrap in that it builds a Debian chroot, but it uses apt to resolve dependencies and supports multiple mirrors/repos. It was a delight to discover and I really like using it.

I modified my script to populate the chroot directory this way -

hibby@fennec ~> cat


sudo mmdebstrap --include=fakeroot,build-essential,debhelper,eatmydata \
    --architectures=${CHROOT_ARCH} ${VERSION} ${CHROOT_DIR} ${MIRROR} ${MIRROR2}
sudo cp /usr/bin/qemu-aarch64-static ${CHROOT_DIR}/usr/bin/
sudo sbuild-createchroot --arch=${CHROOT_ARCH} --foreign --setup-only \

It negates the need for debootstrap’s second stage. You can see at the end I run sbuild-createchroot to set the chroot up for sbuild, but critically it runs with --setup-only, which means debootstrap isn’t run - it just imports the directory into the schroot/sbuild world.

I still need to update the config as in the previous section, but mmdebstrap solved a lot of problems very quickly for me and I’ll probably use it in place of debootstrap going forward.

Package Preparation

To indicate that I want to target Raspbian, Ubuntu, etc., at the moment I am releasing the package for the target distro name, which involves setting the distro in debian/changelog to bullseye or jammy where it would normally read unstable or UNRELEASED.

To start the process, I run git merge debian/latest in my distro-specific branch. For the most part it merges cleanly, but the changelog file is always contentious and I need to manually repair it before committing the merge. I tried rebase for this, but there was a lot of shouting and things were unhappy, so I think going forward I’ll preserve history in each branch.

Next up I run dch --bpo and modify the distro name (I change bookworm-backports to bookworm, for example, or change it to bullseye and modify the revision string to read bpo11). If it’s for RasPi OS bullseye I make sure that debhelper in debian/control is at 12, not 13.

There is probably a way I can automate this using sbuild, but for the most part I am releasing these packages as backports (debian revisions such as: qtsoundmodem (, statements made by the utterly deranged), so I generally have to modify the changelog file, and dch --bpo isn’t hard to run.
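The version mangle itself is mechanical. A sketch - bpo_version is a helper name I’ve made up, but the ~bpoNN+1 suffix is the standard Debian backports convention:

```shell
# Hypothetical helper: apply the Debian backports version mangle,
# e.g. revision 1.2-1 targeting Debian 11 becomes 1.2-1~bpo11+1.
# The ~ makes the backport sort below the eventual real upload.
bpo_version() {
    echo "${1}~bpo${2}+1"
}

bpo_version 1.2-1 11   # prints 1.2-1~bpo11+1
```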


For the amd64 builds, I can simply run gbp buildpackage to spin up sbuild and perform the build.

For the crossbuilds this doesn’t work, so I run sbuild --host=arm64 --build=arm64, varying the architecture as required.

The difference provides a little frustration - gbp outputs my built .debs etc. to ../build-area/, but sbuild just dumps them in ../. Thinking of it, there’s probably a config option for this, but I’ve been too busy ‘doing’ to stop and think.

I have pulled this together in a little bash script that just steps through my branches and builds in each one:

hibby@fennec ~> cat
distro=("debian/latest" "debian/bookworm" "ubuntu/jammy" "raspbian/bullseye")

for system in "${distro[@]}"; do
	git checkout "$system"
	if [[ $system != "raspbian/bullseye" ]]; then
		git status
		echo "build $system"
		gbp buildpackage --git-ignore-branch
	else
		git status
		echo "build arm64"
		sbuild --host=arm64 --build=arm64
		git status
		echo "build armhf"
		sbuild --host=armhf --build=armhf
	fi
done
It’s dirty but it builds everything in one step while I go and work on anything else.


Once I’ve built the software, I use reprepro to build a repo, which is held on GitHub and served by GitHub Pages - see repo and my info page for more!
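The heart of the reprepro setup is conf/distributions; mine looks something like this (illustrative values matching the targets above, with the signing key details elided):

```
Codename: bullseye
Architectures: armhf arm64
Components: main
SignWith: yes
Description: OARC packet radio backports for Raspberry Pi OS bullseye

Codename: bookworm
Architectures: amd64
Components: main
SignWith: yes
Description: OARC packet radio backports for Debian bookworm
```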

I have made two big assumptions that allow me to get down to a single repo address:

  • armhf users are only running Raspbian
  • arm64 users are only running Raspberry Pi OS

This might break some things, but in honesty, if RaspiOS is just Debian arm64 with a sprinkling of alternative/additional packages, things should work fine. As for armhf, I think the number of packet radio users on an armhf platform that isn’t a Raspberry Pi running raspbian is such a vanishingly small niche that I’m not too stressed about excluding them.


So far this has been a rewarding project, and I now have users around the UK and around the world running software from this repo. I’ve learned a lot, it’s solidified a lot of Debian knowledge for me, and as it has progressed I’ve got smarter and faster at doing things.

Worth doing!