Creating Netboot images
It would be nice to have support for generating network-bootable images, to keep them consistent between devices.
I asked on the Raspberry Pi forum and was referred to this project.
I am interested in contributing to this effort if it is needed.
Thanks for following up. As I mentioned in my forum response, I've done PXE boot, but have not implemented Pi netbooting, although it doesn't look any more complex than what I've done in the past.
As I understand Pi netbooting, there are two aspects to consider:
- netboot IMG preparation
- infra config on a netboot server to provide the bits to a netbooting client system
Based on your pi forum note, it seemed that you were more concerned with the IMG prep part. It turns out that the IMG prep aspect is exactly what sdm is very good at.
So, from my perspective, I guess the starting point for this discussion should be around what's required to turn a fully-configured RasPiOS IMG into something that can be netbooted.
I can take a look into the netboot bits in more detail next week. If you have some docs you're working with on the netboot part, please share them with me so we can get on the same page.
If you want to familiarize yourself with sdm's capabilities, I'd encourage you to read through some of the docs. In particular, these might be helpful for you to review:
- Command details: https://github.com/gitbls/sdm/blob/master/Docs/Command-Details.md
- Plugin capabilities: https://github.com/gitbls/sdm/blob/master/Docs/Plugins.md
- Fully usable sdm script: https://github.com/gitbls/sdm/blob/master/ezsdm
I'm happy to help jump-start you in moving your project to sdm if you provide details of what you're doing now. Without knowing your project or what you maintain in git, I'd venture that once you've built an sdm script for your system, your git repo should be a lot less complex, since you shouldn't need to maintain nearly as much in it.
My guess is that you'd only need to maintain:
- The downloaded IMG from downloads.raspberrypi.com (but you can always re-download it, so maybe not?)
- The assets you provide to sdm
- your sdm script (e.g., a modified ezsdm, or whatever you build to run sdm)
- Any data files required for use with sdm plugins, etc.
To give you an idea of what I mean by this, have a look at https://github.com/gitbls/sdm/blob/master/Docs/Cool-Things-You-Can-Do-VPN.md. This page describes how to build both ends of a site-to-site VPN using sdm, including all the assets and scripts.
I would expect your needs to be similar in scope, although obviously for a completely different use case.
Hope this helps. Looking forward to working with you on this.
As far as it looks to me, your tool is amazing and does almost everything we are currently doing 😄. Great work!
Our Current Workflow
1. Download & extract the base image
2. Configure for network boot
   - boot/firmware/cmdline.txt:
     selinux=0 dwc_otg.lpm_enable=0 console=tty1 rootwait rw nfsroot=192.168.0.12:/rpi ip=dhcp root=/dev/nfs systemd.log_level=info systemd.log_target=console systemd.debug-shell=1
   - /etc/fstab:
     proc /proc proc defaults 0 0
     192.168.0.12:/rpi / nfs defaults,noatime 0 1
3. Modify the system
   - Chroot via QEMU to do basic setup (same as you do with your tool), OR deploy on an actual Pi and commit changes from the real system.
What Could Be Improved for Netboot
The work that could be done to make it much easier to use for netboot comes down to two things:
1. Automate the fstab and cmdline configuration
   - Ideally, you’d only need to set the IP of the host machine (e.g., NETBOOT_SERVER=192.168.0.12), and the tool would generate both fstab and cmdline.txt accordingly.
   - It would also be great if you could specify custom mount points for additional folders like /home or /var/logs, which could be mounted from separate NFS paths or tmpfs depending on the use case.
   - Optionally, include templates for host-side configuration (e.g., NFS exports), though this is more of a nice-to-have.
2. Figure out the best practice for using the resulting image
There are a few possible strategies:
(1) Mount the image as read-only on the server and export it via network
- Mount the ISO or IMG file read-only on the netboot host, then export it over NFS.
- The Raspberry Pi boots directly from this exported filesystem.
- This ensures full consistency, since the image remains immutable and hashable.
- For writable parts like /home or /var/logs, you can define separate read-write mounts.
- This is great since you can also perform integrity checks by simply computing a SHA sum on the ISO.
(2) Extract and run as read-write
- Extract the image contents and serve them as a read-write NFS root.
- This is convenient for development and testing, but may lead to inconsistencies over time.
(3) Mount the ISO with read-write overlay
- Use overlayfs to combine a read-only ISO (lower layer) with a writable tmpfs or directory (upper layer).
- This gives you the stability of an immutable base and flexibility for runtime modifications.
For example:
- If a file exists in the ISO (like /etc/hostname) and the system writes to it, the change goes into the upper layer and masks the original; the ISO itself stays untouched.
- You can track all changes in the upper layer, which is great for debugging, diffing, or even extracting modifications as a patch.
- Deleting a file just creates a whiteout marker, leaving the original in the ISO intact.
- You can reboot and return to a clean state automatically — ideal for stateless systems or reproducible dev setups.
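To make option 3 concrete, the overlay setup could look roughly like this on the netboot host. This is a sketch only: it must run as root, and the paths, loop device, and partition layout (`p2` as the root partition) are illustrative assumptions, not anything sdm produces today.

```
# Map the IMG's partitions and mount the root partition read-only (lower layer)
losetup -P /dev/loop0 /srv/netboot/rpi.img
mkdir -p /srv/netboot/lower /srv/netboot/upper /srv/netboot/work /srv/netboot/merged
mount -o ro /dev/loop0p2 /srv/netboot/lower

# Combine the immutable lower layer with a writable upper layer
mount -t overlay overlay \
    -o lowerdir=/srv/netboot/lower,upperdir=/srv/netboot/upper,workdir=/srv/netboot/work \
    /srv/netboot/merged

# Export /srv/netboot/merged via NFS. Deletions become whiteouts in upper/;
# wiping upper/ and work/ returns clients to a pristine state.
```

Emptying the upper directory on the server is what gives the "reboot to a clean state" behavior described above.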
I'm still evaluating the implications of each approach.
For deployment, a read-only setup that’s reliable and consistent would be ideal.
For development, a writable setup is helpful to quickly test changes and roll back easily.
It would also be great if sdm could be integrated into a CI pipeline to enable fully automated, versioned image generation.
Final Thoughts
The first solution would probably be the easiest to implement, and the second one is honestly my least favorite — but realistically, most setups will probably end up using option 2.
That said, offering solid support for option 3 (read-only image + overlayfs) and maybe some CI integration would be a brilliant addition to sdm, in my opinion.
As far as I’ve seen, you can run sdm using multiple individual commands via CLI — but is there also a way to define everything in one large config file?
It would be great if such a config file could also serve as the source of truth to generate the corresponding host-side configs (like NFS exports, mount points, etc.).
That way, the image and host could stay perfectly in sync with minimal duplication.
Here are our internal docs to set up the RPi. But the config files for the host are probably more interesting to you, and they are attached:
tftp.service.j2.txt exports.j2.txt dhcpd.conf.j2.txt
exports is the nfs config.
Thanks for the details...They really help a lot! Here are some thoughts to generate feedback/discussion.
As you've surmised, img preparation is sdm's core function: Customize an IMG as little or as much as you want. Then, when you burn the IMG to a disk (or another disk IMG...more on this later) the host's identity and any host-specific customization is applied. When that disk is booted, it goes through the normal boot process and then the final bit of sdm's firstboot runs. This completes some very specific customizations, based on what customizations you've applied to the system.
These final bits are quite simple but must be done in the context of the booted system.
WRT your cited examples:
- configure cmdline.txt: sdm just got a cmdline plugin that lets you:
  - add elements to cmdline.txt
  - delete elements from cmdline.txt
  - completely replace cmdline.txt << Presumably this is what you'd use
- modify fstab: sdm's system plugin has an fstab argument that is a file of fstab lines to append to /etc/fstab. That takes care of most of your needs, although you would need to do a manual edit (automated using sed) to remove the PARTUUID lines and add the new line for the NFS-mounted root partition. This is all quite simple to do with a custom plugin (see https://github.com/gitbls/sdm/blob/master/Docs/Example-Plugin.md for an example).
- modify other aspects of the IMG: As I mentioned, this is sdm's core competency. Whatever you want to do can be done. Some highly esoteric things might take a bit more work, but I haven't found anything yet that simply can't be done.
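A minimal sketch of that sed-based fstab edit: strip the PARTUUID mounts and append the NFS root line. The sample input and the server address are assumptions for illustration; in practice the fstab would come out of the IMG being customized.

```shell
#!/bin/sh
# Sketch: remove PARTUUID mounts from an fstab and add the NFS root line.
set -e
FSTAB="${FSTAB:-./fstab}"

# For the sketch, start from a typical RasPiOS fstab if none is provided.
[ -f "$FSTAB" ] || cat > "$FSTAB" <<'EOF'
proc                  /proc           proc  defaults          0  0
PARTUUID=abcd1234-01  /boot/firmware  vfat  defaults          0  2
PARTUUID=abcd1234-02  /               ext4  defaults,noatime  0  1
EOF

sed -i '/^PARTUUID=/d' "$FSTAB"                                       # drop local-disk mounts
echo '192.168.0.12:/rpi  /  nfs  defaults,noatime  0  1' >> "$FSTAB"  # NFS root
```

The same two-line edit could live in a small custom plugin, per the Example-Plugin doc linked above.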
On the netboot improvements:
- Automating fstab and cmdline: Simple! Run a script before doing the sdm customization that configures the fstab extension datafile and cmdline the way you want.
- Custom mount points: I assume this can be done but have never tried it. From the sdm perspective, this would simply be another line or three in your custom fstab.
- Host-side configuration for exports: The system plugin also has an exports argument that appends the given file contents to /etc/exports.
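The pre-customization script suggested above could be as small as this. It is a sketch: NETBOOT_SERVER, NFS_ROOT, and the output file names are my assumptions, and the generated files would then be handed to the cmdline and system plugins.

```shell
#!/bin/sh
# Generate cmdline.txt, an fstab extension file, and an exports fragment
# from a single server address, for use with sdm's cmdline/system plugins.
set -e
NETBOOT_SERVER="${NETBOOT_SERVER:-192.168.0.12}"
NFS_ROOT="${NFS_ROOT:-/rpi}"

# Full replacement for cmdline.txt (the kernel command line is one line)
printf 'console=tty1 rootwait rw root=/dev/nfs nfsroot=%s:%s ip=dhcp\n' \
    "$NETBOOT_SERVER" "$NFS_ROOT" > cmdline.txt

# Lines for the system plugin's fstab argument
cat > fstab.netboot <<EOF
proc                      /proc  proc  defaults          0  0
$NETBOOT_SERVER:$NFS_ROOT /      nfs   defaults,noatime  0  1
EOF

# Host-side fragment for the system plugin's exports argument
printf '%s  192.168.0.0/24(ro,no_root_squash,no_subtree_check)\n' \
    "$NFS_ROOT" > exports.netboot
```

With this, setting one variable keeps the image-side and host-side config in sync, which is what the config-as-source-of-truth idea earlier in the thread asks for.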
WRT best practices for using the resulting IMG: I just looked...it's been 14 years (!!!) since I last played with netbooting. I was booting X64 Linux and Windows back then, but the concepts are the same.
There have been tons written about how to do the various approaches for the PI, which you've undoubtedly found. Although this is beyond the scope of sdm, I'm happy to collaborate with you on this aspect as well. I've been thinking for quite a while that sdm needs a netboot server plugin so one could easily set up a Pi as one using sdm.
You asked "As far as I’ve seen, you can run sdm using multiple individual commands via CLI — but is there also a way to define everything in one large config file?" Yes, you can most definitely do that, although you'll still have a couple of separate files for, for instance, fstab, exports, etc. See https://github.com/gitbls/sdm/blob/master/ezsdm which runs sdm with a list of plugins. You'd obviously modify the plugins to be run, but tada! A single command. TBH I'm not sure what you mean by "using multiple individual CLI commands"; the ideal way to use sdm is encapsulated in the ezsdm script. Of course, you can use sdm differently than that script does; I built ezsdm as a way to provide one-stop, encapsulated usage.
One last thing: I see that you're using isc-dhcp-server for your DHCP. You might want to take a look at https://github.com/gitbls/ndm. It's a tool to configure named (bind) and isc-dhcp-server from a list of hosts that you configure via the command line. You can easily add PXE boot config to the hosts that you want to PXE boot. I still have a few hosts configured thusly but haven't PXE booted them for quite some time 😱.
Hope this helps! Based on what you've described, your use case is perfect for sdm, and I'll support you as much as possible.
Let's iterate on this until you either get comfortable with using it, or find the show-stopper that precludes you from using it (and is something that I can't fix for you).
Hello @gitbls,
That sounds like you are already feature complete on the IMG generation side.
Do I understand correctly that not all changes are made to the IMG, but some are only applied on first boot?
But other than this, I think I will try to set up solution 3 then. An overlayfs solution would definitely be the ideal thing for our use.
I will definitely look into your DNS server suggestion.
The only things that may still be open are some nice tooling to help set up a Pi for netbooting faster, and maybe some tooling to integrate it easily into CI pipelines...
Maybe Netbooting would be a nice thing to add to your example collection.
Your tool is amazing 😄
But I think you are missing a pip/venv plugin. It would be great to give it a requirements.txt and have it set up a venv with the correct Python version and those packages.
Otherwise I guess I can just do it with Phase scripts, but that does sound like something that would be really nice to have as a plugin.
Hello @gitbls,
That sounds like you are already feature complete on the IMG generation side.
It's been feature complete for a couple of years, except for all the features that I keep adding 🙄
Do I understand correctly that not all changes are made to the IMG, but some are only applied on first boot?
There are some, which are clearly identifiable. I can put that list together later...it's not very big.
But other than this, I think I will try to set up solution 3 then. An overlayfs solution would definitely be the ideal thing for our use.
That struck me as the best technical approach.
I will definitely look into your DNS server suggestion.
The only things that may still be open are some nice tooling to help set up a Pi for netbooting faster, and maybe some tooling to integrate it easily into CI pipelines...
I'm open to suggestions as you work through your implementation.
Maybe netbooting would be a nice thing to add to your example collection.
Indeed! That is why I was interested in working with you. 👍
Your tool is amazing 😄
Thanks! I've been polishing it for a few years. It started out MUCH smaller and a lot less functional, but it's pretty cool now, which is why I keep working on it. 😄
But I think you are missing a pip/venv plugin. It would be great to give it a requirements.txt and have it set up a venv with the correct Python version and those packages.
Amusingly I have a preliminary venv plugin that is basic but works. Doesn't have requirements.txt implemented yet, so a bit of polishing left to do.
Otherwise I guess I can just do it with Phase scripts, but that does sound like something that would be really nice to have as a plugin.
Amusingly I have a preliminary venv plugin that is basic but works. Doesn't have requirements.txt implemented yet, so a bit of polishing left to do.
Any chance I could test a preview version?
Amusingly I have a preliminary venv plugin that is basic but works. Doesn't have requirements.txt implemented yet, so a bit of polishing left to do.
Any chance I could test a preview version?
I'll take a look at its current state, see what is involved in adding requirements.txt, and then let you know. I can get you something to fiddle with later this week.
Hi @gitbls,
I’ve identified what appears to be the only severe issue:
Background:
Our goal is to automate the generation and distribution of our base system image via CI/CD. In practice, that means:
- On each push to our GitHub repository, a pipeline spins up a Docker container.
- Inside that container, we build our Raspberry Pi netboot image using sdm.
- Once the build finishes, we upload the image to a storage bucket.
- Finally, each robot downloads and deploys the new image.
The primary challenge we face:
CI/CD runners cannot use --privileged mode or attach host devices. Because of this, creating a raw .img file inside the build container is hard. However, we actually don't need the .img itself; we only need the filesystem contents.
Proposed Solution:
- My Docker build downloads and extracts the Raspberry Pi image
- sdm runs over the extracted directory tree
- My Docker build packages the modified files
Is it possible to do that with sdm?
Have you looked at the --directory switch?
I would be VERY interested in your Docker usage if this works for you and you're able to wrestle Docker and sdm into working together. As you can see here people (including me, a confirmed lifelong Docker neophyte) are looking for a solution using Docker and sdm together.
Please LMK how this works for you, and 🤞🤞🤞you can help me with a more robust solution for Docker and sdm.
@gitbls, Thanks for the hint.
I got this far:
FROM ubuntu:noble
RUN apt update && apt install -y coreutils original-awk libarchive-tools wget parted xz-utils curl fdisk git file binfmt-support systemd gdisk keyboard-configuration qemu-user-static rsync systemd-container uuid 7zip
RUN git clone https://github.com/gitbls/sdm
RUN wget https://downloads.raspberrypi.com/raspios_lite_arm64/images/raspios_lite_arm64-2024-11-19/2024-11-19-raspios-bookworm-arm64-lite.img.xz && unxz 2024-11-19-raspios-bookworm-arm64-lite.img.xz
RUN 7z x 2024-11-19-raspios-bookworm-arm64-lite.img
RUN 7z x /1.img -snld -o/out
RUN 7z x /0.fat -snld -o/out/boot/firmware
RUN sdm/sdm --customize --sdmdir /root/sdm \
--plugin sshd:"password-authentication=no" \
--plugin disables:piwiz \
--chroot \
--directory /out
I am a bit confused about the error I am getting:
> [8/8] RUN sdm/sdm --customize --sdmdir /root/sdm --plugin sshd:"password-authentication=no" --plugin disables:piwiz --chroot --directory out:
0.207 mount: /mnt/sdm: permission denied.
0.207 dmesg(1) may have more information after failed mount system call.
0.209 ? Error mounting --bind 'out'
Why does it want to mount the out folder instead of just using it?
sdm needs to mount the directory somehow to operate on it. At the moment it does
mount --bind $dmimg $mpoint
where $dmimg is (in this case) /out and $mpoint is (typically) /mnt/sdm.
Looks like the directory /mnt/sdm does not exist in your container, but immediately before attempting the mount sdm creates it if it doesn't exist.
I'm going to claim lack of docker knowledge here. Why can't sdm create /mnt/sdm in the docker container?
I should add...sdm's structure requires that the IMG/device/directory be mounted to operate on it.
How can it mount the directory in a docker container?
The directory is already there; it does not need to mount it anymore.
As you can see in my Docker container, it downloads the image and just extracts the files to the /out folder.
You can only do mounts within a container if you run with --privileged or CAP_SYS_ADMIN, which is not supported in CI Docker containers.
If you chroot into this folder, it is the same as chrooting into a mounted .img file. Instead of mounting it, I just extracted it, since mounting is not always possible in Docker.
For mounts, Docker invokes functions on the host system which the host must permit; hence the --device or --privileged flags, which I try to avoid.
The directory is already there; it does not need to mount it anymore.
As you can see in my Docker container, it downloads the image and just extracts the files to the /out folder.
You can only do mounts within a container if you run with --privileged or CAP_SYS_ADMIN, which is not supported in CI Docker containers.
I understand that the directory exists, but at the moment sdm is structured such that it requires the directory be mounted. Clearly it can be addressed (it's only code...) but I'll need to investigate to see how much pain it will be to address.
In the meantime, can you hack around it to continue your investigation?
@gitbls I patched around a bit and got a bit further:
--plugin disables:piwiz --chroot --directory /out
* Host Information
Hostname: 9f7dd98c4080
Memory: 28496052 kB
uname: Linux 9f7dd98c4080 6.14.5-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Fri May 2 14:16:46 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
os-release Name: Ubuntu 24.04.2 LTS
Version: 24.04.2 LTS (Noble Numbat) (noble)
Like OS: debian
sdm Version: V13.10
* IMG Information
Name: /out
Date: 2024-11-19
RasPiOS Version: 12
RasPiOS Architecture: 64-bit aarch64
os-release Version: 12 (bookworm)
% sdm will use qemu 'aarch64'
% Add and enable qemu binfmt for 'aarch64' processor architecture
% sdm will use chroot per --chroot on this 64-bit x86-64 host
* Plugins selected:
* sshd
Args: password-authentication=no
* disables
Args: piwiz
> Directory '/out' has 1513681276 1K-blocks (1550.0GB, 1443.6GiB) free
> Copy sdm to /root/sdm in the Directory
* Start Phase 0 image customization
> Run Plugins Phase '0'
> Run Plugin 'sshd' (/mnt/sdm/root/sdm/plugins/sshd) Phase 0 with arguments:
'password-authentication=no'
* Plugin sshd: Start Phase 0
> Plugin sshd: Keys/values found:
password-authentication: no
* Plugin sshd: Complete Phase 0
> Run Plugin 'disables' (/mnt/sdm/root/sdm/plugins/disables) Phase 0 with arguments: 'piwiz'
* Plugin disables: Start Phase 0
> Plugin disables: Keys/values found:
piwiz:
* Plugin disables: Complete Phase 0
* Phase 0 Completed
* Enter image '/out' for Phase 1
mount: /out/dev: permission denied.
dmesg(1) may have more information after failed mount system call.
mount: /out/dev/pts: permission denied.
dmesg(1) may have more information after failed mount system call.
mount: /out/proc: permission denied.
dmesg(1) may have more information after failed mount system call.
mount: /out/sys: permission denied.
dmesg(1) may have more information after failed mount system call.
chroot: failed to run command '/root/sdm/sdm-phase1': Exec format error
umount: /mnt/sdm/dev/pts: must be superuser to unmount.
umount: /mnt/sdm/dev: must be superuser to unmount.
umount: /mnt/sdm/proc: must be superuser to unmount.
umount: /mnt/sdm/sys: must be superuser to unmount.
Why do you mount dev, proc, and sys? Any specific use case, or just because arch-chroot does it?
Why do you mount dev, proc, and sys? Any specific use case, or just because arch-chroot does it?
Not sure what arch-chroot has to do with this?
When I added chroot support to sdm the examples I found did this, so I did as well
for fs in dev dev/pts proc sys ; do mount --bind /$fs $SDMPT/$fs ; done
If proc and sys aren't needed, I'm happy to remove them. My main concern, of course, is breaking something.
Looks like sdm doesn't need /dev, /dev/pts, /proc, and /sys. One small buglet found: df doesn't like it if /dev isn't there, but reports the correct results when requesting a single file system (e.g., df /); fixed by redirecting the error to /dev/null.
Now looking at --directory issue.
@gitbls, I think I’ve concluded my Docker trials and came to the following conclusions:
What is possible using an unprivileged container with no devices mounted:
- Fake mounting of .img files is hard, maybe possible (using tools like debugfs).
- Extracting the RPi OS .img file (using 7zip) to create the directory tree locally works fine.
- Chrooting into it works fine for all tools that don’t need /proc, /dev, or /sys.
- Packing the .img file is in theory possible, but probably a bit harder to do without mounting. However, there is nothing architecturally stopping it.
What is definitely not possible:
- Mounting /proc, /dev, and /sys into the chroot, since you don't have permission to call mount.
It makes no difference which tool you use: systemd-nspawn will just do the same syscalls under the hood as chroot and therefore faces the same problems.
There are a bunch of possible options:
1. Just don’t do stuff that requires /proc, /dev, or /sys.
   But even simple stuff like apt update / apt install requires this.
   If you just need to change config files and set some symlinks, this probably works fine.
2. Use tools which fake the chroot, like proot.
   Those tools are nowhere near stable. proot, for example, does not change permissions of files and handles /proc files poorly.
3. Run a Docker container that runs Raspberry Pi OS and use its apt to install packages into the extracted image.
I feel like all of these solutions are not good solutions, and I would avoid implementing them, since most of them are waiting to fall apart.
When it comes to giving the container more permissions:
- --privileged: AFAIK, this just doesn’t drop the Linux capabilities of the user you are running Docker as (usually that’s root for Docker and your normal user for Podman).
- --device /dev/fuse: If FUSE is available on the host OS, this is a versatile tool that allows you to take care of most mounting issues.
- --cap-add SYS_ADMIN: The one capability you actually need from --privileged. Always use this instead of --privileged, since it is the more fine-grained approach and does not give your Docker container every capability (CAP_SYS_TIME, etc.).
- --device /dev/loop0: This only helps for mounting the .img file. It does not solve the chroot problem.
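To make the trade-offs concrete, here is roughly how those variants would be passed to docker run. The image name `sdm-builder` and the loop device number are placeholders; note that FUSE mounts inside a container generally still need the SYS_ADMIN capability alongside the device.

```
# Fine-grained: just the mount capability (preferred over --privileged)
docker run --cap-add SYS_ADMIN sdm-builder ...

# FUSE-based mounting
docker run --device /dev/fuse --cap-add SYS_ADMIN sdm-builder ...

# Loop device for mounting the .img (does not help with chroot)
docker run --device /dev/loop0 --cap-add SYS_ADMIN sdm-builder ...
```

None of these are available on typical hosted CI runners, which is the root of the problem described above.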
Building a Docker container that can do stuff:
Customizing a Linux distro using Docker inside a normal CI/CD pipeline seems to be impossible,
due to the fact that you cannot mount, and therefore chroot does not work.
Of course, it is theoretically possible to do everything that chroot does without it,
but this is not very practical.
Therefore, my suggestion for CI/CD:
I am currently testing around with GitHub Actions without Docker.
They look promising, but I can tell you more in a day or two...
Docker to run sdm on macOS/Windows is a different story:
On macOS and Windows, as far as I know, you can simply run the container with --privileged,
which makes running sdm in a container relatively straightforward.
The remaining question is whether Docker on macOS and Windows supports FUSE and loop devices.
Since I don’t use Docker on those platforms, I can’t say for sure.
I will keep you posted on my updates on the CI/CD stuff.
If you tell me which use cases you'd like a Docker image for, I can write the appropriate ones for you.
@techtasie Thanks for the detailed writeup! Although I understand the reasons, it's quite disappointing that docker is so restrictive in the areas where sdm and apps with sdm-like requirements are problem children.
I wonder how many other apps there are like apt that don't like /proc, /dev, and /sys to be absent, even though they really don't need any of them. This one is a particular show-stopper for this approach, since without apt, not much point in using sdm!
Happy to implement workarounds if you get to a point where "all I need is X and this will all work". I've looked at the directory tree handling code, and may end up overhauling that anyhow for another purpose.
I've got enough projects for the near term that will keep me from doing any serious docker work, but thank you for the kind offer❣️
BTW, since you mentioned it...do you use either -r/--requirement or -c/--constraint in your venv requirements.txt files? sdm needs to copy all provided assets into the IMG during Phase 0 (when the host is accessible) so they can be used in Phase 1 (when the host is not accessible). It's easy enough to deal with a single -r/-c file (although I need to do path normalization), but I'd prefer to avoid the no-limits solution if it's a low-priority/low-usage use case.
I have never heard about the -c option tbh. I usually use -r
I have never heard about the -c option tbh. I usually use -r
And do your -r files refer to other -r files? Do you use more than one -r in a single requirements.txt?
I just use the most basic version of requirements.
Just one file with -r that contains a bunch of package names with versions.
I just use the most basic version of requirements.
Just one file with -r that contains a bunch of package names with versions.
Perfect. Thx!
Here's the emerging doc for the venv plugin. Appreciate your input on it. Currently, to create a new venv, either create or createif must be specified. Is that an onerous hassle?
venv
Create and populate a python virtual environment.
Arguments
All arguments are optional except path, which is required.
- chown — Set the owner:group of the created venv as specified by the chown value
- create — Create the venv at the provided path. Fail if it already exists.
- createif — Create the venv at the provided path if it doesn't exist. Fail if it already exists and is not a venv ($path/pyvenv.cfg doesn't exist)
- createoptions — Switches to add to the python -m venv command
- install — Comma-separated list of pip modules to install
- installoptions — Switches to add to the pip install command
- list — After the venv has been created, list the installed modules with pip list
- path — Path to the venv directory
- requirements — /path/to/requirements-file. See Requirements file format
- runphase — Specify the phase when the venv should be created. Values are phase1 or post-install
- pyver — Specify the python version. Not currently used, and set to "3"
NOTES:
- If provided, a requirements file may use -r and/or -c. Those files, however, may NOT have any further nesting.
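That nesting rule could be enforced with a small pre-flight check during Phase 0, before the files are copied into the IMG. This is a hypothetical helper (the function name and layout are mine, not part of sdm): the top-level file may use -r/-c, but the referenced files may not.

```shell
#!/bin/sh
# Hypothetical check: reject -r/-c nesting inside referenced requirements files.
check_requirements() {
    grep -E '^-[rc] ' "$1" 2>/dev/null | awk '{print $2}' |
    while read -r ref; do
        if grep -qE '^-[rc] ' "$ref" 2>/dev/null; then
            echo "ERROR: nested -r/-c in '$ref' is not supported" >&2
            exit 1   # aborts the pipeline subshell -> non-zero function status
        fi
    done
}
```

Running this against the user's requirements file would let the plugin fail fast with a clear message instead of discovering a missing nested file inside the chroot.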
Examples
- --plugin venv:"path=/home/bls/myvenv|create|list|chown=bls:users|install=urllib3,requests|requirements=/ssdy/work/myrqs.txt" — Installs modules urllib3 and requests, plus any modules listed in the requirements file.
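On the create/createif question: spelling one of them out doesn't seem onerous. My reading of the two semantics, as a sketch (this is my guess at the behavior described above, not the plugin's actual code; VPATH and MODE stand in for the path and create/createif arguments):

```shell
#!/bin/sh
# Sketch of create vs. createif semantics from the venv plugin doc.
set -e
VPATH="${VPATH:-./myvenv}"
MODE="${MODE:-createif}"

case "$MODE" in
    create)
        # create: fail if the path already exists
        [ ! -e "$VPATH" ] || { echo "ERROR: $VPATH already exists" >&2; exit 1; }
        python3 -m venv "$VPATH"
        ;;
    createif)
        # createif: reuse an existing venv, refuse a pre-existing non-venv path
        if [ -e "$VPATH" ]; then
            [ -f "$VPATH/pyvenv.cfg" ] || { echo "ERROR: $VPATH is not a venv" >&2; exit 1; }
        else
            python3 -m venv "$VPATH"
        fi
        ;;
esac
```

Under this reading, createif is idempotent across repeated customizations, while create guarantees a fresh venv, so requiring one of them keeps the intent explicit.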
@gitbls, thanks for the preview version. I will test it out in a few days.
@gitbls, thanks for the preview version. I will test it out in a few days.
Uh...Guess I wasn't clear. What I posted is the doc for your review. Code coming next week.