Instructions to use sdm inside a Docker container
Thank you🙏Will have a look!
If you are new to docker, https://docs.docker.com/engine/install/debian/ has the instructions to get the 'hello-world' example running.
I must be dumb today. I set up a new system, installed docker, set up the working directory, and ran the file:
```shell
sudo docker run --privileged --network host --rm \
    -v "$(pwd)/myscript.sh:/myscript.sh" \
    -v "$(pwd)/working_dir:/root/sdm_working_dir" \
    --device=/dev/loop-control \
    --device=/dev/loop0 \
    debian:latest bash /myscript.sh
```
which gave the following output:
```
pit~# ./dodocker
Unable to find image 'debian:latest' locally
latest: Pulling from library/debian
82312fccb35f: Pull complete
Digest: sha256:17122fe3d66916e55c0cbd5bbf54bb3f87b3582f4d86a755a0fd3498d360f91b
Status: Downloaded newer image for debian:latest
/myscript.sh: /myscript.sh: Is a directory
```
My questions:
- Why is myscript.sh a directory?
- If it's supposed to be a file, which it looks like it should be, what should be in it?
- What does `-v "$(pwd)/working_dir:/root/sdm_working_dir"` do? Looks like it creates a bind mount?
- According to `man docker run` the `--device` switch args are `onhost:incontainer`. The example you provided only has a single value. What does it do? (Can't try it yet due to what is myscript.sh)
Thx!
> - Why is myscript.sh a directory?
> - If it's supposed to be a file, which it looks like it should be, what should be in it?

Indeed, `myscript.sh` should be a file, not a directory. See lines 13 to 38 of the instructions above for what could/should be in that file.
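For reference, a minimal sketch of what `myscript.sh` could contain, assuming the goal is to install sdm's dependencies, clone sdm from GitHub, and run it against an image provided via the bind mount (the package list, paths, and image name are illustrative assumptions, not the exact lines referenced above):

```shell
#!/bin/bash
# Sketch of myscript.sh -- assumes the to-be-customized image already
# sits in /root/sdm_working_dir via the bind mount from the host.
set -e
export DEBIAN_FRONTEND=noninteractive

# Install the tools sdm needs inside the container (assumed package set)
apt-get update
apt-get --yes --no-install-recommends install \
    git binfmt-support qemu-user-static gdisk systemd-container uuid

# Fetch sdm and run it against the image in the bind-mounted working dir
git clone https://github.com/gitbls/sdm /root/sdm
cd /root/sdm_working_dir
/root/sdm/sdm --customize my-raspios.img   # image name is an example
```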
> - What does `-v "$(pwd)/working_dir:/root/sdm_working_dir"` do? Looks like it creates a bind mount?
Yes. It takes a directory from the host and makes its content (here: the to-be-customized image) available to the Docker container. Also, when the container writes into this directory (here: the burned image file), the content becomes available on the host.
$(pwd)/working_dir is the path on the host. /root/sdm_working_dir is the path inside of the container.
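As an illustration (a hypothetical session, assuming Docker is installed), you can watch a file written inside the container appear on the host side of the bind mount:

```shell
# Create a host directory and bind-mount it into a throwaway container
mkdir -p working_dir
sudo docker run --rm -v "$(pwd)/working_dir:/root/sdm_working_dir" \
    debian:latest bash -c 'echo hello > /root/sdm_working_dir/from_container.txt'

# The file written at /root/sdm_working_dir inside the container
# is now visible on the host at $(pwd)/working_dir
cat working_dir/from_container.txt
```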
> - According to `man docker run` the `--device` switch args are `onhost:incontainer`. The example you provided only has a single value. What does it do? (Can't try it yet due to what is myscript.sh)
It makes a device from the host (here: the loop device and its control) available to the container. So when the container calls losetup, actually a loop device on the host is created.
`--device foo` is just short for `--device foo:foo`.
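So the two `--device` lines in the example are equivalent to the fully spelled-out `onhost:incontainer` form. A hypothetical check from inside the container (assuming Docker is installed):

```shell
# --device=/dev/loop0 is shorthand for --device=/dev/loop0:/dev/loop0
sudo docker run --rm \
    --device=/dev/loop-control:/dev/loop-control \
    --device=/dev/loop0:/dev/loop0 \
    debian:latest bash -c 'ls -l /dev/loop0 && losetup -f'
# losetup -f asks the (host) loop-control device for the next free loop device,
# which is why /dev/loop-control has to be passed through as well
```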
Indeed, when everything is set up properly (which I would have done first go if I had read your notes a bit more carefully 🫤) it works.
So far, the only thing I see that I don't care for is that each time I do the docker run it downloads all the apt packages and reclones sdm. Is it possible to configure the container separately and only once?
Thx!
> Indeed, when everything is set up properly (which I would have done first go if I had read your notes a bit more carefully 🫤) it works.
Let me know where the language or format could be clearer.
> So far, the only thing I see that I don't care for is that each time I do the `docker run` it downloads all the apt packages and reclones sdm. Is it possible to configure the container separately and only once?
Yes, that is possible. However, (a) the majority of the runtime is spent in qemu doing an `apt dist-upgrade` inside of the new image, i.e. container creation and setup take only a fraction (for me <10s) of the total runtime (>10min); (b) if the container is persisted, I would have to worry about when and how to manage that state; and (c) one would likely create a `Dockerfile` to make it nice, which would make the instructions even longer.
Therefore I didn't bother. If you care strongly about this, I'll add the instructions, but I think it is not worth it.
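For completeness, a rough sketch of what such a `Dockerfile`-based setup could look like (the package list, repo URL, and paths are assumptions carried over from the instructions above). It bakes the apt packages and the sdm clone into a reusable image so only the customization runs each time:

```shell
# Build a reusable image once; packages and the sdm clone happen at build time
cat > Dockerfile <<'EOF'
FROM debian:latest
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get --yes --no-install-recommends install \
        git binfmt-support qemu-user-static gdisk systemd-container uuid && \
    git clone https://github.com/gitbls/sdm /root/sdm
EOF
sudo docker build -t sdm-runner .

# Each subsequent run skips the downloads entirely
sudo docker run --privileged --network host --rm \
    -v "$(pwd)/working_dir:/root/sdm_working_dir" \
    --device=/dev/loop-control \
    --device=/dev/loop0 \
    sdm-runner bash /root/sdm/sdm --help   # replace --help with the real customize invocation
```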
> Yes, that is possible. However (a) the majority of the runtime is spent in qemu doing an `apt dist-upgrade` inside of the new image, i.e. the container creation and setup takes only a fraction (for me <10s) of the total runtime (>10min) and (b) if the container is persisted, I would have to worry about when and how to manage that state and (c) one would likely create a `Dockerfile` to make it nice, which would make the instructions even longer. Therefore I didn't bother. If you care strongly about this, I'll add the instructions, but I think it is not worth it.
If anyone tries to use this at the far end of a slow internet connection, they will be unhappy. I would like to see how to do it statefully as well, as an alternative. Thx!
To do this in a stateful manner, it is possible to split the run into three commands: `container create`, `container start`, and `container rm`. The first and last need to be run once; the middle one can be repeated. So the instructions would become:
```shell
sudo docker container create --name sdm --privileged --network host \
    -v "$(pwd)/myscript.sh:/myscript.sh" \
    -v "$(pwd)/working_dir:/root/sdm_working_dir" \
    --device=/dev/loop-control \
    --device=/dev/loop0 \
    debian:latest bash /myscript.sh

sudo docker container start -a sdm    # repeat as desired

sudo docker container rm sdm
```
However, I'm not convinced this is worth it. The user might forget to run the `container rm` command and waste space. Also, I don't buy the "save bandwidth on a slow connection" argument, because the user probably just downloaded the ~500MB Raspberry Pi image, just downloaded the ~100MB debian Docker image, and is about to execute an `apt dist-upgrade` inside qemu. So yes, one can save the apt upgrade for the debian image, but it won't save much in the larger context. The bandwidth for cloning sdm from GitHub is negligible compared to all the other steps.
Thx. Will have another look at this when I'm free in about 2 weeks.
I've run into an issue in working through this. Pretty sure I didn't see this before, but now I am, and I'm stuck.
`dodocker`:

```shell
sudo docker run --privileged --network host --rm \
    -v "$(pwd)/myscript.sh:/myscript.sh" \
    -v "$(pwd)/work:/root/work" \
    -v "/mnt/sdm:/root/mnt" \
    --device=/dev/loop-control \
    --device=/dev/loop0 \
    debian:latest bash /myscript.sh
```
`myscript.sh`:

```shell
#!/bin/bash
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get --yes --no-install-recommends install binfmt-support systemd gdisk qemu-user-static systemd-container uuid
systemd-nspawn -D /root/mnt bash
exit
```
I mounted a customized IMG on /mnt/sdm elsewhere before running dodocker.
Here's the abbreviated output:
```
bls@pit~> ./dodocker
<skipped apt update and install output>
Spawning container mnt on /root/mnt.
Press ^] three times within 1s to kill container.
Failed to open system bus: No such file or directory
Attempted to remove disk file system under "/run/systemd/nspawn/propagate/mnt", and we can't allow that.
```
Primary issue is the 'failed to open system bus'.
The 'attempted to remove disk' error is probably due to the hokey way I have it mounted, so not an issue (yet 🫤).
Thx for your assistance!
Looks to me like you're trying to use systemd-nspawn inside of the Docker container. I'm not sure that is supported. Qemu does work, though.
I narrowed it down to this test because using sdm per your instructions was giving the error I noted. But you didn't specify using `--chroot`, so how did sdm work for you?
This is working for me on Windows:
```shell
docker run --privileged --network host --rm -v "%cd%/myscript.sh:/myscript.sh" -v "%cd%/working_dir:/root/sdm_working_dir" --device=/dev/loop-control --device=/dev/loop0 debian:latest bash /myscript.sh
```
@omya3qno Is it possible to use this approach when building a Docker image? I have not found how I can attach a loop device while building.
For my CI-execution scenario, this approach couldn’t work because it assumes mapping loop0 is sufficient for sdm to operate. I ended up having to map the entire /dev to allow sdm the 'freedom' to map sdm and boot/firmware. But it works. Therefore, various scenarios of running sdm as part of a CI pipeline have been proven possible.
> For my CI-execution scenario, this approach couldn’t work because it assumes mapping loop0 is sufficient for sdm to operate. I ended up having to map the entire /dev to allow sdm the 'freedom' to map sdm and boot/firmware. But it works. Therefore, various scenarios of running sdm as part of a CI pipeline have been proven possible.
I agree with you.
I think the only way to do it without needing /dev would be to extract the image content, run sdm over the content using chroot, and then repackage an .img file. Due to the way mounting works on Linux, it will not be possible to mount the .img file from within a container, at least not without the `--device` flag.
I know it is a bit ugly to unpack an .img file and repack it, but I guess it is the only option.
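A rough sketch of that unpack/repack route for the ext4 root partition, assuming e2fsprogs is available. The image name, partition offset, and size below are illustrative assumptions; the actual values have to be read from `fdisk -l` for the image at hand:

```shell
IMG=custom.img      # hypothetical image name

# 1. Find the root partition's start sector (offset below is illustrative)
fdisk -l "$IMG"

# 2. Carve the root partition out of the image with dd
dd if="$IMG" of=root.ext4 bs=512 skip=532480

# 3. Extract the filesystem content without mounting it:
#    debugfs from e2fsprogs needs neither root nor loop devices
debugfs -R "rdump / rootfs" root.ext4

# 4. ...customize rootfs/ here, e.g. via chroot + qemu-user-static...

# 5. Repack: mke2fs -d populates a fresh ext4 image from a directory,
#    again without root or loop devices
truncate -s 2G root-new.ext4
mkfs.ext4 -F -d rootfs root-new.ext4

# 6. Write the rebuilt partition back into the image at the same offset
dd if=root-new.ext4 of="$IMG" bs=512 seek=532480 conv=notrunc
```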