[docs]: Development documentation: How to test code changes in my local cloud-init repo when spinning up a new VM?
Documentation request
Hi, I have started working on a bug (my first) in cloud-init, but I want to understand how I can manually check that my code is working.
I have gone through the documentation and couldn't find anything about local deployment. The closest I found was the tutorial on how to test pre-release cloud-init.
My guess is to build a Python wheel from the source code and then install it on the VM.
The other part of my question is how to test my local package on the first boot of the VM (which is the intended use case of cloud-init) and not via `sudo cloud-init clean --logs --reboot` (which is more of a workaround). Is it some virt-customize magic on the cloud image before running the VM?
If someone could just share their script in this issue without necessarily working on updating the documentation, that would be great as well to unblock me. Thanks!
@mostafaCamel , agreed that this kind of information would be helpful in the documentation.
We don't actually use the wheel at all. Cloud-init is packaged slightly differently per distro, so we try to stick to distro-level build scripts when needed.
The upstream devs primarily use LXD for "quick and dirty" checking. LXD makes it easy to mount the cloud-init source into the container. The process looks something like:
lxc init ubuntu-daily:plucky mytest
lxc config device add mytest host-cloud-init disk source=$REPO_BASE/cloudinit path=/usr/lib/python3/dist-packages/cloudinit
lxc config set mytest cloud-init.user-data="$(cat ./user-data)"
lxc start mytest
# wait for start
lxc shell mytest
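Once inside the container, a quick way to confirm that the mounted source is the code actually running is to check the reported version and scan the log. This is a sketch using standard cloud-init CLI commands; the grep pattern is just an illustrative sanity check, not an exhaustive one:

```shell
# Inside the container (after `lxc shell mytest`):
cloud-init status --wait --long    # block until boot finishes, then show the result
cloud-init --version               # should reflect the version in your source tree
# Scan the log for tracebacks or warnings introduced by your change:
grep -iE "traceback|warning" /var/log/cloud-init.log || echo "log looks clean"
```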
Cloud-init's LXD tutorial may be helpful here.
If that's not good enough (e.g., changes are related to systemd services), we'll usually use the packages/bddeb script to build a debian package (NOT FOR PRODUCTION USAGE) and deploy that where it needs to go:
DEB_BUILD_OPTIONS=nocheck packages/bddeb -d
# SCP the generated cloud-init_all.deb to your test instance, then on the test instance run:
apt install ./cloud-init_all.deb
cloud-init clean --logs --configs all --reboot
# Check artifacts upon reboot
There are scripts for other distros in the packages directory, but YMMV. If those don't work, you'll need to find the build scripts used by your target distro.
Thank you!
Adding some macOS-specific steps to run LXD. Adapted from this page: https://stevemcgrath.io/post/2019-11-27-lxd-based_labs
Notes:
- LXD cannot run natively on macOS, so it needs to run inside a VM.
- When mounting into the LXD container (where cloud-init with your changes will run), the source directory needs to be on the LXD server (the VM running LXD). You can mount it into the LXD server and then mount it from there into the LXD container.
- VMs have issues mounting from certain directories on macOS (Google the "Full Disk Access" setting to appreciate this headache), so I usually just `ditto` my changes into `/tmp/cloudinit` on my macOS host and mount this copy, since `/tmp` is not covered by the "Full Disk Access" setting.
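That copy step from the last note can be sketched as follows. `$REPO_BASE` is an assumption (point it at your own checkout), and `ditto` is macOS-specific:

```shell
# Copy the package source to /tmp, which is outside macOS "Full Disk Access" protection
REPO_BASE="$HOME/src/cloud-init"    # assumption: adjust to your checkout location
ditto "$REPO_BASE/cloudinit" /tmp/cloudinit
ls /tmp/cloudinit                   # sanity check: the package contents are there
```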
Steps on your local macOS:
brew install --cask multipass
brew install lxc
cat > /tmp/lxd_cloud_init.yaml <<EOF
#cloud-config
# These things all need to happen in this order. Sadly this seems to be the
# only way to get all of this to work correctly.
runcmd:
- [sudo, apt, -y, remove, lxd, lxd-client]
- [sudo, snap, install, lxd]
- [sudo, lxd, init, --auto, --network-address, "[::]", --trust-password, "lxd-island", --storage-backend=zfs, --storage-create-loop=40]
- [usermod, -aG, lxd, multipass]
EOF
multipass launch -n lxd --cloud-init /tmp/lxd_cloud_init.yaml
lxc remote add labbox 192.168.64.30 --accept-certificate --password "lxd-island" # For your case, get the ip via `multipass info lxd`
lxc remote switch labbox
multipass mount /tmp/cloudinit lxd:/tmp/cloudinitsource # if you plan to restart the LXD server, /tmp/cloudinitsource is probably not a good idea and you should choose a directory not under /tmp
Then you can use James' script above, with the source being `source=/tmp/cloudinitsource`.
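Putting the two halves together, the end-to-end flow against the LXD remote looks roughly like this. The container name `mytest` and the `./user-data` file are assumptions carried over from the earlier comment:

```shell
# On macOS, against the `labbox` remote configured above
lxc init ubuntu-daily:plucky mytest
lxc config device add mytest host-cloud-init disk \
    source=/tmp/cloudinitsource path=/usr/lib/python3/dist-packages/cloudinit
lxc config set mytest cloud-init.user-data="$(cat ./user-data)"
lxc start mytest
# wait for start
lxc shell mytest
```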
Running integration tests on macOS (without using a remote cloud)
There are probably better ways to do this on macOS (and better still: do it on an OS with a Linux kernel), but this info dump may be useful for future searchers.
Notes
- We can't run the integration tests natively on macOS (some commands in the integration setup seem to rely on Linux system calls/commands).
- So I need to spin up a multipass VM and then run the integration tests inside it.
- Nested virtualization works on Linux-kernel OSes, though you may need to increase the disk and memory of the "lower-level" VM. Unfortunately, nested virtualization does not work on macOS, which is what I am using (apparently there is new support for nested virtualization in macOS 15, but it seems to be finicky).
- So in the multipass VM I need to run the tests in a local lxd_container and not in a nested VM.
Steps
- On your local macOS:
- Spin up a multipass VM (see the comment above).
- I needed to launch the VM with 10GB instead of the default 5GB, as the tests failed with "out of space" when I tried the 5GB default.
- `ditto ${BASE_REPO} /tmp/cloud-init ; git -C /tmp/cloud-init clean -xdf` (moving to /tmp to avoid the "Full Disk Access" macOS shenanigans, and doing `git clean` to remove all the local env stuff).
- On the multipass VM:
# On the multipass VM
mkdir ~/.config ; echo "[lxd]" > ~/.config/pycloudlib.toml # to be able to use lxd in the tests
cp -pr /tmp/cloud-init /tmp/cloud-init2
# In the mounted /tmp/cloud-init, tox seems to be confused by the Python environment
# of the host macOS (the tox configuration probably needs base_python). So we just
# copy into a VM-specific directory, then empty out the mounted directory to save space.
rm -rf /tmp/cloud-init/{*,.*}
cd /tmp/cloud-init2
ssh-keygen #to generate public and private keys in the VM, because they are needed for the tests
sudo apt install tox
CLOUD_INIT_PLATFORM='lxd_container' CLOUD_INIT_OS_IMAGE='noble' tox -e integration-tests -- tests/integration_tests/cmd/test_status.py::test_status_json_errors
# On macOS we need CLOUD_INIT_PLATFORM='lxd_container' because nested virtualization most probably won't work, so we just run a container on top of the VM
- Then it's the "usual" steps in https://cloudinit.readthedocs.io/en/latest/development/integration_tests.html
Modifying cloud-config before manual re-runs of cloud-init in the instance
Sometimes while developing, you need to run a module against different configs to verify the behavior. You can edit the cloud-config in the instance at `/var/lib/cloud/instance/cloud-config.txt` and then rerun the modules (example: `sudo cloud-init single --name cc_rh_subscription --frequency always`).
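As a sketch of that workflow using a scratch file (the `/tmp/demo-cloud-config.txt` path is purely illustrative; on a real instance you would edit `/var/lib/cloud/instance/cloud-config.txt` in place):

```shell
# Illustrative only: tweak a scratch cloud-config the way you would the instance copy
printf '#cloud-config\nrh_subscription:\n  username: olduser\n' > /tmp/demo-cloud-config.txt
sed -i.bak 's/olduser/newuser/' /tmp/demo-cloud-config.txt   # change the value under test
grep 'newuser' /tmp/demo-cloud-config.txt                    # confirm the edit took
# On the instance you would then rerun just the one module:
#   sudo cloud-init single --name cc_rh_subscription --frequency always
```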
Using RHEL during local development
There do not seem to be RHEL images for LXD containers, so the only practical way to develop cloud-init locally for RHEL seems to be QEMU.
- Download the qcow2 file (also called the KVM image) with the appropriate RHEL version and VM processor architecture from Red Hat's downloads page. You can also find other artifacts on their developers page.
- You will need a Red Hat subscription. Depending on your situation, you may be able to get a free Red Hat Developer subscription.
- In `user-data`, make sure to include the `rh_subscription` module to activate your subscription on the VM and be able to use dnf. If you do not do it via cloud-init, you will need to do it via the `subscription-manager` CLI on the VM after launch.
- In a separate terminal tab on your local host:
rm -rf /tmp/cloud-init ; mkdir /tmp/cloud-init && touch /tmp/cloud-init/{user,meta,vendor}-data
cat > /tmp/cloud-init/user-data <<EOF
#cloud-config
password: 'somepasswordyouset'
chpasswd:
expire: False
rh_subscription:
username: 'yourredhatusername'
password: 'yourredhatpassword'
packages:
- vim
EOF
cd /tmp/cloud-init
python3 -m http.server --directory .
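Before booting the VM, it is worth sanity-checking the seed layout. This is a sketch using a scratch directory (`/tmp/nocloud-demo` is an illustrative path, not the real seed above), since the one thing that silently breaks NoCloud user-data is a malformed first line:

```shell
# Minimal NoCloud seed sanity check in a scratch dir, so the real seed is untouched
mkdir -p /tmp/nocloud-demo
printf '#cloud-config\npassword: demo\n' > /tmp/nocloud-demo/user-data
: > /tmp/nocloud-demo/meta-data   # an empty meta-data file is enough for NoCloud
# cloud-init only treats the file as cloud-config if line 1 is exactly "#cloud-config"
head -n1 /tmp/nocloud-demo/user-data | grep -qx '#cloud-config' && echo "seed looks ok"
```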
- Make sure you are using the Red Hat username, not the email registered to the account.
- The vim install is just to have something to play around with. For some reason, the install fails during cloud-init and I had to manually run `sudo dnf -y install vim` in the instance.
- RHEL images usually set the default user to `cloud-user` (set in `/etc/cloud/cloud.cfg` in the image), but in user-data we set its password (to be able to have password access).
- From the directory containing the qcow2 file, launch the QEMU VM from your local host:
qemu-system-x86_64 \
-cpu max \
-m 1G \
-drive if=virtio,file=rhel-10.0-x86_64-kvm.qcow2 \
-nic user,model=virtio-net-pci,hostfwd=tcp:127.0.0.1:2222-:22 \
-snapshot \
-nographic \
-smbios type=1,serial=ds='nocloud;s=http://10.0.2.2:8000/'
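Once the VM is up, you can reach it through the port forwarded by the `hostfwd` option above. This is a sketch assuming the image's default `cloud-user` account and the password set via `chpasswd` in user-data:

```shell
# SSH into the QEMU guest via the hostfwd rule (host port 2222 -> guest port 22)
ssh -p 2222 cloud-user@127.0.0.1
# Inside the guest, inspect what cloud-init did on first boot:
#   cloud-init status --long
#   sudo less /var/log/cloud-init.log
```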
- Note: I have an Apple Silicon (aarch64) Mac, but I still used qemu-system-x86_64 (with the x86_64 qcow2) instead of qemu-system-aarch64 (with the aarch64 qcow2), which comes with a performance penalty due to the emulation. I tried to get qemu-system-aarch64 to work, but it's just a pain and I never got it working (EFI sadness): examples here and here.
- After the instance launches, you can play around with the cloud-init scripts on the VM at `/usr/lib/python3.12/site-packages/cloudinit` (instead of the usual `/usr/lib/python3/dist-packages/cloudinit` of the Debian distros) and rerun cloud-init modules manually. To my knowledge, there is no way to mount your cloud-init changes before the first boot of the QEMU VM.
- Unrelated tip to avoid going crazy while using vim on the RHEL VM: make sure to `:set paste` to be able to paste without autoindentation (which sucks).