image-builder
[capi/qemu] Add ubuntu 22.04 support for qemu
What this PR does / why we need it: Ubuntu 22.04 replaced preseed with autoinstall. Due to this change, we have to migrate the current preseed config to an autoinstall config and provide it to the QEMU VM via cloud-init.
Which issue(s) this PR fixes (optional, in fixes #<issue number> format):
Additional context. Helpful links:
- autoinstall reference manual
- autoinstall quickstart
- disk configuration details in the curtin storage documentation
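For context, autoinstall replaces the preseed format with a cloud-init style YAML document served to the installer. A minimal sketch of what such a user-data file can look like (the identity values, password hash, and late-commands below are illustrative placeholders, not the actual contents of this PR's config):

```yaml
#cloud-config
autoinstall:
  version: 1
  # Locale and keyboard replace the old preseed debian-installer/locale keys.
  locale: en_US.UTF-8
  keyboard:
    layout: us
  identity:
    hostname: ubuntu        # placeholder hostname
    username: builder       # placeholder; the real config uses the packer ssh user
    # crypted hash generated with e.g. `mkpasswd -m sha-512` (truncated example)
    password: "$6$...$..."
  ssh:
    install-server: true    # required so packer can connect after the install
  late-commands:
    # Runs inside the installed system, similar to the preseed late_command hook.
    - curtin in-target -- apt-get clean
```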
Dennis Lerch [email protected], Mercedes-Benz Tech Innovation GmbH, Provider Information
The committers listed above are authorized under a signed CLA.
- :white_check_mark: login: Meecr0b / name: Dennis Lerch (23a796df2783329c7358ce4fd5a801c91eaa743d)
Hi @Meecr0b. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test
on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test
label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Not sure what happened there; looks like the install took a long time. I can look more closely if it continues. /retest
Would be great to add the comments from e.g. https://github.com/kubernetes-sigs/image-builder/blob/70dbabcafc580da0bd98eb81fe8eefdd574d4e22/images/capi/packer/raw/linux/ubuntu/http/base/preseed.cfg#L111-L118 to the new setup file https://github.com/kubernetes-sigs/image-builder/blob/42c89bb3bbfff6b196b41f8953e73b315264ab96/images/capi/packer/ova/linux/ubuntu/http/22.04/user-data#L49 to keep the reason for all the specific settings.
The code snippets are just one example.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: Meecr0b
Once this PR has been reviewed and has the lgtm label, please assign vincepri for approval by writing /assign @vincepri
in a comment. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve
in a comment
Approvers can cancel approval by writing /approve cancel
in a comment
added some comments
/retest
/retest
Test probably fails due to https://github.com/kubernetes-sigs/image-builder/pull/1001
/retest
@Meecr0b: The following test failed, say /retest
to rerun all failed tests or /retest-required
to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-azure-sigs | 9803595838c1da95d7b736537bfd3d894f6edd0a | link | false | /test pull-azure-sigs |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Seems like the CI is working again; retrying the tests :) /retest
CI is flaky, related to https://github.com/kubernetes-sigs/image-builder/pull/1021
I tried building this PR and got the following error:
╰─ λ make build-qemu-ubuntu-2204
hack/ensure-ansible.sh
Starting galaxy collection install process
Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`.
hack/ensure-packer.sh
hack/ensure-goss.sh
Right version of binary present
packer build -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/kubernetes.json" -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/cni.json" -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/containerd.json" -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/ansible-args.json" -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/goss-args.json" -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/common.json" -var-file="/home/cwr/work/***/image-builder/images/capi/packer/config/additional_components.json" -color=true -var-file="/home/cwr/work/***/image-builder/images/capi/packer/qemu/qemu-ubuntu-2204.json" packer/qemu/packer.json
qemu: output will be in this color.
==> qemu: Retrieving ISO
==> qemu: Trying https://releases.ubuntu.com/22.04/ubuntu-22.04.1-live-server-amd64.iso
==> qemu: Trying https://releases.ubuntu.com/22.04/ubuntu-22.04.1-live-server-amd64.iso?checksum=sha256%3A10f19c5b2b8d6db711582e0e27f5116296c34fe4b313ba45f9b201a5007056cb
==> qemu: https://releases.ubuntu.com/22.04/ubuntu-22.04.1-live-server-amd64.iso?checksum=sha256%3A10f19c5b2b8d6db711582e0e27f5116296c34fe4b313ba45f9b201a5007056cb => /home/cwr/.cache/packer/281aa9855752339063385b35198e73db74cd61ba.iso
==> qemu: Starting HTTP server on port 8195
==> qemu: Found port for communicator (SSH, WinRM, etc): 2587.
==> qemu: Looking for available port between 5900 and 6000 on 127.0.0.1
==> qemu: Starting VM, booting from CD-ROM
qemu: The VM will be run headless, without a GUI. If you want to
qemu: view the screen of the VM, connect via VNC without a password to
qemu: vnc://127.0.0.1:5950
==> qemu: Waiting 10s for boot...
==> qemu: Connecting to VM via VNC (127.0.0.1:5950)
==> qemu: Typing the boot command over VNC...
qemu: Not using a NetBridge -- skipping StepWaitGuestAddress
==> qemu: Using SSH communicator to connect: 127.0.0.1
==> qemu: Waiting for SSH to become available...
==> qemu: Connected to SSH!
==> qemu: Provisioning with shell script: ./packer/files/flatcar/scripts/bootstrap-flatcar.sh
==> qemu: Provisioning with Ansible...
qemu: Setting up proxy adapter for Ansible....
==> qemu: Executing Ansible: ansible-playbook -e packer_build_name="qemu" -e packer_*****_type=qemu -e packer_http_addr=10.0.2.2:8195 --ssh-extra-args '-o IdentitiesOnly=yes' --extra-vars containerd_url=https://github.com/containerd/containerd/releases/download/v1.6.2/cri-containerd-cni-1.6.2-linux-amd64.tar.gz containerd_sha256=91f1087d556ecfb1f148743c8ee78213cd19e07c22787dae07fe6b9314bec121 pause_image=k8s.gcr.io/pause:3.6 containerd_additional_settings= containerd_cri_socket=/var/run/containerd/containerd.sock containerd_version=1.6.2 crictl_url=https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz crictl_sha256=https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz.sha256 crictl_source_type=pkg custom_role_names="" firstboot_custom_roles_pre="" firstboot_custom_roles_post="" node_custom_roles_pre="" node_custom_roles_post="" disable_public_repos=false extra_debs="" extra_repos="" extra_rpms="" http_proxy= https_proxy= kubeadm_template=etc/kubeadm.yml kubernetes_cni_http_source=https://github.com/containernetworking/plugins/releases/download kubernetes_cni_http_checksum=sha256:https://storage.googleapis.com/k8s-artifacts-cni/release/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz.sha256 kubernetes_http_source=https://dl.k8s.io/release kubernetes_container_registry=registry.k8s.io kubernetes_rpm_repo=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 kubernetes_rpm_gpg_key="https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg" kubernetes_rpm_gpg_check=True kubernetes_deb_repo="https://apt.kubernetes.io/ kubernetes-xenial" kubernetes_deb_gpg_key=https://packages.cloud.google.com/apt/doc/apt-key.gpg kubernetes_cni_deb_version=1.1.1-00 kubernetes_cni_rpm_version=1.1.1-0 kubernetes_cni_semver=v1.1.1 kubernetes_cni_source_type=pkg kubernetes_semver=v1.23.10 kubernetes_source_type=pkg 
kubernetes_load_additional_imgs=false kubernetes_deb_version=1.23.10-00 kubernetes_rpm_version=1.23.10-0 no_proxy= pip_conf_file= python_path= redhat_epel_rpm=https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm epel_rpm_gpg_key= reenable_public_repos=true remove_extra_repos=false systemd_prefix=/usr/lib/systemd sysusr_prefix=/usr sysusrlocal_prefix=/usr/local load_additional_components=false additional_registry_images=false additional_registry_images_list= additional_url_images=false additional_url_images_list= additional_executables=false additional_executables_list= additional_executables_destination_path= build_target=virt amazon_ssm_agent_rpm= --extra-vars ansible_python_interpreter=/usr/bin/python3 --extra-vars -e ansible_ssh_private_key_file=/tmp/ansible-key1514235873 -i /tmp/packer-provisioner-ansible1823851793 /home/cwr/work/***/image-*****/images/capi/ansible/firstboot.yml
qemu:
qemu: PLAY [all] *********************************************************************
==> qemu: failed to handshake
qemu:
qemu: TASK [Gathering Facts] *********************************************************
qemu: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Unable to negotiate with 127.0.0.1 port 38327: no matching host key type found. Their offer: ssh-rsa", "unreachable": true}
qemu:
qemu: PLAY RECAP *********************************************************************
qemu: default : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
qemu:
==> qemu: Provisioning step had errors: Running the cleanup provisioner, if present...
==> qemu: Deleting output directory...
Build 'qemu' errored after 10 minutes 39 seconds: Error executing Ansible: Non-zero exit status: exit status 4
==> Wait completed after 10 minutes 39 seconds
==> Some builds didn't complete successfully and had errors:
--> qemu: Error executing Ansible: Non-zero exit status: exit status 4
==> Builds finished but no artifacts were created.
make: *** [Makefile:447: build-qemu-ubuntu-2204] Error 1
Is this expected at the current time? Or is there something I should have done differently, or something I can maybe help with?
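For what it's worth, the "no matching host key type found. Their offer: ssh-rsa" failure is the classic symptom of OpenSSH 8.8+ clients rejecting the SHA-1-based ssh-rsa algorithm by default, so it may be a local client-side issue rather than this PR. A possible workaround (a sketch only; whether it applies depends on how Packer/Ansible invoke ssh on your machine) is to re-enable ssh-rsa for the local build VM in `~/.ssh/config`:

```
Host 127.0.0.1
    # Re-enable the legacy ssh-rsa algorithms that OpenSSH >= 8.8
    # rejects by default; scoped to the loopback build VM only.
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedAlgorithms +ssh-rsa
```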
This issue is mostly related to #1014
Seems https://github.com/kubernetes-sigs/image-builder/pull/1035 is merged now.
/lgtm
/lgtm
@Meecr0b Looks like the support for vSphere is not fully complete in this PR. Do you want to remove the vSphere specific changes from this PR so that we can get it merged?
Hi @kkeshavamurthy, sorry, I don't get it. Which vSphere specific changes do you mean?
Looks like the rebase resolved my suggestions.
/retest
/lgtm
/retest
Error enqueuing build
Edit: oh, I guess the GH actions failed. I don't seem to have permissions to re-run those.
/retest
/test help
@kkeshavamurthy: The specified target(s) for /test
were not found.
The following commands are available to trigger required jobs:
- /test json-sort-check
- /test pull-azure-vhds
- /test pull-goss-populate
- /test pull-ova-all
- /test pull-packer-validate
The following commands are available to trigger optional jobs:
- /test pull-azure-sigs
- /test pull-container-image-build
- /test pull-image-builder-gcp-all
Use /test all to run the following jobs that were automatically triggered:
- json-sort-check
- pull-azure-sigs
- pull-azure-vhds
- pull-container-image-build
- pull-ova-all
- pull-packer-validate
In response to this:
/test help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I have no clue how to re-run the checks. @CecileRobertMichon @jsturtevant any ideas?
@Meecr0b would you mind rebasing and pushing this PR, or making some trivial change and squashing it?
We ran into a (presumably transient) error launching the GitHub action code checks, and don't see a way to re-run those manually. Sorry for the hassle!
/lgtm /approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: kkeshavamurthy, Meecr0b
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~images/capi/OWNERS~~ [kkeshavamurthy]
Approvers can indicate their approval by writing /approve
in a comment
Approvers can cancel approval by writing /approve cancel
in a comment