Build target proxmox-flatcar fails
Environment
- Make target: make build-proxmox-flatcar
- Run using container image? (Y/N): N
- Environment vars:
- Vars file:
What steps did you take and what happened?
- Pull repository
- Run make deps-proxmox
- Set env vars per docs
- Run make build-proxmox-flatcar
- Wait for the VM to start; SSH never becomes available
- Check VM console
- Error: sed: can't read /tmp/ignition.json: No such file or directory
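Concretely, the sequence was roughly this (a sketch, not verbatim; the Proxmox connection variables come from the docs and the names/values below are illustrative):

```sh
# Clone image-builder and enter the CAPI images directory
git clone https://github.com/kubernetes-sigs/image-builder.git
cd image-builder/images/capi

# Install the Packer/Ansible dependencies for the Proxmox target
make deps-proxmox

# Proxmox connection settings per the docs (illustrative values)
export PROXMOX_URL="https://pve.example.local:8006/api2/json"
export PROXMOX_USERNAME="capi@pve!packer"
export PROXMOX_TOKEN="<token>"
export PROXMOX_NODE="pve"

# Kick off the Flatcar build; the VM boots, but SSH never comes up
make build-proxmox-flatcar
```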
What did you expect to happen?
The VM should reboot with SSH available so the template configuration can proceed. A Proxmox Flatcar Kubernetes template should exist at the end.
Relevant log output
Anything else you would like to add?
I can include the rest of the local logs, but realistically they won't add anything. Nothing was set apart from the env vars in the docs.
/kind bug
Just realized that I should probably be on a release tag... Switching to v0.1.40, I don't even see a Flatcar target for Proxmox, so maybe it's not ready for prime time like the other Flatcar targets?
Yeah, looks like we haven't done a release yet since it was introduced here https://github.com/kubernetes-sigs/image-builder/pull/1589
We're completely lacking testing for Proxmox, so we rely on users like you to help us with this 😅 Sorry about that!
Looking at the error, my guess is that the boot command is not waiting for the curl command to complete before trying to modify the file.
Could you please try setting the following values and report back if it changes things:
{
"boot_command_prefix": "sudo systemctl mask sshd.socket --now<enter><wait>curl -sLo /tmp/ignition.json https://raw.githubusercontent.com/kubernetes-sigs/image-builder/21f6a77a9a46a217949579d52f7b671568521678/images/capi/packer/files/flatcar/ignition/bootstrap-pass-auth.json && sed -i \"s|BUILDERPASSWORDHASH|$(mkpasswd -5 {{user `ssh_password`}})|\" /tmp/ignition.json && sudo flatcar-install -d /dev/sda -C {{user `channel_name`}} -V {{user `release_version`}} -i /tmp/ignition.json && sudo reboot<enter>",
"boot_command_suffix": ""
}
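For context: chaining everything into a single `&&` pipeline forces each step to wait for the one before it to succeed. A rough sketch of the failure mode I suspect versus the chained form (illustrative only, not the actual default boot command):

```sh
# Suspected racy shape (illustrative): each <enter> submits a command
# without waiting for the previous one, so sed can run before curl has
# written /tmp/ignition.json -- matching the "can't read" error above.
#
#   curl -sLo /tmp/ignition.json https://...<enter><wait>
#   sed -i "s|BUILDERPASSWORDHASH|...|" /tmp/ignition.json<enter>

# Chained shape (what the suggested boot_command_prefix does): each step
# only runs after the previous one exits 0. The channel/version flags for
# flatcar-install are omitted here for brevity; $SSH_PASSWORD stands in
# for the Packer `ssh_password` user variable.
curl -sLo /tmp/ignition.json \
    https://raw.githubusercontent.com/kubernetes-sigs/image-builder/21f6a77a9a46a217949579d52f7b671568521678/images/capi/packer/files/flatcar/ignition/bootstrap-pass-auth.json \
  && sed -i "s|BUILDERPASSWORDHASH|$(mkpasswd -5 "$SSH_PASSWORD")|" /tmp/ignition.json \
  && sudo flatcar-install -d /dev/sda -i /tmp/ignition.json \
  && sudo reboot
```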
Those commands work for me, but when the VM reboots it comes up with a different IP and the SSH connection fails with:
2025/02/20 12:17:21 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:21 [DEBUG] Detected authentication error. Increasing handshake attempts.
2025/02/20 12:17:28 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:28 [INFO] Attempting SSH connection to 10.4.70.74:22...
2025/02/20 12:17:28 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:28 [DEBUG] reconnecting to TCP connection for SSH
2025/02/20 12:17:28 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:28 [DEBUG] handshaking with SSH
2025/02/20 12:17:30 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:30 Keyboard interactive challenge:
2025/02/20 12:17:30 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:30 -- User:
2025/02/20 12:17:30 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:30 -- Instructions: The account is locked due to 5 failed logins.
2025/02/20 12:17:30 packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64 plugin: 2025/02/20 12:17:30 [DEBUG] SSH handshake err: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none keyboard-interactive], no supported methods remain
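One thing worth noting about the lockout: the wording "The account is locked due to 5 failed logins" matches pam_faillock, so my assumption is that Packer's repeated handshake attempts burn through the allowed failures before the new IP is picked up. If the faillock utility is present on the image (an assumption on my part, not something I've verified on Flatcar), it should be possible to unlock the account from the Proxmox VM console:

```sh
# Run from the Proxmox VM console, not over SSH (SSH is what's locked out).
# Assumes pam_faillock produced the lockout message above; "builder" is a
# guess at the ssh_username the ignition file provisions -- substitute the
# user your build actually logs in as.
sudo faillock --user builder --reset
```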
Mine is stuck here
Used this flatcar.json with the latest Flatcar ISO, downloaded manually:
{
"ansible_extra_vars": "ansible_python_interpreter=/opt/bin/python oem_id={{user `oem_id`}}",
"boot_command_prefix": "sudo systemctl mask sshd.socket --now<enter><wait>curl -sLo /tmp/ignition.json https://raw.githubusercontent.com/kubernetes-sigs/image-builder/21f6a77a9a46a217949579d52f7b671568521678/images/capi/packer/files/flatcar/ignition/bootstrap-pass-auth.json && sed -i \"s|BUILDERPASSWORDHASH|$(mkpasswd -5 {{user `ssh_password`}})|\" /tmp/ignition.json && sudo flatcar-install -d /dev/sda -C {{user `channel_name`}} -V {{user `release_version`}} -i /tmp/ignition.json && sudo reboot<enter>",
"boot_command_suffix": "",
"boot_media_path": "http://{{ .HTTPIP }}:{{ .HTTPPort }}",
"boot_wait": "180s",
"build_name": "flatcar-{{env `FLATCAR_CHANNEL`}}-{{env `FLATCAR_VERSION`}}",
"channel_name": "{{env `FLATCAR_CHANNEL`}}",
"cores": "1",
"crictl_source_type": "http",
"distribution_version": "{{env `FLATCAR_CHANNEL`}}-{{env `FLATCAR_VERSION`}}",
"distro_name": "flatcar",
"guest_os_type": "linux-64",
"http_directory": "./packer/files/flatcar/ignition/",
"iso_checksum": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso.DIGESTS.asc",
"iso_checksum_type": "file",
"iso_file": "{{env `ISO_FILE`}}",
"kubernetes_cni_source_type": "http",
"kubernetes_source_type": "http",
"oem_id": "proxmoxve",
"os_display_name": "Flatcar Container Linux ({{env `FLATCAR_CHANNEL`}} channel release {{env `FLATCAR_VERSION`}})",
"python_path": "/opt/bin/builder-env/site-packages",
"release_version": "{{env `FLATCAR_VERSION`}}",
"shutdown_command": "shutdown -P now",
"systemd_prefix": "/etc/systemd",
"sysusr_prefix": "/opt",
"sysusrlocal_prefix": "/opt",
"unmount_iso": "true"
}
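For reference, the env vars the file above consumes were set roughly like this before running the build (example values only; I used whatever the latest release was at the time):

```sh
# Variables referenced by the flatcar.json above (illustrative values)
export FLATCAR_CHANNEL=stable
export FLATCAR_VERSION=4081.2.0  # example only; substitute the latest release
# Proxmox storage reference for the manually uploaded ISO (format and
# storage name depend on your Proxmox setup)
export ISO_FILE="local:iso/flatcar_production_iso_image.iso"

make build-proxmox-flatcar
```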
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.