Can no longer create Ubuntu 22.04 image in Azure
Environment
- Make target: make build-azure-sig-ubuntu-2204
- Run using container image? (Y/N): Y
- Environment vars:

```yaml
PACKER_FLAGS: >-
  --var 'extra_debs=linux-cloud-tools-common'
  --var 'vm_size=Standard_D4as_v4'
  --var 'private_virtual_network_with_public_ip=false'
  --var 'virtual_network_name=myvnet'
  --var 'virtual_network_subnet_name=mysubnet'
  --var 'virtual_network_resource_group_name=myrg'
  --var 'resource_group_name=myrg'
  --var 'kubernetes_semver=v1.31.6'
  --var 'kubernetes_series=v1.31'
  --var 'sig_image_version=131.6.0'
  --var 'disable_public_repos="true"'
```
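For context, a minimal sketch of the invocation (the target and flags are from the report above; running from the repo's `images/capi` directory is an assumption based on the image-builder layout):

```shell
# PACKER_FLAGS is picked up by the image-builder Makefile and appended to the
# packer build command line, as seen in the log below.
export PACKER_FLAGS="--var 'extra_debs=linux-cloud-tools-common' \
  --var 'vm_size=Standard_D4as_v4' \
  --var 'virtual_network_name=myvnet' \
  --var 'kubernetes_semver=v1.31.6'"  # remaining --var flags elided for brevity

make build-azure-sig-ubuntu-2204
```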
What steps did you take and what happened?
When I run the make target `make build-azure-sig-ubuntu-2204` in my GitLab CI job, it fails early with a conflict between the variables `virtual_network_name` and `public_ip_sku` (see the logs for the exact message).
It failed after the image gallery and image definition were created, but before any VM operation.
What did you expect to happen?
Packer should not hit this conflict: `virtual_network_name` is a documented variable, and I never set `public_ip_sku` myself.
Relevant log output
```
. /home/imagebuilder/packer/azure/scripts/init-sig.sh ubuntu-2204 && /home/imagebuilder/.local/bin/packer build -var-file="/home/imagebuilder/packer/config/kubernetes.json" -var-file="/home/imagebuilder/packer/config/cni.json" -var-file="/home/imagebuilder/packer/config/containerd.json" -var-file="/home/imagebuilder/packer/config/wasm-shims.json" -var-file="/home/imagebuilder/packer/config/ansible-args.json" -var-file="/home/imagebuilder/packer/config/goss-args.json" -var-file="/home/imagebuilder/packer/config/common.json" -var-file="/home/imagebuilder/packer/config/additional_components.json" -var-file="/home/imagebuilder/packer/config/ecr_credential_provider.json" --var 'extra_debs=linux-cloud-tools-common' --var 'vm_size=Standard_D4as_v4' --var 'private_virtual_network_with_public_ip=false' --var 'virtual_network_name=myvnet' --var 'virtual_network_subnet_name=mysubnet' --var 'virtual_network_resource_group_name=myrg' --var 'resource_group_name=myrg' --var 'kubernetes_semver=v1.31.6' --var 'kubernetes_series=v1.31' --var 'sig_image_version=131.6.0' --var 'disable_public_repos="true"' -color=true -var-file="/home/imagebuilder/packer/azure/azure-config.json" -var-file="/home/imagebuilder/packer/azure/azure-sig.json" -var-file="/home/imagebuilder/packer/azure/ubuntu-2204.json" -only="sig-ubuntu-2204" packer/azure/packer.json
Syntax-only check passed. Everything looks okay.
[image gallery info removed]
WARNING: Starting Build (May) 2024, "az sig image-definition create" command will use the new default values Hyper-V Generation: V2 and SecurityType: TrustedLaunchSupported.
[image definition info removed]
Error: Failed to prepare build: "sig-ubuntu-2204"
1 error(s) occurred:

* If virtual_network_name is specified, public_ip_sku cannot be specified, since
  a new network will not be created
```
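One possible stopgap, assuming the template forwards the `public_ip_sku` user variable straight to the builder and that an explicit empty value is treated as unset (an untested assumption; if the builder still counts the empty string as "specified", this will not help):

```shell
# Untested sketch: append an explicit empty public_ip_sku to the flags the
# Makefile already passes through, hoping it overrides the new default.
# If this still trips the same validation error, pin to an older release
# instead (see the comments below).
PACKER_FLAGS="${PACKER_FLAGS} --var 'public_ip_sku='" \
  make build-azure-sig-ubuntu-2204
```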
Anything else you would like to add?
I am using the Docker image from release v0.1.41.
It looks like `public_ip_sku` was introduced into the default variables in commit 7fe99b7.
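To see where that default ends up inside the v0.1.41 container, one can grep the shipped Azure packer config; the path below is copied from the file paths printed in the build log above:

```shell
# Locate every occurrence of public_ip_sku in the Azure packer config
# shipped in the container image (path taken from the build log).
grep -rn 'public_ip_sku' /home/imagebuilder/packer/azure/
```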
/kind bug
@mboersma Have you seen this issue with CAPZ? Looks like the change came in with your updates to the pipelines.
/assign
I have not seen this problem when building CAPZ or other Azure images, but I'll take a look. Sorry about that @aurel333, hopefully you can use image-builder v0.1.40 until this is fixed.
No problem. I am currently using v0.1.39, since v0.1.40 appears to have the same issue (judging from the code; I have not actually tested it).
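For anyone else pinning as a workaround, a sketch of running the older image; the image name follows the one published for image-builder releases, and passing the make target as the container command is an assumption based on how the image is normally run:

```shell
# Pin to v0.1.39, which predates the public_ip_sku default added in 7fe99b7.
# Credentials/env wiring omitted; adjust to match how your CI runs v0.1.41.
docker run --rm \
  -e PACKER_FLAGS="${PACKER_FLAGS}" \
  registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.39 \
  build-azure-sig-ubuntu-2204
```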
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@mboersma I finally have time to open a PR and test it (I know it has been a long time since the issue was closed 😅), and I see that the issue is still present in the latest release. Is it possible to reopen this issue, or should I create a new one to attach my PR to?
I created the PR. I will edit it if needed.