operating-system-manager
Ubuntu 24.04 broken on Azure
I tried to spawn 24.04 machines in Azure today and hit the same cloud-init issue as before (our latest fix just does not work with the latest 24.04 machines).
# cloud-init-output.log
2024-07-02 05:58:40,392 - util.py[WARNING]: No instance datasource found! Likely bad things to come!
I tried the default OSP (v1.5.0) that ships with KubeOne 1.8.0 as well as a custom OSP; the issue remained the same. I also tried commenting out the following block, because that is where we get the datasource error log, but it did not improve the situation.
{{- /* Azure's cloud-init provider integration has changed recently (end of April 2024) and now requires us to run this command below once to set some files up that seem required for another cloud-init run. */}}
{{- if (eq .CloudProviderName "azure") }}
cloud-init init --local
{{- end }}
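Before (or instead of) commenting that block out, it can help to confirm what cloud-init's datasource detection actually saw on the broken VM. A minimal diagnostic sketch, assuming shell access to the affected machine (the paths are standard cloud-init locations):

```shell
#!/bin/sh
# Hedged diagnostic sketch: collect the artifacts that explain why
# "No instance datasource found!" was logged on the failing Azure VM.

log=/run/cloud-init/ds-identify.log

if [ -r "$log" ]; then
    # ds-identify records each datasource it probed and why it was rejected.
    cat "$log"
else
    echo "no ds-identify log at $log (not a cloud-init host?)"
fi

# Machine-readable summary; on the broken machines this should report an
# error matching the warning seen in cloud-init-output.log.
if command -v cloud-init >/dev/null 2>&1; then
    cloud-init status --long || true
fi
```

The ds-identify log lists every datasource check it ran, which should show whether the Azure datasource was rejected or never probed at all on these images.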
Steps to reproduce:
- Create an MD with the below imageReference in an Azure KubeOne cluster
imageReference:
  publisher: Canonical
  offer: ubuntu-24_04-lts
  sku: server
  version: 24.04.202406170
- Check cloud-init-output.log
- Also observe that the machine never joins the cluster.
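For completeness, a sketch of where that imageReference block sits in a machine-controller MachineDeployment manifest; everything outside the imageReference block (names, location, resourceGroup, vmSize) is a placeholder, not taken from the original report:

```yaml
# Hedged sketch of a MachineDeployment carrying the failing image reference.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: ubuntu-2404-md        # placeholder name
  namespace: kube-system
spec:
  template:
    spec:
      providerSpec:
        value:
          cloudProvider: azure
          cloudProviderSpec:
            location: westeurope        # placeholder
            resourceGroup: my-rg        # placeholder
            vmSize: Standard_B2s        # placeholder
            imageReference:
              publisher: Canonical
              offer: ubuntu-24_04-lts
              sku: server
              version: 24.04.202406170
          operatingSystem: ubuntu
```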
Ubuntu 24.04 is not supported yet. Will keep this issue in mind when we work on adding support for it.
/label customer-request
@dharapvj: The label(s) customer-request cannot be applied, because the repository doesn't have them.
In response to this:
/label customer-request
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
Issue is very pertinent and important even today.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle rotten
The issue is still valid, and we need to address Gen2 VMs in Azure.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubermatic-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.