cluster-api-provider-openstack
kubelet won't start on ubuntu 20.04
/kind bug
What steps did you take and what happened:
After building an Ubuntu 20.04-compatible image for CAPO, kubelet doesn't start and fails with a strange message:
could not init cloud provider openstack: warning: can't store data at section global,variable tls-insecure
What did you expect to happen: kubelet to be up and running
Anything else you would like to add: the Ubuntu 20.04 image was built using the CAPI image-builder Packer sources for Ubuntu 20.04
Environment:
- Cluster API Provider OpenStack version (or `git rev-parse HEAD` if manually built): 1.4
- Cluster-API version: 1.4
- OpenStack version:
- Kubernetes version (use `kubectl version`): 1.22.9
- OS (e.g. from /etc/os-release): Ubuntu 20.04
Your error likely comes from https://github.com/kubernetes/cloud-provider/blob/master/plugins.go#L167
I think some cloud provider settings might be wrong, but with the limited info here this is only a guess. You might want to check your cloud-provider (external?) settings and see if anything looks interesting.
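For illustration only, here is a minimal cloud.conf sketch (all values are made-up placeholders, not taken from this issue) with `tls-insecure` under `[Global]`. The external openstack-cloud-controller-manager accepts that key, but the in-tree `openstack` provider compiled into kubelet does not appear to, and its gcfg-based config parser then fails with exactly this "can't store data at section global, variable tls-insecure" message:

```ini
# Illustrative cloud.conf; every value below is a placeholder.
# tls-insecure is understood by the external openstack-cloud-controller-manager,
# but the in-tree "openstack" provider in kubelet rejects it as an unknown key
# in [Global], producing the error quoted in this issue.
[Global]
auth-url=https://keystone.example.com:5000/v3
username=capo
password=secret
tenant-name=capo-project
domain-name=Default
region=RegionOne
tls-insecure=true
```

If that matches your setup, either drop the keys the in-tree provider does not know, or move to the external provider as discussed below.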
@jichenjc for now the cloud provider is defined as: openstack. If I use the external provider template, kubelet starts, but I can't scale up my control plane because the uninitialized:true taint isn't removed.
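For reference, this is roughly how that stuck taint shows up (node name is a placeholder). With `--cloud-provider=external`, kubelet taints every new node and waits for an external cloud-controller-manager to initialize it, so the taint only disappears once openstack-cloud-controller-manager is running and can reach the node:

```shell
# Placeholder node name; use one of your control-plane machines.
kubectl describe node capo-control-plane-abc123 | grep -A3 Taints
# Typically shows, while the node is still waiting to be initialized:
#   node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

# The taint is cleared by the external cloud-controller-manager, so check
# that it is actually deployed and running:
kubectl -n kube-system get pods | grep cloud-controller-manager
```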
Check these: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/templates/cluster-template-external-cloud-provider.yaml and https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/docs/book/src/topics/external-cloud-provider.md
External cloud provider is the future.
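For context, the control-plane piece of that template boils down to something like the following (trimmed, illustrative sketch; the metadata name is a placeholder). The key part is `cloud-provider: external`, which tells kubelet to skip the in-tree openstack code entirely and leave node initialization, including removal of the uninitialized taint, to the external cloud-controller-manager deployed per the doc linked above:

```yaml
# Trimmed sketch of the KubeadmControlPlane from the external cloud provider
# template; only the kubelet cloud-provider wiring is shown.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: example-control-plane
spec:
  kubeadmConfigSpec:
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external
```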
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Looks like this is resolved elsewhere.