cluster-api-provider-openstack

kubelet won't start on ubuntu 20.04

xinity opened this issue 3 years ago • 3 comments

/kind bug

What steps did you take and what happened: I created an Ubuntu 20.04 compatible image for CAPO; kubelet doesn't start, with a weird message: could not init cloud provider openstack: warning: can't store data at section global, variable tls-insecure

What did you expect to happen: kubelet to be perfectly up and running

Anything else you would like to add: the Ubuntu 20.04 image was built using the CAPI Ubuntu 20.04 image-builder Packer sources

Environment:

  • Cluster API Provider OpenStack version (Or git rev-parse HEAD if manually built): 1.4
  • Cluster-API version: 1.4
  • OpenStack version:
  • Kubernetes version (use kubectl version): 1.22.9
  • OS (e.g. from /etc/os-release): ubuntu 20.04
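
For reference, this kind of gcfg parse error usually means the cloud config handed to kubelet contains a key the selected provider does not recognize; tls-insecure appears to be an option of the external openstack-cloud-controller-manager rather than of the in-tree openstack provider. A minimal, purely illustrative cloud.conf that would trigger the message when kubelet runs with --cloud-provider=openstack (path and all values are placeholders, not taken from this cluster):

```ini
# /etc/kubernetes/cloud.conf -- illustrative sketch only
[Global]
auth-url    = https://keystone.example.com:5000/v3   ; placeholder endpoint
username    = capo-user                              ; placeholder credentials
password    = redacted
tenant-name = capo-project
domain-name = Default
region      = RegionOne
; The in-tree provider's [Global] section has no tls-insecure field, so gcfg
; reports: can't store data at section global, variable tls-insecure
tls-insecure = true
```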

xinity avatar Jun 15 '22 07:06 xinity

your error likely comes from https://github.com/kubernetes/cloud-provider/blob/master/plugins.go#L167

I think some cloud provider settings might be wrong, but with such limited info this is only a guess. You might want to check your cloud-provider (external?) settings and see if anything looks interesting.
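
If it helps, the effective kubelet settings can be checked directly on the affected node; a rough sketch, assuming a kubeadm-bootstrapped node and the usual /etc/kubernetes/cloud.conf path (adjust paths to your image):

```sh
# Which --cloud-provider / --cloud-config flags kubeadm handed to kubelet
cat /var/lib/kubelet/kubeadm-flags.env

# The cloud config kubelet is pointed at (path may differ in your image)
sudo cat /etc/kubernetes/cloud.conf

# The cloud provider init error as kubelet logs it
sudo journalctl -u kubelet --no-pager | grep -i "cloud provider"
```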

jichenjc avatar Jun 17 '22 02:06 jichenjc

@jichenjc for now the cloud provider is defined as: openstack. If I use the external provider template, kubelet starts, but I can't scale up my control plane because the uninitialized:true taint isn't removed.
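
With cloud-provider: external, kubelet registers nodes with the node.cloudprovider.kubernetes.io/uninitialized taint, and it is the openstack-cloud-controller-manager that removes it once it has initialized the node, so the taint sticking around usually means OCCM isn't deployed or can't reach the OpenStack API. A quick check, assuming OCCM was installed from the upstream manifests referenced in the CAPO docs (label and namespace may differ):

```sh
# Is the uninitialized taint still on the new control plane node?
kubectl describe node <node-name> | grep -A2 Taints

# Is openstack-cloud-controller-manager running, and what does it log?
kubectl -n kube-system get pods -l k8s-app=openstack-cloud-controller-manager
kubectl -n kube-system logs -l k8s-app=openstack-cloud-controller-manager --tail=50
```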

xinity avatar Jun 17 '22 11:06 xinity

check this https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/templates/cluster-template-external-cloud-provider.yaml and https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/docs/book/src/topics/external-cloud-provider.md

external cloud provider is the future
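
The key difference in that template is that kubelet is started with cloud-provider: external instead of the in-tree openstack provider. A trimmed sketch of the relevant KubeadmControlPlane bits (not the full template; see the first link for the real thing):

```yaml
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    initConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external   # kubelet taints the node as uninitialized
    joinConfiguration:
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: external   # same for nodes joining later
```

Note that with this template the OpenStack cloud controller manager still has to be deployed into the workload cluster (the second link walks through it); until it runs, the uninitialized taint is never cleared.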

jichenjc avatar Jun 17 '22 12:06 jichenjc

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 15 '22 12:09 k8s-triage-robot

Looks like this is resolved elsewhere.

mdbooth avatar Oct 03 '22 12:10 mdbooth