
failed to patch workload cluster node

HenryGuo1019 opened this issue 11 months ago · 2 comments

What steps did you take and what happened: I created a cluster with KubeVirt and the RKE2 CAPI provider. When I created a control plane with 1 replica, the machine got stuck in the Provisioned phase because of "failed to patch workload cluster node".

root@st1:/home/test/cluster-api/test-yaml# kubectl get machine -n test01 
NAME                         CLUSTER   NODENAME   PROVIDERID                              PHASE         AGE   VERSION
test01-control-plane-9klv4   test01               kubevirt://test01-control-plane-fqwg9   Provisioned   52m   v1.28.3
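
The empty NODENAME column shows that CAPK has not yet been able to claim the workload node, even though a provider ID was computed. As a first check it is worth confirming that the underlying VM is actually up; a minimal diagnostic, assuming the KubeVirt CRDs are installed in the management cluster (namespace and name taken from the output above):

# confirm the VirtualMachineInstance backing the machine is running
kubectl get vmi -n test01 test01-control-plane-fqwg9 -o wide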

root@st1:/home/test/cluster-api/test-yaml# kubectl logs -n capi-system capk-controller-manager-5bdc75f5c9-8xnbz --tail 200
I0318 03:51:26.760733       1 kubevirtmachine_controller.go:558] test01/test01-control-plane-fqwg9 "msg"="Add capk user with ssh config to bootstrap userdata" "KubevirtMachine"={"name":"test01-control-plane-fqwg9","namespace":"test01"} "controller"="kubevirtmachine" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="KubevirtMachine" "name"="test01-control-plane-fqwg9" "namespace"="test01" "reconcileID"="1c6ef5f3-0a39-4c1f-8ced-780c14312041"
I0318 03:51:26.785607       1 kubevirtmachine_controller.go:404] test01/test01-control-plane-fqwg9 "msg"="Patching node with provider id..." "KubevirtMachine"={"name":"test01-control-plane-fqwg9","namespace":"test01"} "controller"="kubevirtmachine" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="KubevirtMachine" "name"="test01-control-plane-fqwg9" "namespace"="test01" "reconcileID"="1c6ef5f3-0a39-4c1f-8ced-780c14312041"
I0318 03:51:26.797664       1 controller.go:327]  "msg"="Warning: Reconciler returned both a non-zero result and a non-nil error. The result will always be ignored if the error is non-nil and the non-nil error causes reqeueuing with exponential backoff. For more details, see: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler" "KubevirtMachine"={"name":"test01-control-plane-fqwg9","namespace":"test01"} "controller"="kubevirtmachine" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="KubevirtMachine" "name"="test01-control-plane-fqwg9" "namespace"="test01" "reconcileID"="1c6ef5f3-0a39-4c1f-8ced-780c14312041"
E0318 03:51:26.797741       1 controller.go:329]  "msg"="Reconciler error" "error"="failed to patch workload cluster node: Node \"test01-control-plane-fqwg9\" is invalid: spec.providerID: Forbidden: node updates may not change providerID except from \"\" to valid" "KubevirtMachine"={"name":"test01-control-plane-fqwg9","namespace":"test01"} "controller"="kubevirtmachine" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="KubevirtMachine" "name"="test01-control-plane-fqwg9" "namespace"="test01" "reconcileID"="1c6ef5f3-0a39-4c1f-8ced-780c14312041"
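
The API server error is explicit: a node's spec.providerID may only change from empty to a valid value, never from one non-empty value to another. That suggests something on the workload side (for example, the RKE2 kubelet or an embedded cloud provider) set a providerID before CAPK tried to patch in kubevirt://test01-control-plane-fqwg9. One way to verify, assuming clusterctl is available and using the cluster and namespace names from the logs above (the kubeconfig file name is arbitrary):

# fetch the workload cluster kubeconfig from the management cluster
clusterctl get kubeconfig test01 -n test01 > test01.kubeconfig
# inspect the node's current providerID on the workload cluster
kubectl --kubeconfig test01.kubeconfig get node test01-control-plane-fqwg9 \
  -o jsonpath='{.spec.providerID}'

If this prints a non-empty value other than kubevirt://test01-control-plane-fqwg9, the Forbidden error above follows directly.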

What did you expect to happen: Once bootstrap is done, the machine phase should be Running and the node name should be generated correctly.
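
If the root cause is a pre-set providerID, one possible workaround (a sketch only, not a confirmed fix; verify against the RKE2 and CAPK documentation) is to make the kubelet register with the provider ID CAPK expects, via the kubelet's --provider-id flag. In RKE2 this can be passed through kubelet-arg in the node config; the value below is taken from the logs above and would differ per machine:

# append to /etc/rancher/rke2/config.yaml on the control-plane node
cat <<'EOF' >> /etc/rancher/rke2/config.yaml
kubelet-arg:
  - "provider-id=kubevirt://test01-control-plane-fqwg9"
EOF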

Environment:

  • Cluster-api version: capi-operator-cluster-api-operator:v0.9.0
  • Cluster-api-provider-kubevirt version: capk-manager:v0.1.8
  • Kubernetes version (kubectl version): v1.22.6+rke2r1
  • KubeVirt version: virt-api:v0.59.0
  • OS (e.g. from /etc/os-release): ubuntu-22.04

HenryGuo1019 · Mar 18 '24

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jun 16 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Jul 16 '24

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Aug 15 '24

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned triage comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot · Aug 15 '24