k8s-cluster-api-provider
Error on recreating/restoring
Hi,
I lost 2 nodes on a cluster due to a hardware failure and wanted to renew them by running create_cluster.sh; before that I pulled the newest version with git pull.
Well, this happens and I have no clue:
ubuntu@capi-mgmtcluster:~ [0]$ create_cluster.sh sxpone
Switched to context "kind-kind".
> Cluster default already exists in namespace default
Context "kind-kind" modified.
No HTTP_PROXY set, nothing to do, exiting.
No HTTP_PROXY set, nothing to do, exiting.
#Reuse AppCred capi-sxpone-appcred 508eb984bbe34d0b99e3f2fc5235070c
Waiting for image ubuntu-capi-image-v1.26.4 to become active: fb401900-d88d-43d0-a541-c10b582ef709 active
# show used variables for clustertemplate /home/ubuntu/sxpone/cluster-template.yaml
Adding server groups bab4cd40-d1be-4535-8dea-7f29e073b311 and 40a60763-09c5-4a26-b05b-107936fa0760 to /home/ubuntu/sxpone/clusterctl.yaml
Error: no resource matches strategic merge patch "OpenStackMachineTemplate.v1alpha7.infrastructure.cluster.x-k8s.io/${PREFIX}-${CLUSTER_NAME}-control-plane-${CONTROL_PLANE_MACHINE_GEN}.[noNs]": no matches for Id OpenStackMachineTemplate.v1alpha7.infrastructure.cluster.x-k8s.io/${PREFIX}-${CLUSTER_NAME}-control-plane-${CONTROL_PLANE_MACHINE_GEN}.[noNs]; failed to find unique target for patch OpenStackMachineTemplate.v1alpha7.infrastructure.cluster.x-k8s.io/${PREFIX}-${CLUSTER_NAME}-control-plane-${CONTROL_PLANE_MACHINE_GEN}.[noNs]
ERROR: Pass input YAML via stdin (or specify in patch header)
Usage: kustpatch.sh kust1.yaml [kust2.yaml [...]] < base.yaml > result.yaml
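One way to narrow this down, assuming the standard CAPO CRD name, is to check which API versions the installed OpenStackMachineTemplate CRD actually serves; if v1alpha7 is not listed, the installed CAPO controllers predate the freshly pulled cluster-template.yaml:
# List the API versions served by the installed CAPO CRD (standard CRD name assumed)
kubectl --context kind-kind get crd openstackmachinetemplates.infrastructure.cluster.x-k8s.io -o jsonpath='{.spec.versions[*].name}{"\n"}'
# The new template expects v1alpha7; older installs may only list v1alpha5/v1alpha6.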
Hi @flyersa, at first glance it looks like you are using an old CAPO controller version; therefore, your resources are not v1alpha7 yet. Can you share some more details? E.g.:
- git version (tag, branch, commit) before and after the pull
- CAPI/CAPO version (e.g. from clusterctl upgrade plan)
- was something modified, or were default values used? (see the commands sketched below)
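A minimal sketch of how that information could be gathered on the management host (the repo path is an assumption):
# Inside the checked-out k8s-cluster-api-provider repo
git describe --tags --always
git log -1 --oneline
# Installed vs. available CAPI/CAPO provider versions on the management cluster
clusterctl upgrade plan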
Also, IMO CAPI and CAPO should ensure consistency between the state in k8s and OpenStack and should recreate deleted machines automatically. So please also check the logs of the CAPI/CAPO controllers and the status of the k8s resources (e.g. kubectl describe cluster sxpone or clusterctl describe cluster sxpone).
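For the controller logs, something along these lines should work; the namespaces and deployment names assume a default clusterctl-based installation:
# CAPI and CAPO controller logs (default clusterctl install locations assumed)
kubectl -n capi-system logs deployment/capi-controller-manager --tail=200
kubectl -n capo-system logs deployment/capo-controller-manager --tail=200
# Status of the workload cluster resources
kubectl describe cluster sxpone
clusterctl describe cluster sxpone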
Sorry, I was away on business trips for a long time recently. I will check, maybe I forgot some steps ;)
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment, or this will be closed in 60 days.
This issue was closed because it has been stalled for 60 days with no activity.