
ClusterTemplate validation fails

Open · mfranczy opened this issue

What happened

ClusterTemplate validation fails. Info from our customer:

Back in KKP 2.20 (or earlier), some of our users created ClusterTemplates. Naturally, some of these used Kubernetes versions that are no longer supported in KKP 2.21, particularly 1.21.x.

Now when these users try to create a cluster from a ClusterTemplate (through the KKP dashboard), they get a green success message. Most of the time the cluster never materializes (neither in the UI nor as a Cluster resource), but we have also had two Clusters that were created (and that I have since deleted) yet were stuck in "creating" status, because the controller would not accept a request containing these versions. I wonder whether those two clusters were perhaps created via the KKP API rather than the dashboard.

The problem, aside from the bad UX, is that some component remembers that these clusters are supposed to be created, while the seed controller keeps rejecting them. I am not sure whether that (unknown) component will eventually give up, but in the meantime this generates load on the seed controller.

So, cluster creation from ClusterTemplates should be validated earlier. ClusterTemplates that reference an outdated version should probably be greyed out in the dashboard, or otherwise made unusable for cluster creation.

I was able to reproduce the issue on our dev cluster.

Expected behavior

ClusterTemplates are validated the same way as Cluster objects, and the dashboard shows an error if the template contains an invalid Kubernetes version.

How to reproduce

  1. Create a cluster template
  2. Edit the cluster template resource directly on the seed cluster and change its version to an unsupported one
  3. Try to create a cluster from the cluster template
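The steps above can be sketched with kubectl against the seed cluster. This is only an illustration: the `clustertemplate` resource name, the `spec.version` field path, and the template name `my-template` are assumptions based on the report, not verified against the KKP CRD schema.

```shell
# List existing ClusterTemplates on the seed cluster (resource name is an assumption).
kubectl get clustertemplates

# Patch the template to a Kubernetes version no longer supported by KKP 2.21+,
# e.g. 1.21.x as mentioned in the report (field path is an assumption).
kubectl patch clustertemplate my-template --type=merge \
  -p '{"spec":{"version":"1.21.14"}}'

# Now create a cluster from "my-template" in the dashboard: the UI reports
# success, but the seed controller rejects the unsupported version.
```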

Environment

  • UI Version: KKP 2.22, KKP 2.21
  • API Version: KKP 2.22, KKP 2.21

Current workaround

Update the ClusterTemplate object by hand, changing its version to a supported release.
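The manual workaround can be sketched as follows; the field path, template name, and the specific supported version are assumptions, so check the supported-version list for your KKP release first.

```shell
# Open the template for interactive editing and bump the version to a release
# supported by your KKP version.
kubectl edit clustertemplate my-template

# Or patch it non-interactively (spec.version field path is an assumption):
kubectl patch clustertemplate my-template --type=merge \
  -p '{"spec":{"version":"1.24.10"}}'
```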

mfranczy avatar Mar 31 '23 13:03 mfranczy

https://github.com/kubermatic/dashboard/issues/5480 might cover this issue as well.

Waseem826 avatar Mar 31 '23 19:03 Waseem826

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubermatic-bot avatar Mar 14 '24 12:03 kubermatic-bot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubermatic-bot avatar Apr 13 '24 12:04 kubermatic-bot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubermatic-bot avatar May 13 '24 12:05 kubermatic-bot

@kubermatic-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

kubermatic-bot avatar May 13 '24 12:05 kubermatic-bot