ClusterTemplate validation fails
What happened
ClusterTemplate validation fails. Info from our customer:
Back in KKP 2.20 (or earlier), some of our users created ClusterTemplates. Naturally, some of these used Kubernetes versions that are no longer supported in KKP 2.21, particularly 1.21.x.
Now when these users try to create a cluster from a ClusterTemplate (through the KKP dashboard), they get a green success message. Most of the time the cluster never materializes (neither in the UI nor as a Cluster resource), but we have also had two Clusters that were created (and that I have since deleted) yet remained stuck in "creating" status, because the controller would not accept a request containing these versions. I wonder whether these two clusters were created via the KKP API rather than the dashboard.
Aside from the bad UX, the problem is that some component remembers that these clusters are supposed to be created, while the seed controller keeps refusing to do so. I am not sure whether that (unknown) component will eventually give up, but this does generate some load on the seed controller.
So, cluster creation from ClusterTemplates should be verified earlier somehow. I guess ClusterTemplates that feature an outdated version should be greyed out in the dashboard (or similar) and not be usable for cluster creation.
I was able to reproduce the issue on our dev cluster.
Expected behavior
ClusterTemplates are validated the same way as Cluster objects. The dashboard shows an error if the template contains an invalid Kubernetes version.
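A minimal sketch of what such a version check could look like, assuming a list of supported versions (in a real installation this would come from the KubermaticConfiguration) and the Masterminds semver library; the names `SupportedVersions` and `ValidateTemplateVersion` are illustrative, not KKP's actual API:

```go
// Hypothetical sketch: reject a ClusterTemplate whose Kubernetes version is no
// longer supported, before any Cluster is created from it, so the dashboard/API
// can surface the error up front.
package validation

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

// SupportedVersions would come from the KubermaticConfiguration in a real setup.
var SupportedVersions = []string{"1.22.17", "1.23.14", "1.24.8"}

// ValidateTemplateVersion returns an error if the template's version is not in
// the supported list.
func ValidateTemplateVersion(templateVersion string) error {
	requested, err := semver.NewVersion(templateVersion)
	if err != nil {
		return fmt.Errorf("invalid version %q: %w", templateVersion, err)
	}

	for _, s := range SupportedVersions {
		supported, err := semver.NewVersion(s)
		if err != nil {
			continue
		}
		if requested.Equal(supported) {
			return nil
		}
	}

	return fmt.Errorf("version %s is not supported by this KKP installation", templateVersion)
}
```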
How to reproduce
- Create a cluster template
- Edit the cluster template resource directly on the seed and change its version to an unsupported one (see the sketch after this list)
- Try to create a cluster from the cluster template
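For the second step, a rough sketch of changing the version on the seed with controller-runtime's unstructured client; the GVK `kubermatic.k8c.io/v1` `ClusterTemplate`, the `spec.version` field path, the template name `my-template`, and the cluster-scoped lookup are assumptions to verify against your seed's CRDs:

```go
// Sketch: set a ClusterTemplate's version to one the current KKP release no
// longer supports, to reproduce the issue.
package main

import (
	"context"
	"log"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	ctx := context.Background()

	// Uses the current kubeconfig context, which must point at the seed cluster.
	cfg, err := config.GetConfig()
	if err != nil {
		log.Fatal(err)
	}
	c, err := client.New(cfg, client.Options{})
	if err != nil {
		log.Fatal(err)
	}

	tpl := &unstructured.Unstructured{}
	tpl.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "kubermatic.k8c.io",
		Version: "v1",
		Kind:    "ClusterTemplate",
	})

	// "my-template" is a placeholder; add a Namespace to the key if your
	// ClusterTemplates are namespaced rather than cluster-scoped.
	if err := c.Get(ctx, client.ObjectKey{Name: "my-template"}, tpl); err != nil {
		log.Fatal(err)
	}

	// Set a version that the current KKP release no longer supports.
	if err := unstructured.SetNestedField(tpl.Object, "1.21.14", "spec", "version"); err != nil {
		log.Fatal(err)
	}
	if err := c.Update(ctx, tpl); err != nil {
		log.Fatal(err)
	}
}
```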
Environment
- UI Version: KKP 2.22, KKP 2.21
- API Version: KKP 2.22, KKP 2.21
Current workaround
Update the ClusterTemplate object by hand.
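The patch sketch shown in the reproduction steps can presumably be adapted for this workaround by writing a supported version string (one listed in your KubermaticConfiguration) instead of the outdated one.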
https://github.com/kubermatic/dashboard/issues/5480 might cover this issue as well.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubermatic-bot: Closing this issue.
In response to this:

> Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.