Check package version against declared GroupVersion
Problem Statement
When inferring the version of a CRD from the package name:
https://github.com/kubernetes-sigs/controller-tools/blob/a57ec68d4aca081c0ead223796f18b7c6c3a61c1/pkg/crd/parser.go#L135-L145
a developer can mistakenly declare a GroupVersion in their Go file whose Version field does not match the package name. For example:
```go
// +kubebuilder:object:generate=true
// +groupName=some.awesome.group.com
package v1beta1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
	// GroupVersion is group version used to register these objects
	GroupVersion = schema.GroupVersion{Group: "some.awesome.group.com", Version: "v1alpha1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)
```
Notice the mismatch between the package name, `v1beta1`, and the `Version` in `GroupVersion`.
I discovered this when referencing the GroupVersion for a kind that did not, in fact, have a v1alpha1 implementation.
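For context, inferring the version from the package name only works because API package names conventionally follow the Kubernetes version shape (`v1`, `v1alpha1`, `v1beta2`, ...). A rough sketch of that convention as a check (my own approximation; the exact pattern controller-tools uses may differ):

```go
package main

import (
	"fmt"
	"regexp"
)

// versionRe approximates the Kubernetes API version naming convention
// (v1, v2alpha1, v1beta2, ...). This is illustrative, not the actual
// regex used by controller-tools.
var versionRe = regexp.MustCompile(`^v\d+((alpha|beta)\d+)?$`)

// looksLikeAPIVersion reports whether a package name matches the
// conventional API version shape, which is what makes inferring the
// version from the package name plausible in the first place.
func looksLikeAPIVersion(pkg string) bool {
	return versionRe.MatchString(pkg)
}

func main() {
	for _, pkg := range []string{"v1beta1", "v1alpha1", "v2", "internalversion"} {
		fmt.Printf("%s: %v\n", pkg, looksLikeAPIVersion(pkg))
	}
	// prints:
	// v1beta1: true
	// v1alpha1: true
	// v2: true
	// internalversion: false
}
```

A package named `v1beta1` passes this check, so the generator happily infers `v1beta1` even when the declared `GroupVersion.Version` says `v1alpha1`.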
Hmm, interesting problem. So, what issue does this actually cause when there's a mismatch? And what would you expect the behaviour to be here? How would you want to be told that this was wrong?
In our case, we'd grab the defined GVK (something like v1beta1.SomeGVK) to set before calling a k8s client/reader. For us, SomeGVK did not actually have a v1alpha1 implementation, so the error was somewhat easy to reason about. BUT if we did have a v1alpha1, I'd never have known.
This is clearly not a showstopper but more of a papercut. It would be nice if the tooling warned the developer when a mismatch like this exists. It could be static analysis too, if that's easier to implement than in-code reasoning.
This feels like something that is somewhat an opinion (a strong opinion I agree with), but not necessarily required. I could call my package anything, but call my API version something specific when I register it. AFAIK there's no actual technical reason that the package name must match the version name; you can see that from the marker code you've pulled out above.
I wonder if this might be a better fit for the API linter that I'm currently working on. I'm already in discussion with sig apimachinery to make this project work for core kube types and eventually become a kubernetes-sigs project. A linter is also convenient because we can make it configurable and allow folks to turn it off if they disagree with our strong opinions.
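A lint like this could be a small pass over the AST of the `groupversion_info.go`-style file: read the package name, find the `GroupVersion` composite literal, and compare its `Version` string. Here's a rough, self-contained sketch (the function name and all details are hypothetical, not the actual linter):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strings"
)

// checkVersionMismatch parses Go source and reports whether the package
// name differs from the Version field of a schema.GroupVersion composite
// literal. Illustrative sketch only; a real linter would also handle
// aliased imports, non-literal values, etc.
func checkVersionMismatch(src string) (pkg, declared string, mismatch bool, err error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "groupversion_info.go", src, 0)
	if err != nil {
		return "", "", false, err
	}
	pkg = f.Name.Name
	ast.Inspect(f, func(n ast.Node) bool {
		lit, ok := n.(*ast.CompositeLit)
		if !ok {
			return true
		}
		// Match <pkg>.GroupVersion{...} composite literals.
		sel, ok := lit.Type.(*ast.SelectorExpr)
		if !ok || sel.Sel.Name != "GroupVersion" {
			return true
		}
		for _, elt := range lit.Elts {
			kv, ok := elt.(*ast.KeyValueExpr)
			if !ok {
				continue
			}
			if key, ok := kv.Key.(*ast.Ident); ok && key.Name == "Version" {
				if val, ok := kv.Value.(*ast.BasicLit); ok {
					declared = strings.Trim(val.Value, `"`)
				}
			}
		}
		return false
	})
	mismatch = declared != "" && declared != pkg
	return pkg, declared, mismatch, nil
}

func main() {
	src := `package v1beta1

import "k8s.io/apimachinery/pkg/runtime/schema"

var GroupVersion = schema.GroupVersion{Group: "some.awesome.group.com", Version: "v1alpha1"}
`
	pkg, declared, mismatch, err := checkVersionMismatch(src)
	if err != nil {
		panic(err)
	}
	fmt.Printf("package=%s declared=%s mismatch=%v\n", pkg, declared, mismatch)
	// prints: package=v1beta1 declared=v1alpha1 mismatch=true
}
```

Since `go/parser` never resolves imports or types, a check like this stays cheap and fits the configurable-linter model: folks who deliberately name packages differently could simply disable it.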
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.