cloud-provider-openstack
'unknown revision v0.0.0' errors caused by the 'k8s.io/kubernetes' dependency
Is this a BUG REPORT or FEATURE REQUEST?: /kind bug
What happened:
When executing go list -m all, multiple dependencies cannot be resolved due to invalid versions. This command (with some additional flags) is used by JetBrains' GoLand to sync the dependencies, and since the command fails, the IDE cannot resolve any dependency, which results in a huge pile of errors per file.
What you expected to happen: The command completes successfully and prints out all modules.
How to reproduce it:
Clone the repository and execute the command go list -m all. This will produce the following errors:
go: k8s.io/<staging module>@v0.0.0: invalid version: unknown revision v0.0.0
(nine such errors in total, one per affected k8s.io staging module)
Anything else we need to know?:
The problem seems to be the dependency k8s.io/kubernetes. As described here (and discussed in more detail here), k8s.io/kubernetes should not be used as a dependency. Removing the dependency from go.mod and executing the command again results in a proper list of the modules (and the IDE is able to resolve the dependencies), but building the code will then fail, since some files (mostly tests) depend on k8s.io/kubernetes.
This issue may be related to #1633 and #347.
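For background on why these v0.0.0 pins exist at all: k8s.io/kubernetes requires its staging repos (k8s.io/api, k8s.io/apimachinery, ...) at the placeholder version v0.0.0 and resolves them through local replace directives, and since replace directives do not apply transitively, every consumer has to repeat them. A minimal sketch of the usual pattern in a consumer's go.mod (version numbers are illustrative, not the ones this repo actually pins):

```
require k8s.io/kubernetes v1.28.3

replace (
	// One replace per staging module that k8s.io/kubernetes requires
	// at the unresolvable placeholder version v0.0.0. The staging
	// tag v0.x.y corresponds to the Kubernetes v1.x.y release.
	k8s.io/api => k8s.io/api v0.28.3
	k8s.io/apimachinery => k8s.io/apimachinery v0.28.3
	k8s.io/apiserver => k8s.io/apiserver v0.28.3
	k8s.io/client-go => k8s.io/client-go v0.28.3
	// ...and so on for the remaining staging modules
)
```

Without a full set of such replaces, go list -m all walks the entire module graph, hits a v0.0.0 requirement it cannot resolve, and fails with exactly the errors above.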
Environment:
- openstack-cloud-controller-manager(or other related binary) version:
- OpenStack version:
- Others:
Running go mod tidy brings the k8s.io/kubernetes dependency back, since OCCM test code relies on the k8s.io/kubernetes/test/e2e/... packages.
Why do you need to run a go list -m all command?
UPD: as for the regular code, we initialize k8s feature gates in the OCCM cmd: https://github.com/kubernetes/cloud-provider-openstack/blob/8a156e543ca44924a5f26aaf001fb86bcbd100f9/cmd/openstack-cloud-controller-manager/main.go#L39
https://github.com/kubernetes/kubernetes/blob/55f2bc10435160619d1ece8de49a1c0f8fcdf276/pkg/features/kube_features.go#L980-L982
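To make the go mod tidy point concrete: a single import under k8s.io/kubernetes/test/e2e/... is enough for go mod tidy to re-add the module. A minimal, hypothetical test file (the import path is real, the file itself is illustrative):

```go
package e2e

import (
	// Any import below k8s.io/kubernetes/test/e2e/... makes the root
	// module require k8s.io/kubernetes, so `go mod tidy` re-adds it.
	"k8s.io/kubernetes/test/e2e/framework"
)

// Reference a symbol so the import is used; NewDefaultFramework is
// the framework's usual entry point for defining an e2e suite.
var _ = framework.NewDefaultFramework
```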
I do not need the command, but as I wrote, JetBrains' GoLand executes the command to resolve the dependencies. Since the command fails, the error indicators on the right side of a source file look like a red carpet, because the IDE cannot resolve the imports.
I know where the k8s.io/kubernetes dependency is used, but I do not know how to replace it, since the list provided in the Kubernetes repo does not include any of the imported modules as far as I can tell (I usually do not program in Go and I am new to the source code of k8s).
Running go mod tidy brings the k8s.io/kubernetes dependency back, since OCCM test code relies on the k8s.io/kubernetes/test/e2e/... packages. Why do you need to run a go list -m all command?
This command is issued automatically by GoLand each time the repo gets updated; I actually suffer from this too and was just ignoring it. @ProbstDJakob, you can safely ignore it BTW, I assure you. Run go mod vendor manually instead.
So it seems that in order to use the K8s testing framework you indeed need to import the whole of k8s.io/kubernetes. Azure simply doesn't use the framework, GCP doesn't seem to have such tests, and Alibaba Cloud imports it as we do: https://github.com/kubernetes/cloud-provider-alibaba-cloud/blob/master/go.mod#L32
I see the AWS provider works around this by having a separate go.mod for its tests: https://github.com/kubernetes/cloud-provider-aws/blob/master/tests/e2e/go.mod
@ProbstDJakob: Would you want to try implementing AWS approach?
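To make the AWS approach concrete: the e2e tests live in their own module, so the root go.mod never requires k8s.io/kubernetes and go list -m all stays clean there. A hypothetical tests/e2e/go.mod for this repo, modeled on the AWS one (module path and versions are illustrative):

```
module k8s.io/cloud-provider-openstack/tests/e2e

go 1.21

require k8s.io/kubernetes v1.28.3

replace (
	// The same staging-module replaces as sketched earlier; they are
	// now confined to the test module instead of the root go.mod.
	k8s.io/api => k8s.io/api v0.28.3
	k8s.io/apimachinery => k8s.io/apimachinery v0.28.3
	// ...
)
```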
UPD: as for the regular code, we initialize k8s feature gates in the OCCM cmd:
https://github.com/kubernetes/cloud-provider-openstack/blob/8a156e543ca44924a5f26aaf001fb86bcbd100f9/cmd/openstack-cloud-controller-manager/main.go#L39
https://github.com/kubernetes/kubernetes/blob/55f2bc10435160619d1ece8de49a1c0f8fcdf276/pkg/features/kube_features.go#L980-L982
Why do we even do that? I don't see that pattern in other cloud providers. Maybe this one can be removed.
I will look into it next week and open a PR, but what about the line in the OCCM main.go? Is it safe to remove it, or how should I handle that?
#2486 should fix this issue.
Why do we even do that? I don't see that pattern in other cloud providers. Maybe this one can be removed.
Is it safe to remove it or how should I handle that?
I don't know what it does. It was added in https://github.com/kubernetes/cloud-provider-openstack/pull/234/commits/0c8a77402a87b7abd8d0173ec229966c09e67064#diff-c19a66ed1fb964264d7bd09434b6bdbb184644ff0e64154c88d69e10acc26f97R35. If someone has time to investigate this code line, please go ahead.
@dims, maybe you have a clue why it was done?
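For context on what that line most likely does: Kubernetes components register feature gates into a process-global registry, typically from an init() function like the one in the linked kube_features.go. A generic sketch of the pattern, with a hypothetical gate name, not the literal OCCM code:

```go
package main

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

// ExampleGate is a hypothetical feature gate used for illustration.
const ExampleGate featuregate.Feature = "ExampleGate"

func init() {
	// Registering gates mutates the process-global gate set, which is
	// why merely importing a package such as k8s.io/kubernetes/pkg/features
	// has the side effect of making all upstream gates known to (and
	// parseable from --feature-gates in) the importing binary.
	utilruntime.Must(utilfeature.DefaultMutableFeatureGate.Add(
		map[featuregate.Feature]featuregate.FeatureSpec{
			ExampleGate: {Default: false, PreRelease: featuregate.Alpha},
		}))
}

func main() {
	// Querying a gate after registration.
	_ = utilfeature.DefaultFeatureGate.Enabled(ExampleGate)
}
```

If that import side effect is the only reason OCCM pulls in k8s.io/kubernetes here, the registration could in principle be reproduced with this API alone, without the heavyweight dependency.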
@kayrus: Is this fixed…?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Sorry for the late response. Yes, it seems to work.