external-snapshotter
Shortnames conflict
What happened: No resources found in default namespace.
What you expected to happen: A list of VirtualService resources
How to reproduce it: kubectl get vs
Anything else we need to know?:
kubectl api-resources | grep vs

    virtualservices          vs                  networking.istio.io/v1beta1   true    VirtualService
    volumesnapshotclasses    vsclass,vsclasses   snapshot.storage.k8s.io/v1    false   VolumeSnapshotClass
    volumesnapshotcontents   vsc,vscs            snapshot.storage.k8s.io/v1    false   VolumeSnapshotContent
    volumesnapshots          vs                  snapshot.storage.k8s.io/v1    true    VolumeSnapshot
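Since both API groups register vs, kubectl appears to resolve the short name to VolumeSnapshot on this cluster, which is why kubectl get vs reports no resources in the default namespace instead of listing VirtualServices. One way to sidestep the ambiguity, assuming the API groups shown above, is to query by fully qualified resource names:

    # Unambiguous regardless of which CRD currently claims the "vs" short name
    kubectl get virtualservices.networking.istio.io
    kubectl get volumesnapshots.snapshot.storage.k8s.io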
Environment:
- Driver version: controller-gen.kubebuilder.io/version: v0.4.0
- Kubernetes version (use kubectl version): v1.21.9 or v1.22.6
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a): Linux cent7 3.10.0-957.el7.x86_64
- Install tools: kubectl v1.24.0
- Others: the VolumeSnapshot CRD declares
  shortNames:
  - vs
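To confirm which CRDs claim the short name, the declarations can also be read directly from the CRD objects. A quick check, assuming both CRDs are installed under their default names:

    # Each command prints the declared short names, e.g. ["vs"] on an affected cluster
    kubectl get crd volumesnapshots.snapshot.storage.k8s.io -o jsonpath='{.spec.names.shortNames}'
    kubectl get crd virtualservices.networking.istio.io -o jsonpath='{.spec.names.shortNames}'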
Unfortunately, there isn't much that can be done about this.
CRD short names aren't reserved, so any community CRD short name can conflict with another.
As a workaround, you can set up your cluster(s) with the Istio CRDs first and install the snapshot CRDs second; this lets the Istio CRDs claim the short name.
Or you can rename vs to vsn, for example.
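For reference, a rough sketch of the install-order workaround mentioned above; the exact commands depend on how Istio and the snapshot CRDs are delivered in your environment (istioctl and the client/config/crd path are assumptions, not the only option):

    # Register the Istio CRDs first so they claim the "vs" short name ...
    istioctl install --set profile=default
    # ... then install the external-snapshotter CRDs from the repo checkout
    kubectl apply -f client/config/crd/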
I'm assuming we wouldn't want to rename vs to vsn as that isn't backwards compatible, but will defer to @xing-yang.
Right, we don't want to make backward incompatible changes.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.