krew
Proposal: kubectl krew list should not require a plugin index
When developing plugins, or installing them from an archive, `kubectl krew list` should show all installed plugins even if the default plugin index does not exist, and should not force you to use it.
Could you show the commands you ran that failed where you think they should succeed? I believe we allow krew to run without a default index (although krew-index ships as the default, so it would need to be removed after the fact with `kubectl krew index remove default`).
```
kubectl krew list
```

comes back with:

```
krew local plugin index is not initialized (run "kubectl krew update")
```

If you run `kubectl krew update` as suggested, it works as designed, but that again requires the default plugin index.

If you remove the default index with `kubectl krew index remove default`, or by deleting the `default` folder in the krew installation, it comes back with the same error:

```
krew local plugin index is not initialized (run "kubectl krew update")
```

The only workaround I've found so far is to `git init` in the empty `default` folder. Then `kubectl krew list` returns what is installed without setting up the index.
I can repro it this way:

```sh
kubectl krew install tree whoami
kubectl krew index remove default --force
# no indexes exist at this point
kubectl krew list
# fails with: krew local plugin index is not initialized (run "kubectl krew update")
# even though some plugins are installed
```

However, if you just add a random index, it starts working again:

```sh
kubectl krew index add other https://github.com/ahmetb/krew-index.git
kubectl krew list
# (shows previously installed plugins)
```
I don't know why we have this pre-run check in the list command, but it might be handling an edge case we have forgotten about at this point.
What I don't understand is why someone would try to use krew without any indexes. Can you clarify @CarlosEsco?
@ahmetb We are in an air-gapped cluster without git repo access, internally or externally, and have to side-load plugins with the `--manifest`/`--archive` options.
As for the pre-run check, I feel it should at worst warn but continue, instead of completely failing or requiring an empty git repository.
I feel like in that scenario you lose a lot of the benefits of krew, since you're basically just manually installing binaries. I think uninstall and list would be the only really usable functions, and you can already do `kubectl plugin list` to somewhat make up for `kubectl krew list` (although it will only list the `kubectl-*` binaries on your PATH).
FWIW, installing plugins with `--manifest`/`--archive` was only meant for development use (by plugin authors), not quite for what you are doing. In your case, it might be wiser to just ship the binary and add it to PATH instead of using krew.
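The ship-the-binary approach can be sketched like this (the directory and plugin name are illustrative; the underlying mechanism is that kubectl discovers any executable named `kubectl-<name>` on PATH as a plugin):

```shell
# Illustrative: side-load a plugin binary without krew.
# kubectl treats any executable named "kubectl-<name>" on PATH as a plugin.
install_dir="$HOME/.local/bin"            # any directory on your PATH works
mkdir -p "$install_dir"

# Stand-in for a real plugin binary copied out of a release archive:
printf '#!/bin/sh\necho "hello from plugin"\n' > "$install_dir/kubectl-hello"
chmod +x "$install_dir/kubectl-hello"

export PATH="$install_dir:$PATH"
kubectl-hello    # runs directly; "kubectl hello" resolves to the same binary
```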
I agree though the check doesn’t do anything useful here and we can remove it.
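A warn-and-continue version of the check could look like this sketch (the `$KREW_ROOT/index/<name>` layout and the exact messages are assumptions for illustration, not krew's actual code, which is written in Go):

```shell
# Hypothetical sketch of a softer pre-run check for "krew list":
# if no index checkouts exist, warn on stderr but keep going, since
# listing installed plugins only needs the local install receipts.
index_root="${KREW_ROOT:-$HOME/.krew}/index"

have_index=no
for d in "$index_root"/*/; do
  # an index checkout is a directory containing a .git repository
  [ -d "$d/.git" ] && have_index=yes && break
done

if [ "$have_index" = no ]; then
  echo 'warning: no plugin indexes configured (run "kubectl krew update" to add the default)' >&2
fi
# ...continue and list installed plugins from the receipts directory...
```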
Agree with both your points. We just found it easier for users managing the cluster to be able to see what is installed for Kubernetes and to use the plugins as "native" extensions, versus handling the binaries themselves. We are also able to reuse the Ansible plays that point at the index for the non-air-gapped systems.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.