Should `helm_resource` always use `k8s_context()`?
It feels like `helm_resource` should respect `k8s_context()` by passing `--kube-context={k8s_context()}` (pseudocode) to helm. I've got a naive, untested implementation here, but I'm guessing there might be a better, more maintainable solution?
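For illustration, here's the shape of what I mean, as a call-site workaround rather than a patch to the extension itself (a sketch only, untested; it assumes `helm_resource` forwards its `flags` parameter to helm, and the resource/chart names are made up):

```python
load('ext://helm_resource', 'helm_resource')

# Sketch: pin the helm invocation to the context the Tiltfile resolved,
# using helm's --kube-context flag.
helm_resource(
    'helloworld',           # hypothetical resource name
    './charts/helloworld',  # hypothetical chart path
    flags=['--kube-context', k8s_context()],
)
```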
I believe the way it currently works is that Tilt generates a custom kubeconfig containing only one context for subprocesses to use, sets the `KUBECONFIG` environment variable, and helm respects that. There's no need to pass `--kube-context` directly.
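One way to sanity-check that from a Tiltfile (a sketch; the resource name is arbitrary): a resource's subprocess should see the Tilt-generated kubeconfig in its environment when it runs.

```python
# Prints the kubeconfig path Tilt injects for resource subprocesses.
local_resource(
    'print-kubeconfig',
    cmd='echo "KUBECONFIG is: $KUBECONFIG"',
)
```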
There doesn't appear to be any bug report here, so I'm going to close this issue. Let me know if I missed something!
Update: this definitely doesn't work as intended, at least for the `delete_cmd` part of `helm_resource`.
Can we reopen this and #598?
Sure, can you add more details on what you tried, what you expected to happen, and what happened instead?
For what it's worth, here are the repro steps I tried:
1. Created two kubectl contexts: kind, and a dummy kubectl context connected to a dead cluster.
2. Ran `kubectl config use-context kind-kind`
3. Ran `tilt up` in https://github.com/tilt-dev/tilt-extensions/tree/master/helm_resource/test
4. Ran `kubectl config use-context dummy`
5. Ran `tilt trigger helloworld`
6. Confirmed that the helm chart was deleted and re-installed to the kind cluster successfully
Instead, for step 2, run `kubectl config use-context dummy`, then for step 3, `tilt up --context kind-kind`.
(thx for trying to repro btw :) )
We received a bug report about kube contexts sometimes not resolving correctly if you were using Docker Compose and Kubernetes in the same Tiltfile: https://github.com/tilt-dev/tilt/issues/6453
Was that possibly the issue here?
I was building images with Docker, but no Docker Compose.
If you're just seeing this comment: the issue is that tilt's `--context` flag is not being respected by `helm_resource`.
(obviously, when you get a chance) can you try:
1. Create two kubectl contexts: kind, and a dummy kubectl context connected to a dead cluster.
2. Run `kubectl config use-context dummy`
3. Run `tilt up --context kind-kind` in https://github.com/tilt-dev/tilt-extensions/tree/master/helm_resource/test
4. Check whether tilt installs into the kind cluster or errors trying to install into the dead cluster
Sure, here's what I tried: https://github.com/tilt-dev/tilt-extensions/pull/624
It seems to work correctly: https://app.circleci.com/pipelines/github/tilt-dev/tilt-extensions/1726/workflows/0f14996e-bca2-4753-8fb4-dbd6b96b1b4a/jobs/4203
Throwing my 👍 in here as well. As we are onboarding to Tilt, I discovered a number of commands that unexpectedly default to the locally active kubectl context. No data loss yet, fortunately, but a lot of corrupted data on both local and testing clusters due to commands reading from and writing to different contexts.
I've been doing the following in our Tiltfiles, combined with a lint rule that ensures exactly two instances of bare kubectl (one for the comment, the other for the format string):
```python
# Wrap kubectl so every call is pinned to the Tiltfile's resolved context.
def kubectl_cmd(*args):
    return 'kubectl --context {ctx} {args}'.format(
        ctx=k8s_context(),
        args=' '.join([shlex.quote(str(x)) for x in args]),
    )
```
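Usage then looks something like this (hypothetical resource and manifest names):

```python
# Every kubectl invocation goes through the wrapper, so it is pinned
# to the context the Tiltfile resolved via k8s_context().
local_resource(
    'seed-db',
    cmd=kubectl_cmd('apply', '-f', 'seed.yaml'),
)
```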
I'm happy to PR a fix to this if the Tilt team can offer guidance on how best to contribute. It looks like there's a repro case in a comment above, but that's from nearly a year ago at this point.
Ultimately, thank you for tilt!
This ended up becoming a blocker for us, so I opened #647.
For others who need it, feel free to give it a shot:
```python
# Point the extension system at a fork that carries the fix,
# then load helm_resource and helm_remote from it.
v1alpha1.extension_repo(name='default', url='https://github.com/wandercom/tilt-extensions')

v1alpha1.extension(name='helm_resource', repo_name='default', repo_path='helm_resource')
load('ext://helm_resource', 'helm_resource', 'helm_repo')

v1alpha1.extension(name='helm_remote', repo_name='default', repo_path='helm_remote')
load('ext://helm_remote', 'helm_remote')
```
@blast-hardcheese Hmm... I think you're confused. The only repro case in this issue is https://github.com/tilt-dev/tilt-extensions/pull/624, which demonstrated that tilt is currently working as expected.
If you have a repro case, can you post it? I'd expect a repro case to include:
- a full tiltfile
- a script to run it
- a test that demonstrates that the actual behavior does not match the expected behavior, such that it can be run in CI
@nicks I am surely confused! Even what you said in the closing PR comment is proving difficult to verify locally; I'd appreciate some guidance:
local("echo KUBECONFIG: '-->' $KUBECONFIG '<--'")
local("env | grep -i kube || true")
yields the following:
```
local: echo KUBECONFIG: '-->' $KUBECONFIG '<--'
→ KUBECONFIG: --> <--
local: env | grep -i kube || true
→ [no output]
Successfully loaded Tiltfile (21.308291ms)
```
I have verified that helm uses `KUBECONFIG` (despite its `--help` stating it requires `HELM_KUBECONFIG`)...
```shell
$ cp -rf ~/.kube /tmp/kube-ephemeral
$ kubectl config set-cluster bug-report --server=https://127.0.0.1:10
$ kubectl config set-context bug-report --cluster=bug-report
$ kubectl config use-context bug-report
$ KUBECONFIG=/tmp/kube-ephemeral/config helm upgrade --install typesense-operator tyko/typesense-operator
Release "typesense-operator" does not exist. Installing it now.
```
which is indeed excellent. I just can't understand how it works, considering the `echo` and `env` above didn't produce what I expected.
Are you using `helm_resource`? I think it uses `local_resource`, not `local()`.
Wild! `local_resource` certainly does operate differently from `local`; I wouldn't have expected that at all. Maybe the resolution here is actually to document this behavior 🙂
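For anyone else landing here, the distinction I'm seeing boils down to this (a sketch based on the output above; the resource name is arbitrary):

```python
# Runs at Tiltfile-load time, before Tilt has generated its per-context
# kubeconfig, so KUBECONFIG appears empty here.
local('echo "at load time: $KUBECONFIG"')

# Runs later as a resource subprocess, where Tilt injects the generated
# kubeconfig via the KUBECONFIG environment variable.
local_resource(
    'print-kubeconfig',
    cmd='echo "at run time: $KUBECONFIG"',
)
```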
How best can we help with that?
Thank you!