e2e-framework
Gracefully exit when setup fails
https://github.com/kubernetes-sigs/e2e-framework/blob/d19e8b569fb2f94568f8ed74b00227f879798aa0/pkg/env/env.go#L369
Due to the usage of klog.Fatalf(), which internally calls os.Exit(), we can't gracefully run the cleanups.
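To illustrate why cleanup never runs, here is a minimal, self-contained Go sketch (not the framework's actual code path; the setup helper is hypothetical) mimicking the behavior: klog.Fatalf logs the error and then calls os.Exit, so any deferred or subsequent cleanup is skipped.

```go
package main

import (
	"fmt"

	"k8s.io/klog/v2"
)

// setup simulates a failing environment setup step (hypothetical helper).
func setup() error {
	return fmt.Errorf("simulated setup failure")
}

func main() {
	// This deferred cleanup never runs: klog.Fatalf logs the message and
	// then calls os.Exit, which bypasses deferred functions entirely.
	defer klog.Info("cleaning up test resources") // never reached

	if err := setup(); err != nil {
		klog.Fatalf("setup failed: %v", err) // process exits here
	}
}
```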
@AlmogBaku thank you for pointing this out. Can you provide a bit more information or context so that an appropriate fix can be implemented?
@AlmogBaku if the workflow were to fail during the setup, wouldn't you want to retain the setup intact instead of cleaning up? That would make it easier to dig into what actually caused the setup to fail, right?
There is a similar option called disable-graceful-teardown, which we added a while ago, that can prevent teardown on test failure. Are you suggesting a similar option that can handle setup failures?
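For context, a hedged sketch of how such a flag would be consumed, assuming the config is built from CLI flags with envconf.NewFromFlags(); exact flag names and behavior may differ between versions:

```go
package e2e

import (
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
)

var testEnv env.Environment

func TestMain(m *testing.M) {
	// Build the config from CLI flags so options such as
	// --disable-graceful-teardown are picked up, e.g.:
	//   go test -v ./... -args --disable-graceful-teardown
	cfg, err := envconf.NewFromFlags()
	if err != nil {
		panic(err)
	}
	testEnv = env.NewWithConfig(cfg)
	os.Exit(testEnv.Run(m))
}
```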
Hi, I also hit this issue. envfuncs.LoadDockerImageToCluster() was failing in my setup function, which resulted in the kind cluster not being destroyed.
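A hedged reproduction sketch of that scenario, assuming the envfuncs helpers named below exist with these signatures in the version in use (the image name is a placeholder): if LoadDockerImageToCluster fails during Setup, the process exits via klog.Fatalf before the Finish funcs run, so the kind cluster is never destroyed.

```go
package e2e

import (
	"os"
	"testing"

	"sigs.k8s.io/e2e-framework/pkg/env"
	"sigs.k8s.io/e2e-framework/pkg/envconf"
	"sigs.k8s.io/e2e-framework/pkg/envfuncs"
)

var testEnv env.Environment

func TestMain(m *testing.M) {
	testEnv = env.NewWithConfig(envconf.New())
	clusterName := envconf.RandomName("e2e", 16)

	testEnv.Setup(
		envfuncs.CreateKindCluster(clusterName),
		// If loading the image fails (e.g. the image was never built),
		// setup aborts via klog.Fatalf and the Finish funcs below never run.
		envfuncs.LoadDockerImageToCluster(clusterName, "example.local/my-image:dev"),
	)

	testEnv.Finish(
		// Leaks the kind cluster when setup fails before reaching Finish.
		envfuncs.DestroyKindCluster(clusterName),
	)

	os.Exit(testEnv.Run(m))
}
```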
@harshanarayana I went ahead and labeled this as a bug. The internal logger should not disrupt the execution flow of the user's test.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I'd like to work on this issue. I'll try to reproduce the problem of os.Exit after a failed setup as my first step.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale