kuttl
Not all created objects are deleted when using a specific namespace
What happened: When the test is done, an object is still there:

```
$ kgp -n n1   # kgp is a common alias for "kubectl get pods"
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          86s
```
What you expected to happen: I'd expect kuttl to delete all objects created during a test case, in my case pod1 and pod2, and not only pod2.
How to reproduce it (as minimally and precisely as possible): have the following structure:

```
multi-resources
├── 00-assert.yaml
├── 00-pod1.yaml
├── 01-assert.yaml
└── 01-pod2.yaml
```
with the following files https://gist.github.com/cscetbon/3064597e8cb51af2d3efdd6b81a80b4c
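For reference, the first step file presumably looks something like the sketch below. This is a hypothetical reconstruction, not the gist's actual contents: the pod name and the `n1` namespace are taken from the log output further down, and the container image is an assumption. The key point is that the pod is created in a pre-existing namespace rather than in the namespace kuttl generates for the test.

```yaml
# 00-pod1.yaml -- hypothetical reconstruction of the first step.
# The pod targets the pre-created namespace n1 instead of the
# kuttl-generated test namespace.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: n1
spec:
  containers:
    - name: main
      image: nginx   # image is an assumption
```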
Anything else we need to know?: Here are the logs from kuttl:

```
$ kuttl test
=== RUN kuttl
harness.go:447: starting setup
harness.go:248: running tests using configured kubeconfig.
harness.go:275: Successful connection to cluster at: https://34.77.7.231
harness.go:343: running tests
harness.go:74: going to run test suite with timeout of 30 seconds for each step
harness.go:355: testsuite: . has 1 tests
=== RUN kuttl/harness
=== RUN kuttl/harness/multi-resources
=== PAUSE kuttl/harness/multi-resources
=== CONT kuttl/harness/multi-resources
logger.go:42: 23:20:21 | multi-resources | Ignoring .DS_Store as it does not match file name regexp: ^(\d+)-([^.]+)(.yaml)?$
logger.go:42: 23:20:21 | multi-resources | Creating namespace: kuttl-test-worthy-mastiff
logger.go:42: 23:20:21 | multi-resources/0-pod1 | starting test step 0-pod1
logger.go:42: 23:20:23 | multi-resources/0-pod1 | Pod:n1/pod1 created
logger.go:42: 23:20:25 | multi-resources/0-pod1 | test step completed 0-pod1
logger.go:42: 23:20:25 | multi-resources/1-pod2 | starting test step 1-pod2
logger.go:42: 23:20:27 | multi-resources/1-pod2 | Pod:n1/pod2 created
logger.go:42: 23:20:29 | multi-resources/1-pod2 | test step completed 1-pod2
logger.go:42: 23:20:30 | multi-resources | Failed to collect events for multi-resources in ns kuttl-test-worthy-mastiff: no matches for kind "Event" in version "events.k8s.io/v1beta1"
logger.go:42: 23:20:30 | multi-resources | Deleting namespace: kuttl-test-worthy-mastiff
=== CONT kuttl
harness.go:389: run tests finished
harness.go:495: cleaning up
harness.go:550: removing temp folder: ""
--- PASS: kuttl (12.74s)
    --- PASS: kuttl/harness (0.00s)
        --- PASS: kuttl/harness/multi-resources (9.33s)
PASS
```
Environment:
- Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-12T01:09:16Z", GoVersion:"go1.15.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.12-gke.300", GitCommit:"f9137674d7d28bc1e1197736ee45fea71e844873", GitTreeState:"clean", BuildDate:"2020-11-18T09:16:44Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}
```
- KUTTL version (use `kubectl kuttl version`):
```
KUTTL Version: version.Info{GitVersion:"0.7.2", GitCommit:"e310360", BuildDate:"2020-11-03T21:33:24Z", GoVersion:"go1.15.3", Compiler:"gc", Platform:"darwin/amd64"}
```
- Cloud provider or hardware configuration: Google Cloud
I believe that kuttl cleans up only the namespace in which the tests are run. Any reason why you're creating pods in a different namespace?
Because I can't create namespaces on the fly (I don't have the permissions), the idea is to have a set of existing namespaces that can be used exclusively for the tests, with the exact same configuration as prod, meaning all the missing permissions etc... I still see kuttl creating namespaces on my local k8s, but it uses the namespaces provided in my declared objects; the generated ones just get created and deleted with nothing inside, as far as I can tell. I also see objects get deleted, but not all created objects, which doesn't make sense to me: if it deleted some of them, why not all of them?
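For the pre-existing-namespace use case, kuttl releases newer than the 0.7.2 used here added a suite-level namespace setting; the sketch below assumes that `namespace` field is available in your version (verify against your version's TestSuite docs before relying on it). The `testDirs` path is illustrative.

```yaml
# kuttl-test.yaml -- hypothetical suite config. The `namespace` field
# (if supported by your kuttl version) runs all steps against an
# existing namespace instead of generating a throwaway one; kuttl
# will not delete a namespace it did not create.
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e   # path is an assumption
namespace: n1
timeout: 30
```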
I also ran into the same case: only one resource of a test case is deleted, and the others are left behind.
Logged in #397 as well. In our case, we need to test cluster-scoped resources too. We honestly don't need kuttl to create any namespaces on demand, as we expect to hard-code those in the resource definitions themselves. We simply need a way for kuttl to clean up any resources we ask it to create, irrespective of the final test result (success or failure). The problem with a workaround is that when a test step fails, we cannot proceed to a final step that uses a Command declaration to manually remove the resources, since there's no setting to continue on failure (only to ignore failure, which converts fail => success).
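The cleanup workaround described above could look like the sketch below: a final TestStep whose command deletes the leftovers. The pod names and namespace are taken from this issue; as noted, the limitation is that kuttl never reaches this step once an earlier step has failed, and `ignoreFailure` only keeps a failing delete from failing the test.

```yaml
# 02-cleanup.yaml -- hypothetical manual-cleanup step. This only runs
# if all earlier steps passed; `ignoreFailure` converts a failing
# delete into a pass, it does not make kuttl continue past a failure.
apiVersion: kuttl.dev/v1beta1
kind: TestStep
commands:
  - command: kubectl delete pod pod1 pod2 -n n1 --ignore-not-found
    ignoreFailure: true
```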