Add support for selecting kind cluster node provider
What would you like to be added:
I would like for kuttl to manage spinning up and tearing down a kind cluster. I am on macOS using podman (machine) in rootless mode. After following these instructions, I am able to create a new kind cluster using
> KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster
However, kuttl fails to do so, even when I provide the same environment variable:
```
% env KIND_EXPERIMENTAL_PROVIDER=podman k kuttl test
=== RUN kuttl
harness.go:463: starting setup
harness.go:252: running tests with KIND.
harness.go:176: temp folder created /var/folders/91/ns87pdzn6c145ps1x_bfr9gc0000gq/T/kuttl2683441861
harness.go:158: Starting KIND cluster
harness.go:514: cleaning up
harness.go:523: collecting cluster logs to kind-logs-1733397185
harness.go:571: removing temp folder: "/var/folders/91/ns87pdzn6c145ps1x_bfr9gc0000gq/T/kuttl2683441861"
harness.go:577: tearing down kind cluster
harness.go:596: fatal error getting client: running kind with rootless provider requires setting systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/
--- FAIL: kuttl (0.49s)
FAIL
```
Why is this needed:
Oh. Maybe this should be filed as a bug instead?
```
% kubectl-kuttl --version
kubectl-kuttl version 0.20.0
% podman --version
podman version 5.3.1
```
Here is how kind creates a cluster provider (it uses this code, which checks the env var you mentioned).
Here is how kuttl does it.
It shouldn't be hard to add support for podman, but I'm not sure whether it would be appropriate for kuttl to rely on an env var that kind labels KIND and EXPERIMENTAL 🤔
WDYT @ndimiduk ?
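For reference, kind's provider selection is also reachable through its public Go API. A minimal sketch, assuming a kind release that exports cluster.DetectNodeProvider (which honours KIND_EXPERIMENTAL_PROVIDER the same way the CLI does):

```go
package main

import (
	"log"

	"sigs.k8s.io/kind/pkg/cluster"
)

func main() {
	// DetectNodeProvider mirrors what the kind CLI does: it honours the
	// KIND_EXPERIMENTAL_PROVIDER environment variable and otherwise
	// auto-detects docker or podman.
	providerOpt, err := cluster.DetectNodeProvider()
	if err != nil {
		log.Fatalf("detecting kind node provider: %v", err)
	}

	provider := cluster.NewProvider(providerOpt)

	// "kuttl-test" is just a placeholder cluster name for this sketch.
	if err := provider.Create("kuttl-test"); err != nil {
		log.Fatalf("creating kind cluster: %v", err)
	}
	if err := provider.Delete("kuttl-test", ""); err != nil {
		log.Fatalf("deleting kind cluster: %v", err)
	}
}
```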
I had hoped this was a simple matter of not passing the environment variable through. I guess kuttl replicates more of what the kind CLI does than I expected.
As for whether or not to support another project's experimental feature, I cannot say.
Looking some more, the API for explicitly choosing the provider is public and not marked experimental. So I think we should just add a kindClusterNodeProvider option to TestSuite that defaults to empty (unset).
I probably won't have time to do this anytime soon, but will be happy to review a PR!
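To make the proposal concrete, here is a rough sketch of how such a value could map onto kind's public provider options; the package layout and helper name are hypothetical, not existing kuttl code:

```go
package kind

import (
	"fmt"

	"sigs.k8s.io/kind/pkg/cluster"
)

// providerOptionFor is a hypothetical helper mapping the proposed
// kindClusterNodeProvider TestSuite value onto kind's public provider
// options. An empty value yields no option at all (the caller would skip
// appending it), preserving kuttl's current behaviour of using kind's
// default provider.
func providerOptionFor(name string) (cluster.ProviderOption, error) {
	switch name {
	case "":
		return nil, nil
	case "docker":
		return cluster.ProviderWithDocker(), nil
	case "podman":
		return cluster.ProviderWithPodman(), nil
	default:
		return nil, fmt.Errorf("unsupported kind node provider %q", name)
	}
}
```

In the TestSuite this would presumably look something like `kindClusterNodeProvider: podman`, validated when the harness builds its kind provider.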