terratest
Excessive logging when using k8s client
In some scenarios (for example, when waiting for pods to become available), a lot of unnecessary log messages are produced, and they cannot be discarded because the k8s module has no support for suppressing its logs. The culprit I am referring to is here: https://github.com/gruntwork-io/terratest/blob/f4f2459dafaf57deaed2d91952e76ae5ffded9e6/modules/k8s/client.go#L42
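For context, the linked line lives in the helper that turns `KubectlOptions` into a Kubernetes client. Paraphrased from that commit (a sketch, not the verbatim source; internal helper names are approximate), it looks roughly like this:

```go
// Paraphrased sketch of modules/k8s/client.go at the commit linked above.
// Every k8s helper (including each retry inside WaitUntilPodAvailable)
// builds its client through this path, so the message repeats on every attempt.
func GetKubernetesClientFromOptionsE(t testing.TestingT, options *KubectlOptions) (*kubernetes.Clientset, error) {
	kubeConfigPath, err := options.GetConfigPath(t)
	if err != nil {
		return nil, err
	}

	// client.go:42 -- logged unconditionally, with no way to opt out.
	logger.Logf(t, "Configuring Kubernetes client using config file %s with context %s",
		kubeConfigPath, options.ContextName)

	config, err := LoadApiClientConfigE(kubeConfigPath, options.ContextName)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}
```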
To give you an example: when our team is testing our Elastic cluster, we first wait for all the replicas to be available:

```go
k8s.WaitUntilPodAvailable(t, options, esOperatorPodName, retries, sleep)
k8s.WaitUntilPodAvailable(t, options, esClusterPodName, retries, sleep)
k8s.WaitUntilPodAvailable(t, options, esKibanaPodName, retries, sleep)
```
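For reference, a minimal self-contained test using that pattern looks like the sketch below (the namespace, retry count, and sleep duration are illustrative placeholders, not our real values):

```go
package test

import (
	"testing"
	"time"

	"github.com/gruntwork-io/terratest/modules/k8s"
)

func TestElasticCluster(t *testing.T) {
	// Placeholder values for illustration only.
	options := k8s.NewKubectlOptions("", "/root/.kube/config", "elastic-system")
	retries := 60
	sleep := 5 * time.Second

	// Each retry re-creates the Kubernetes client, which re-logs the
	// "Configuring Kubernetes client..." message shown in the output below.
	k8s.WaitUntilPodAvailable(t, options, "elasticsearch-es-default-0", retries, sleep)
}
```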
I propose a change (see the output below) where this "Configuring Kubernetes client using config file /root/.kube/config with context" message is only logged when the kubectl options are created, or alternatively some kind of log-level option so that these messages can be suppressed. I know it seems trivial, but we run a lot of parallel tests, and in some scenarios we can end up with thousands of these messages, making it hard to see what is actually going on in our tests.
```
...
TestElasticCluster 2022-01-14T13:44:54Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:44:54Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context 
TestElasticCluster 2022-01-14T13:44:54Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:44:59Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:44:59Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context 
TestElasticCluster 2022-01-14T13:44:59Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:45:04Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:45:04Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context 
TestElasticCluster 2022-01-14T13:45:04Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:45:09Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:45:09Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context 
TestElasticCluster 2022-01-14T13:45:09Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
....
```
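To make the second alternative concrete: terratest's `logger` package already ships a `logger.Discard` logger that drops all output, so one hypothetical shape for the fix (not the current API; the `Logger` field and `logf` helper below are illustrative) would be a per-options logger that the k8s helpers route their messages through:

```go
// Hypothetical sketch, not terratest's current API.
type KubectlOptions struct {
	ContextName string
	ConfigPath  string
	Namespace   string
	Env         map[string]string
	Logger      *logger.Logger // nil means "use the package-level default"
}

// The unconditional logger.Logf call in client.go would go through this instead.
func logf(t testing.TestingT, options *KubectlOptions, format string, args ...interface{}) {
	if options.Logger != nil {
		options.Logger.Logf(t, format, args...)
		return
	}
	logger.Logf(t, format, args...)
}
```

A test that wants quiet waits could then set `Logger: logger.Discard` on its options, while the default behavior stays unchanged.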
I also encountered this problem in the Apache APISIX Ingress controller project [1]; the large volume of useless logs makes it difficult for us to extract the useful information from them.
[1] https://github.com/apache/apisix-ingress-controller/
Thanks @lingsamuel for the PR! When should we expect it to be merged?
Any news on this issue?