cluster-api-provider-aws
make test does not terminate kube-apiserver and etcd processes
/kind bug
What steps did you take and what happened: Forked the repo and ran make test. After the run, kube-apiserver and etcd processes remain.
What did you expect to happen: make test to clean up these processes
Environment:
- Cluster-api-provider-aws version: v7.0
- Kubernetes version: (use kubectl version): v1.19.2
- OS (e.g. from /etc/os-release): Ubuntu 20.04
There is teardown code here: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e7d56364b37620241546835b67332ecf6be275bf/bootstrap/eks/controllers/suite_test.go
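For reference, that teardown follows the usual controller-runtime envtest pattern; a minimal sketch of the shape (not the exact CAPA code) looks roughly like this, assuming Ginkgo and controller-runtime's envtest package:

```go
package controllers

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

var (
	cfg     *rest.Config
	testEnv *envtest.Environment
)

func TestAPIs(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Controller Suite")
}

var _ = BeforeSuite(func() {
	// Start a local kube-apiserver and etcd via envtest.
	testEnv = &envtest.Environment{}
	var err error
	cfg, err = testEnv.Start()
	Expect(err).NotTo(HaveOccurred())
	Expect(cfg).NotTo(BeNil())
})

var _ = AfterSuite(func() {
	// Stop the kube-apiserver and etcd started above. If Stop is never
	// reached, the binaries keep running, which is the leak in this issue.
	Expect(testEnv.Stop()).To(Succeed())
})
```

Leftover binaries like the ones reported above would be expected whenever a suite exits without Stop() being called (a panic, a timeout, an interrupted run); whether that is also happening on a clean make test run is exactly what this issue is about.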
Do you have the source code in the GOPATH? How and where do you have etcd/api-server installed?
As per #2752, I was using the wrong directory, but I have now cloned my fork into $GOPATH/sigs.k8s.io/cluster-api-provider-aws.
However, I am still seeing leftover processes:
ps -ax -o pid,pgid,cmd | grep "/tmp/kubebuilder" | grep -v grep
1054517 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:35447 --data-dir=/tmp/k8s_test_framework_310196060 --listen-client-urls=http://127.0.0.1:35447 --listen-peer-urls=http://localhost:0
1054904 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_027252491 --client-ca-file=/tmp/k8s_test_framework_027252491/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:35447 --insecure-port=0 --secure-port=38511 --service-account-issuer=https://127.0.0.1:38511/ --service-account-key-file=/tmp/k8s_test_framework_027252491/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_027252491/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
1055191 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:35873 --data-dir=/tmp/k8s_test_framework_439884657 --listen-client-urls=http://127.0.0.1:35873 --listen-peer-urls=http://localhost:0
1055287 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:36017 --data-dir=/tmp/k8s_test_framework_224035017 --listen-client-urls=http://127.0.0.1:36017 --listen-peer-urls=http://localhost:0
1055386 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_796444188 --client-ca-file=/tmp/k8s_test_framework_796444188/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:35873 --insecure-port=0 --secure-port=43193 --service-account-issuer=https://127.0.0.1:43193/ --service-account-key-file=/tmp/k8s_test_framework_796444188/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_796444188/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
1055519 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_199601556 --client-ca-file=/tmp/k8s_test_framework_199601556/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:36017 --insecure-port=0 --secure-port=45401 --service-account-issuer=https://127.0.0.1:45401/ --service-account-key-file=/tmp/k8s_test_framework_199601556/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_199601556/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
I have etcd and kube-apiserver binaries in /usr/local/bin, but the test setup seems to install its own copies into /tmp/kubebuilder/bin and use those (output below; see the note after it on how envtest picks its binaries).
go test -v ./...
fetching tools
kubebuilder/
kubebuilder/bin/
kubebuilder/bin/etcd
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
setting up env vars
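That is expected with this tooling: as far as I can tell, envtest does not look in /usr/local/bin at all. It resolves its binaries from the KUBEBUILDER_ASSETS (or per-binary TEST_ASSET_*) environment variables, falling back to /usr/local/kubebuilder/bin, and the "setting up env vars" step above points those at /tmp/kubebuilder/bin. A hedged sketch of pointing envtest at your own binaries instead (the path is illustrative and assumes etcd, kube-apiserver and kubectl live there):

```go
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func main() {
	// KUBEBUILDER_ASSETS is the standard envtest variable for the directory
	// containing etcd, kube-apiserver and kubectl; /usr/local/bin is only an
	// illustrative path for this sketch.
	os.Setenv("KUBEBUILDER_ASSETS", "/usr/local/bin")

	testEnv := &envtest.Environment{}
	cfg, err := testEnv.Start()
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to start envtest:", err)
		os.Exit(1)
	}
	fmt.Println("apiserver running at", cfg.Host)

	// Stop the control-plane binaries again; skipping this is exactly how
	// stray etcd/kube-apiserver processes accumulate.
	if err := testEnv.Stop(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to stop envtest:", err)
	}
}
```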
Yeah, this has always happened to me too, and I've had to run killall -9 etcd separately, etc.
@sbueringer has been touching some of the envtest stuff in the core CAPI repo so I don't know if there's something better we could be doing here.
Not sure. FYI, an issue in the main repo which could be related: https://github.com/kubernetes-sigs/cluster-api/issues/4278
Apart from that: my recent changes were mostly about centralizing the envtest setup code (https://github.com/kubernetes-sigs/cluster-api/blob/3e54e8c939090c97a718a553ee6dce5b4c054731/internal/envtest/environment.go#L95-L124) and making it possible to run integration tests against a local kind cluster (i.e. not using envtest at all): https://github.com/kubernetes-sigs/cluster-api/pull/5102
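For anyone reading along, the centralized approach linked above boils down to starting one shared envtest environment per test binary and stopping it before the process exits; a rough, hedged sketch of that shape (not the actual CAPI helper) is:

```go
package controllers

import (
	"fmt"
	"os"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

// env is the shared test environment for this package's tests.
var env *envtest.Environment

// TestMain starts a single envtest environment for the whole package and
// stops it before the process exits, so kube-apiserver and etcd are not left
// behind when the tests finish normally.
func TestMain(m *testing.M) {
	env = &envtest.Environment{}
	if _, err := env.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to start envtest:", err)
		os.Exit(1)
	}

	code := m.Run()

	// Stop must run before os.Exit; deferred calls would be skipped by os.Exit.
	if err := env.Stop(); err != nil {
		fmt.Fprintln(os.Stderr, "failed to stop envtest:", err)
	}
	os.Exit(code)
}
```

Tying Stop() to TestMain (or a single AfterSuite) at least guarantees cleanup on a normal run; it still can't help if the test process is killed outright.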
Yup, I think https://github.com/kubernetes-sigs/cluster-api/issues/4278 is pretty much the same as I get.
/priority important-longterm
/area testing
/triage accepted
/milestone backlog
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I also got hit by this a couple of times, to the point that my machine was very slow due to 10+ instances of the API server and etcd running.
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten