cluster-api-provider-aws

make test does not terminate kube-apiserver and etcd processes

nab-gha opened this issue 3 years ago • 19 comments

/kind bug

What steps did you take and what happened: Forked the repo and ran make test. After the run, kube-apiserver and etcd processes remain.

What did you expect to happen: make test to clean up these processes.

Environment:

  • Cluster-api-provider-aws version: v7.0
  • Kubernetes version (use kubectl version): v1.19.2
  • OS (e.g. from /etc/os-release): Ubuntu 20.04

nab-gha avatar Sep 10 '21 07:09 nab-gha

There is teardown code here: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/e7d56364b37620241546835b67332ecf6be275bf/bootstrap/eks/controllers/suite_test.go

Do you have the source code in the GOPATH? How and where do you have etcd/api-server installed?

richardcase avatar Sep 10 '21 07:09 richardcase

As per #2752 I was using the wrong directory, but I have now cloned my fork into $GOPATH/sigs.k8s.io/cluster-api-provider-aws.

However, I am still seeing leftover processes:

ps -ax -o pid,pgid,cmd | grep "/tmp/kubebuilder"  | grep -v grep
1054517 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:35447 --data-dir=/tmp/k8s_test_framework_310196060 --listen-client-urls=http://127.0.0.1:35447 --listen-peer-urls=http://localhost:0
1054904 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_027252491 --client-ca-file=/tmp/k8s_test_framework_027252491/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:35447 --insecure-port=0 --secure-port=38511 --service-account-issuer=https://127.0.0.1:38511/ --service-account-key-file=/tmp/k8s_test_framework_027252491/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_027252491/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
1055191 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:35873 --data-dir=/tmp/k8s_test_framework_439884657 --listen-client-urls=http://127.0.0.1:35873 --listen-peer-urls=http://localhost:0
1055287 1053483 /tmp/kubebuilder/bin/etcd --advertise-client-urls=http://127.0.0.1:36017 --data-dir=/tmp/k8s_test_framework_224035017 --listen-client-urls=http://127.0.0.1:36017 --listen-peer-urls=http://localhost:0
1055386 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_796444188 --client-ca-file=/tmp/k8s_test_framework_796444188/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:35873 --insecure-port=0 --secure-port=43193 --service-account-issuer=https://127.0.0.1:43193/ --service-account-key-file=/tmp/k8s_test_framework_796444188/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_796444188/sa-signer.key --service-cluster-ip-range=10.0.0.0/24
1055519 1053483 /tmp/kubebuilder/bin/kube-apiserver --allow-privileged=true --authorization-mode=RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/k8s_test_framework_199601556 --client-ca-file=/tmp/k8s_test_framework_199601556/client-cert-auth-ca.crt --disable-admission-plugins=ServiceAccount --etcd-servers=http://127.0.0.1:36017 --insecure-port=0 --secure-port=45401 --service-account-issuer=https://127.0.0.1:45401/ --service-account-key-file=/tmp/k8s_test_framework_199601556/sa-signer.crt --service-account-signing-key-file=/tmp/k8s_test_framework_199601556/sa-signer.key --service-cluster-ip-range=10.0.0.0/24

I have etcd and kube-apiserver binaries in /usr/local/bin, but the test run seems to install its own copies into /tmp/kubebuilder/bin and use those:

go test -v ./...
fetching tools
kubebuilder/
kubebuilder/bin/
kubebuilder/bin/etcd
kubebuilder/bin/kubectl
kubebuilder/bin/kube-apiserver
setting up env vars
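For reference, controller-runtime's envtest honors a few environment variables that let you point tests at binaries you already have installed, instead of the copies it fetches into /tmp/kubebuilder/bin. A sketch, assuming the project's Makefile passes these through to go test (it may override them):

```shell
# Point envtest at a directory that already contains etcd, kube-apiserver and kubectl.
export KUBEBUILDER_ASSETS=/usr/local/bin

# Or override the binaries individually:
export TEST_ASSET_ETCD=/usr/local/bin/etcd
export TEST_ASSET_KUBE_APISERVER=/usr/local/bin/kube-apiserver
export TEST_ASSET_KUBECTL=/usr/local/bin/kubectl

echo "envtest assets: $KUBEBUILDER_ASSETS"
```

With these set, a subsequent make test should use the preinstalled binaries; note this does not by itself fix the missing teardown.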

nab-gha avatar Sep 10 '21 08:09 nab-gha

Yeah, this has always happened to me too, and I've had to run killall -9 etcd separately, etc.
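As a slightly safer variant of the killall -9 workaround, the cleanup can be limited to the envtest-managed binaries under /tmp/kubebuilder. A sketch; the path is taken from the ps output earlier in this thread, so adjust it if your binaries land elsewhere:

```shell
# Find leftover envtest processes by their full command path.
pids=$(pgrep -f '/tmp/kubebuilder/bin/(etcd|kube-apiserver)' || true)

if [ -n "$pids" ]; then
  echo "Killing leftover envtest processes: $pids"
  # Ask nicely first, then force-kill whatever survives.
  kill $pids 2>/dev/null
  sleep 1
  kill -9 $pids 2>/dev/null || true
else
  echo "No leftover envtest processes found."
fi
```

Unlike killall -9 etcd, this won't touch an etcd you may be running for other purposes.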

@sbueringer has been touching some of the envtest stuff in the core CAPI repo so I don't know if there's something better we could be doing here.

randomvariable avatar Sep 13 '21 12:09 randomvariable

Not sure. FYI, here is an issue in the main repo which could be related: https://github.com/kubernetes-sigs/cluster-api/issues/4278

Apart from that: my recent changes were mostly about centralizing envtest setup code (https://github.com/kubernetes-sigs/cluster-api/blob/3e54e8c939090c97a718a553ee6dce5b4c054731/internal/envtest/environment.go#L95-L124) and making it possible to run integration tests against a local kind cluster (i.e. not using envtest at all): https://github.com/kubernetes-sigs/cluster-api/pull/5102

sbueringer avatar Sep 13 '21 13:09 sbueringer

Yup, I think https://github.com/kubernetes-sigs/cluster-api/issues/4278 is pretty much the same as I get.

randomvariable avatar Sep 13 '21 13:09 randomvariable

/priority important-longterm /area testing

randomvariable avatar Sep 14 '21 10:09 randomvariable

/triage accepted /milestone backlog

richardcase avatar Nov 29 '21 18:11 richardcase

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 27 '22 19:02 k8s-triage-robot

/remove-lifecycle stale

sedefsavas avatar Mar 01 '22 08:03 sedefsavas

I also got hit by this a couple of times, to the point where my machine was very slow due to 10+ instances of kube-apiserver and etcd running.

invidian avatar Mar 25 '22 09:03 invidian


/lifecycle stale

k8s-triage-robot avatar Jun 23 '22 09:06 k8s-triage-robot

/remove-lifecycle stale

invidian avatar Jun 23 '22 11:06 invidian


/lifecycle stale

k8s-triage-robot avatar Sep 21 '22 11:09 k8s-triage-robot

/remove-lifecycle stale

invidian avatar Sep 21 '22 14:09 invidian


/lifecycle stale

k8s-triage-robot avatar Dec 20 '22 15:12 k8s-triage-robot

/remove-lifecycle stale

invidian avatar Dec 22 '22 18:12 invidian

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot avatar Jan 20 '24 00:01 k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 19 '24 00:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 19 '24 00:05 k8s-triage-robot