hierarchical-namespaces
Fix flaky quickstart test
I've noticed that the following e2e test frequently fails, and always in the same way. Rerunning it generally passes. It's almost certainly just a flake.
• Failure [66.126 seconds]
Quickstart
/usr/local/google/home/aludwin/git/hierarchical-namespaces/test/e2e/quickstart_test.go:11
Should propagate different types [It]
/usr/local/google/home/aludwin/git/hierarchical-namespaces/test/e2e/quickstart_test.go:81
Timed out after 2.119s.
Command: [kubectl -n service-1 get secrets]
included the undesired output "my-creds":
NAME                  TYPE                                   DATA   AGE
default-token-9w45m   kubernetes.io/service-account-token    3      15s
my-creds              Opaque                                 1      7s
/usr/local/google/home/aludwin/git/hierarchical-namespaces/test/e2e/quickstart_test.go:102
------------------------------
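Reading the failure: my-creds is created in team-b, secrets are switched to Propagate mode, and service-1 is then re-parented from team-b to team-a. At that point the propagated copy of my-creds should be removed from service-1, and the test appears to time out (~2s) waiting for the listing to stop containing it. A rough Ginkgo/Gomega sketch of that kind of absence check (hypothetical, not the actual helpers in test/e2e) is below; the full command log from the failing run follows it.

```go
package e2e_test

import (
	"os/exec"
	"testing"
	"time"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// Hypothetical sketch of the failing step, NOT the repo's real e2e helpers:
// after service-1 moves from team-b to team-a, HNC should garbage-collect the
// propagated copy of "my-creds", so the secret listing must eventually stop
// containing it. In the flaky runs the copy is never removed and the ~2s
// Eventually times out with "included the undesired output".
var _ = It("stops propagating my-creds after re-parenting", func() {
	Eventually(func() string {
		out, _ := exec.Command("kubectl", "-n", "service-1", "get", "secrets").CombinedOutput()
		return string(out)
	}, 2*time.Second).ShouldNot(ContainSubstring("my-creds"))
})

func TestQuickstartSketch(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "quickstart sketch")
}
```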
[1648589350] Running: [kubectl get ns -o custom-columns=:.metadata.name --no-headers=true -l hnc.x-k8s.io/testNamespace=true]
[1648589351] Running: [kubectl get ns ]
[1648589351] Running: [kubectl hns config set-resource secrets --mode Ignore]
Output (passed):
[1648589352] Running: [kubectl create ns acme-org]
Output (passed): namespace/acme-org created
[1648589353] Running: [kubectl label --overwrite ns acme-org hnc.x-k8s.io/testNamespace=true]
Output (passed): namespace/acme-org labeled
[1648589354] Running: [kubectl hns create team-a -n acme-org]
Output (passed): Successfully created "team-a" subnamespace anchor in "acme-org" namespace
[1648589355] Running: [kubectl label --overwrite ns team-a hnc.x-k8s.io/testNamespace=true]
Output (passed): namespace/team-a labeled
[1648589356] Running: [kubectl hns create team-b -n acme-org]
Output (passed): Successfully created "team-b" subnamespace anchor in "acme-org" namespace
[1648589358] Running: [kubectl label --overwrite ns team-b hnc.x-k8s.io/testNamespace=true]
Output (passed): namespace/team-b labeled
[1648589359] Running: [kubectl create ns service-1]
Output (passed): namespace/service-1 created
[1648589360] Running: [kubectl label --overwrite ns service-1 hnc.x-k8s.io/testNamespace=true]
Output (passed): namespace/service-1 labeled
[1648589361] Running: [kubectl hns set service-1 --parent team-b]
Output (passed): Setting the parent of service-1 to team-b
Succesfully updated 1 property of the hierarchical configuration of service-1
[1648589362] Running: [kubectl -n team-b create secret generic my-creds --from-literal=password=iamteamb]
Output (passed): secret/my-creds created
[1648589366] Running: [kubectl -n service-1 get secrets]
[1648589367] Running: [kubectl hns config set-resource secrets --mode Propagate --force]
Output (passed):
[1648589368] Running: [kubectl get hncconfiguration config -oyaml]
Output (passed): apiVersion: hnc.x-k8s.io/v1alpha2
kind: HNCConfiguration
metadata:
  creationTimestamp: "2022-03-29T21:18:15Z"
  generation: 55
  name: config
  resourceVersion: "375005"
  uid: 2d882728-f583-4eb4-8a47-d05252fe2576
spec:
  resources:
  - mode: Propagate
    resource: secrets
status:
  resources:
  - group: ""
    mode: Propagate
    numPropagatedObjects: 1
    numSourceObjects: 6
    resource: secrets
    version: v1
  - group: rbac.authorization.k8s.io
    mode: Propagate
    numPropagatedObjects: 3
    numSourceObjects: 0
    resource: rolebindings
    version: v1
  - group: rbac.authorization.k8s.io
    mode: Propagate
    numPropagatedObjects: 1
    numSourceObjects: 0
    resource: roles
    version: v1
[1648589369] Running: [kubectl -n service-1 get secrets]
Output: NAME                  TYPE                                   DATA   AGE
        default-token-9w45m   kubernetes.io/service-account-token    3      10s
        my-creds              Opaque                                 1      2s
[1648589370] Running: [kubectl hns set service-1 --parent team-a]
Output (passed): Changing the parent of service-1 from team-b to team-a
Succesfully updated 1 property of the hierarchical configuration of service-1
[1648589371] Running: [kubectl hns describe team-a]
Output: Hierarchy configuration for namespace team-a
Parent: acme-org
Children:
- service-1
No conditions
No recent HNC events for objects in this namespace
[1648589373] Running: [kubectl -n service-1 get secrets]
[1648589374] Running: [kubectl -n service-1 get secrets]
[1648589375] Running: [kubectl get ns -o custom-columns=:.metadata.name --no-headers=true -l hnc.x-k8s.io/testNamespace=true]
[1648589376] Running: [kubectl get ns acme-org]
[1648589377] Running: [kubectl annotate ns acme-org hnc.x-k8s.io/subnamespace-of-]
Output (passed): namespace/acme-org annotated
[1648589378] Running: [kubectl get ns service-1]
[1648589379] Running: [kubectl annotate ns service-1 hnc.x-k8s.io/subnamespace-of-]
Output (passed): namespace/service-1 annotated
[1648589380] Running: [kubectl get ns team-a]
[1648589381] Running: [kubectl annotate ns team-a hnc.x-k8s.io/subnamespace-of-]
Output (passed): namespace/team-a annotated
[1648589382] Running: [kubectl get ns team-b]
[1648589383] Running: [kubectl annotate ns team-b hnc.x-k8s.io/subnamespace-of-]
Output (passed): namespace/team-b annotated
[1648589384] Running: [kubectl get ns ]
[1648589385] Running: [kubectl delete ns acme-org]
Output (passed): namespace "acme-org" deleted
[1648589396] Running: [kubectl delete ns service-1]
Output (passed): namespace "service-1" deleted
[1648589403] Running: [kubectl delete ns team-a]
Output (passed): namespace "team-a" deleted
[1648589409] Running: [kubectl delete ns team-b]
Output (passed): namespace "team-b" deleted
HNC seems to get into a bad state that causes these flakes. After I saw this problem as part of a full suite run, I reran this test alone and it failed 4/4 times. Then I killed the HNC pod and it passed 4/4 times. I also consistently saw errors from a nonexistent Kind (another test tries to sync nonexistent resources), which may or may not be related.
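Since killing the pod reliably clears the bad state, one stopgap (until the root cause is found) would be for the suite to bounce the HNC controller when this happens. A minimal sketch in the same shell-out style the e2e tests use; the namespace and deployment names here are assumptions based on the default install layout, not something confirmed in this log.

```go
package e2e_test

import (
	"fmt"
	"os/exec"
)

// restartHNC mirrors the manual "kill the pod" workaround above: restart the
// HNC controller and wait for it to become ready again before the next spec.
// The namespace (hnc-system) and deployment (hnc-controller-manager) are
// assumptions about the default install; adjust for the test cluster's layout.
func restartHNC() error {
	cmds := [][]string{
		{"kubectl", "-n", "hnc-system", "rollout", "restart", "deployment/hnc-controller-manager"},
		{"kubectl", "-n", "hnc-system", "rollout", "status", "deployment/hnc-controller-manager", "--timeout=120s"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v\n%s", c, err, out)
		}
	}
	return nil
}
```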
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.