Race detector, race conditions
We're seeing issues with the `-race` detector. I'm not sure whether this is a bug or just a question.
When we have 2+ PRs open, one of them usually fails with errors similar to this: https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/kubernetes-sigs_external-dns/5355/pull-external-dns-unit-test/1919659860333432832
The same tests pass on a GitHub Actions re-run without any issues: https://github.com/kubernetes-sigs/external-dns/actions/runs/14854342690/job/41703973284?pr=5355
Is it possible that there is no isolation between test runs for the same project?
The main complication is that we cannot reproduce this on our own machines or on GitHub runners.
Re-running tests with `/test pull-external-dns-unit-test` when no other PR is executing them works without issue.
Example PRs:
- https://github.com/kubernetes-sigs/external-dns/pull/5355
- https://github.com/kubernetes-sigs/external-dns/pull/5354#issuecomment-2853632254
I could share more examples; it's a bit annoying that we need to wait and run them one by one.
Maybe this is not related to the Prow architecture, I'm not sure.
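If the trigger is CPU starvation rather than cross-job interference, one way to approximate the Prow pod locally might be to pin the run to a single CPU and repeat it, e.g. `GOMAXPROCS=1 go test -race -count=20 ./...`; the flag values here are illustrative, not a known-good repro.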
Very weird. Separate executions of even a single job are just Pods; it's hard to imagine how they could interfere in this way, even if the executions were not containerized.
It could be something else. I can't figure out why it works on GitHub Actions while the behavior differs in Prow.
I'm not sure if it's the root cause, but the external-dns tests use the kubekins-e2e image, which is deprecated in favor of kubekins-e2e-v2.
We're using Go 1.24 in external-dns, but the test images say 1.13.5.
1.13.5 is a placeholder
The actual versions are in https://github.com/kubernetes/test-infra/blob/master/images/kubekins-e2e-v2/variants.yaml
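One cheap way to see which toolchain and how much parallelism the job actually gets is to log it from a test. A minimal sketch (the test name and package are made up, not external-dns code):

```go
package example

import (
	"runtime"
	"testing"
)

// TestLogEnvironment records which Go toolchain ran the tests and how much
// parallelism the runtime sees, for comparing the Prow pod with GitHub Actions.
func TestLogEnvironment(t *testing.T) {
	t.Logf("go version: %s, NumCPU: %d, GOMAXPROCS: %d",
		runtime.Version(), runtime.NumCPU(), runtime.GOMAXPROCS(0))
}
```

Note that `runtime.NumCPU` reports the CPUs the process can be scheduled on, not the pod's cgroup quota, so in a 1-core-limited pod GOMAXPROCS can end up much higher than the CPU actually available, which makes throttling worse.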
> Very weird. Separate executions of even a single job are just Pods; it's hard to imagine how they could interfere in this way, even if the executions were not containerized.
Noisy neighbors can totally cause flakes in performance-sensitive code, and the race detector is fairly expensive.
This job is configured with 1 CPU core, which is probably less than what the GitHub Actions runners get.
This isn't related to Prow itself; it's usage/configuration. These are ultimately Kubernetes pods.
Hosted Actions runners run in VMs.
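To make that concrete, here is a minimal hypothetical sketch (not external-dns code) of a test whose race report depends entirely on scheduling: the fast path synchronizes through a channel, while the timeout path, taken only on a starved runner, reads shared state without synchronization, so `-race` flags it only on slow machines.

```go
package example

import (
	"testing"
	"time"
)

// TestTimeoutPath flakes under -race only on slow runners: the data race is
// confined to the timeout branch, which a fast machine never takes.
func TestTimeoutPath(t *testing.T) {
	var result string
	done := make(chan struct{})
	go func() {
		result = "ok" // this write races with the timeout-path read below
		close(done)
	}()

	select {
	case <-done:
		// Fast path: the channel receive orders the write before the read,
		// so -race sees nothing.
	case <-time.After(5 * time.Millisecond):
		// Slow path: no happens-before edge with the goroutine, so the
		// read of result below is a reportable data race.
	}

	if result != "ok" {
		t.Skip("timed out waiting for result")
	}
}
```

Since the race detector multiplies both CPU time and memory use several-fold, a pattern like this can pass reliably on a fast multi-core runner and start flaking as soon as the job lands on a throttled 1-CPU pod.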
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.