Make PV tests work on kubemark clusters
Enabling PVs in the load test failed the kubemark presubmit because the scheduler failed to schedule pods with PVs:
Unable to schedule test-y6u59n-1/small-statefulset-0-0: no fit: 0/101 nodes are available: 1 node(s) were unschedulable, 100 node(s) had volume node affinity conflict.; waiting
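For context, "volume node affinity conflict" means the scheduler's volume binding check found no node whose labels satisfy the PVs' spec.nodeAffinity; presumably the hollow nodes are missing the topology labels (e.g. zone) that the provisioned PVs require. A minimal, self-contained sketch of that label check, using simplified stand-in types rather than the real scheduler code:

```go
package main

import "fmt"

// Simplified stand-in for a PV's required node-affinity match expressions.
// The real check lives in the scheduler's VolumeBinding plugin and also
// handles multiple ORed NodeSelectorTerms; one term is enough to show the idea.
type nodeSelectorRequirement struct {
	key    string
	values []string
}

// pvMatchesNode reports whether a node's labels satisfy all of a PV's
// required node-affinity expressions (ANDed within a term).
func pvMatchesNode(required []nodeSelectorRequirement, nodeLabels map[string]string) bool {
	for _, req := range required {
		v, ok := nodeLabels[req.key]
		if !ok {
			return false
		}
		matched := false
		for _, want := range req.values {
			if v == want {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	// A GCE PD PV is typically provisioned with zone topology affinity.
	required := []nodeSelectorRequirement{
		{key: "topology.kubernetes.io/zone", values: []string{"us-east1-b"}},
	}
	// A hollow node that never got the topology label fails the check;
	// when every node fails, it surfaces as "volume node affinity conflict".
	hollowNode := map[string]string{"kubernetes.io/hostname": "hollow-node-1"}
	fmt.Println(pvMatchesNode(required, hollowNode)) // false
}
```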
In order to make this work on kubemark, we'll most likely have to change a few places in the code (or ensure they already work this way) so that they fake the operations related to attaching and mounting PDs in kubemark (see the sketch below the list):
- Attach Detach Controller
- HollowKubelet
- Scheduler
- ?
Ref. https://github.com/kubernetes/perf-tests/issues/704
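As a starting point, the hollow kubelet would need volume plugins whose attach/mount operations succeed immediately without touching real disks or the cloud API. A rough sketch of the shape such a fake could take; the interface and names here are hypothetical stand-ins (the real kubelet plugin interfaces live in k8s.io/kubernetes/pkg/volume):

```go
package main

import "fmt"

// Hypothetical, trimmed-down version of an attacher interface; the real
// one is volume.Attacher in k8s.io/kubernetes/pkg/volume.
type attacher interface {
	Attach(volumeName, nodeName string) (devicePath string, err error)
	MountDevice(devicePath, mountPath string) error
}

// fakeAttacher pretends every attach and mount succeeds instantly,
// which is all a hollow node needs: no GCE API calls, no real mounts.
type fakeAttacher struct{}

func (fakeAttacher) Attach(volumeName, nodeName string) (string, error) {
	// Report a made-up device path instead of calling the cloud provider.
	return "/dev/fake/" + volumeName, nil
}

func (fakeAttacher) MountDevice(devicePath, mountPath string) error {
	// Pretend the device is mounted; nothing is written to disk.
	return nil
}

func main() {
	var a attacher = fakeAttacher{}
	dev, _ := a.Attach("pd-test-volume", "hollow-node-1")
	fmt.Println("attached at", dev) // attached at /dev/fake/pd-test-volume
}
```

The attach-detach controller and scheduler sides would need the analogous treatment: the controller should believe volumes are attached without calling the cloud, and the scheduler should see node labels consistent with the PVs' affinity.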
/good-first-issue
@mm4tt: This request has been marked as suitable for new contributors.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
In response to this:
/good-first-issue
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'd like to pick this up; where would I begin?
Hey, @Jukie, that's great to hear!
This one is a bit more complicated; I'll need to think about it more and come up with a more concrete list of steps. Unfortunately, I won't have time to do that until ~mid next week. In the meantime, feel free to pick another issue from the help wanted list. I think https://github.com/kubernetes/perf-tests/issues/595 is relatively simple and will get you through all the prep work required to start contributing to Kubernetes (forking repos, signing the CLA, etc.).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
We still want to do it...
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
/assign