Separate affinity assistant setting on workspace
Feature request
Configure the affinity assistant on each workspace, overriding the default setting in feature-flags.
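For context, the only setting today is the global one in the `feature-flags` ConfigMap (the sketch below assumes the default `tekton-pipelines` installation namespace):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # One switch for everything: there is no way to enable the affinity
  # assistant for one workspace and disable it for another.
  disable-affinity-assistant: "false"
```

The request is a per-workspace override of this default.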
Use case
I have the same problem as this issue: https://github.com/tektoncd/pipeline/issues/3480
In my case, I have one PVC for code and another PVC for the build cache, and I will use a NAS StorageClass for the cache PVC so that it can share data across AZs.
The code PVC is shared between different tasks, such as git-clone-task and build-task, so this PVC needs the affinity assistant.
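A rough sketch of the bindings I have in mind (the PVC name, StorageClass, and sizes below are placeholders, not taken from a real manifest):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-run
spec:
  pipelineRef:
    name: build-pipeline
  workspaces:
    - name: code                      # shared by git-clone-task and build-task
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
    - name: cache                     # NAS-backed PVC, shared across AZs
      persistentVolumeClaim:
        claimName: build-cache-nas    # placeholder PVC on the NAS StorageClass
```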
For now, if I set disable-affinity-assistant to true globally:
- the code PVC with a normal PV may fail if its tasks are scheduled to different nodes,
- the code PVC with a NAS PV needs to be cleaned up every time build-task finishes.

If I set disable-affinity-assistant to false globally:
- the cache PVC will not work (the run fails with "more than one PersistentVolumeClaim is bound").
If I could set the code PVC to use the affinity assistant and disable it for the cache PVC, everything would work! We would just need to raise an error if more than one affinity assistant exists.
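A hypothetical shape for the request (the `affinityAssistant` field below does not exist in Tekton today; the name is made up purely to illustrate):

```yaml
workspaces:
  - name: code
    affinityAssistant: true     # hypothetical field: keep tasks sharing this PVC on one node
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
  - name: cache
    affinityAssistant: false    # hypothetical field: NAS PVC spans AZs, no assistant needed
    persistentVolumeClaim:
      claimName: build-cache-nas
```

The validation mentioned above would then reject a PipelineRun where more than one bound workspace asks for its own affinity assistant.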
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle stale
Send feedback to tektoncd/plumbing.
/remove-lifecycle stale

We have the same issue. We want to use AWS EFS as a build cache, but also use another ephemeral PVC for the usual workspace.
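Roughly what we are after (the `efs-sc` StorageClass and PVC names are placeholders for our EFS CSI setup):

```yaml
# Cache PVC backed by EFS, shared across runs and AZs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-cache-efs
spec:
  storageClassName: efs-sc          # placeholder for an EFS CSI StorageClass
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
---
# In the PipelineRun, the usual workspace stays an ephemeral per-run claim,
# while the cache workspace reuses the EFS-backed PVC above
# (workspaces fragment of the PipelineRun spec).
workspaces:
  - name: source
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  - name: cache
    persistentVolumeClaim:
      claimName: build-cache-efs
```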
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale with a justification.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen with a justification.
Mark the issue as fresh with /remove-lifecycle rotten with a justification.
If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.
/close
Send feedback to tektoncd/plumbing.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@lookis @icereed your use case should now be addressed by the new "coscheduling" feature, which allows all of a PipelineRun's pods to be scheduled to one node, and multiple PVCs to be bound to a single TaskRun in a PipelineRun. We'd appreciate it if you could share any feedback on https://github.com/tektoncd/pipeline/issues/6990!
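For anyone finding this later: coscheduling is opted into via the `feature-flags` ConfigMap, roughly as below (the flag name and values are from the coscheduling docs; please double-check them against the Tekton Pipelines release you run, since the feature was gated when introduced):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  # Schedule all pods of a PipelineRun onto the same node, so several PVCs
  # can be used by a single TaskRun; the previous per-workspace behaviour
  # corresponds to the "workspaces" value.
  coschedule: "pipelineruns"
```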