Configurable grace period for TaskRun pods in ImagePullBackOff

Open rinckm opened this issue 3 years ago • 12 comments

Feature request

Introduce a configurable grace period for TaskRun pods to be in ImagePullBackOff without failing the TaskRun.

Use case

We are using Tekton to execute TaskRuns. In our use case, images for TaskRun pods cannot be pulled directly because, for security reasons, container registry credentials must not exist in the namespace. Instead, another component in the cluster pulls images for TaskRun pods. If image provisioning is delayed, a TaskRun's pod may enter ImagePullBackOff and recover once the image is available on the respective node.

Since PR #4921 (fail TaskRuns on ImagePullBackOff), we now see sporadically failing TaskRuns.

We propose introducing a configurable grace period during which ImagePullBackOff is tolerated and does not fail the TaskRun.

rinckm avatar Jan 12 '23 07:01 rinckm
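To make the request concrete, a grace period like this could plausibly be exposed through Tekton's config-defaults ConfigMap. The key name `image-pull-backoff-grace-period` below is purely illustrative and did not exist as a Tekton option at the time of this request:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  # Hypothetical key (illustration only): tolerate ImagePullBackOff for up
  # to 5 minutes before failing the TaskRun; "0" would fail immediately.
  image-pull-backoff-grace-period: "5m"
```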

@rinckm thank you for sharing your use case for this feature request; we had discussed supporting this as future work for TEP-0092. The TEP has not been implemented yet, and we welcome contributions to implement it.

cc @bobcatfish @Aleromerog

/help-wanted

jerop avatar Jan 12 '23 12:01 jerop

Hi, may I try implementing TEP-0092? 👋 /assign

shuheiktgw avatar Jan 22 '23 23:01 shuheiktgw

Hello @shuheiktgw, thank you for offering to implement this, that would be great. The TEP already exists and is in an implementable state. @jerop @bobcatfish @Aleromerog FYI

afrittoli avatar Jan 24 '23 11:01 afrittoli

@shuheiktgw thank you for offering to implement -- @EmmaMunley has also started looking into implementing TEP-0092, maybe you can collaborate?

jerop avatar Jan 24 '23 15:01 jerop

Sure, I'm happy to collaborate 🙂 Hi @EmmaMunley, how is the implementation going so far? I'd appreciate it if you could push any WIP changes so that I can see if there is anything I can help with!

shuheiktgw avatar Jan 24 '23 22:01 shuheiktgw

Hi @shuheiktgw! Sure! I am working on implementing the scheduling timeout feature first as part of this issue: #4078.

EmmaMunley avatar Jan 25 '23 22:01 EmmaMunley

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot avatar Apr 25 '23 23:04 tekton-robot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten with a justification. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

tekton-robot avatar May 25 '23 23:05 tekton-robot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

tekton-robot avatar Jun 25 '23 00:06 tekton-robot

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

tekton-robot avatar Jun 25 '23 00:06 tekton-robot

Thanks @dibyom for reopening this issue.

We are running into this issue and looking for a potential solution.

The issue is that the node where the pod is scheduled often runs into registry rate limits. An image pull failure caused by rate limiting returns the same warning (reason: Failed, message: ImagePullBackOff). The pod can potentially recover after waiting until the rate-limit window expires; Kubernetes can then pull the image successfully and bring the pod up.

We have this issue reported by multiple end users: https://github.com/tektoncd/pipeline/issues/7184

The problem statement for this issue is to be able to continue waiting when a step or sidecar container reports that it is waiting with ImagePullBackOff.

TEP-0092 proposed solving this issue, but it has since been marked as deferred in favor of TEP-0132. TEP-0132 has a much wider scope and proposes a generic solution for queueing PipelineRuns, TaskRuns, etc. Until we resume TEP-0132, I would like to check with the community about proposing a solution for this particular problem statement. Thoughts? @tektoncd/core-maintainers

pritidesai avatar Feb 05 '24 20:02 pritidesai
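For illustration only, here is a minimal Go sketch of how a controller could tolerate ImagePullBackOff for a configurable grace period instead of failing immediately. This is not Tekton's actual reconciler code: the function name `imagePullBackOffExceeded` and the use of pod creation time as the backoff clock are assumptions made for the sketch.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// imagePullBackOffExceeded reports whether any container in the pod has been
// stuck on an image-pull failure for longer than the grace period. A zero
// grace period reproduces the current fail-fast behaviour.
func imagePullBackOffExceeded(pod *corev1.Pod, grace time.Duration, now time.Time) bool {
	statuses := append([]corev1.ContainerStatus{}, pod.Status.InitContainerStatuses...)
	statuses = append(statuses, pod.Status.ContainerStatuses...)
	for _, cs := range statuses {
		w := cs.State.Waiting
		if w == nil || (w.Reason != "ImagePullBackOff" && w.Reason != "ErrImagePull") {
			continue
		}
		// Pod creation time is used as a conservative lower bound for how
		// long the pull has been failing; a real controller would record
		// when the backoff was first observed.
		if now.Sub(pod.CreationTimestamp.Time) > grace {
			return true // grace period exhausted: fail the TaskRun
		}
	}
	return false // still within the grace period: requeue and keep waiting
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			CreationTimestamp: metav1.NewTime(time.Now().Add(-2 * time.Minute)),
		},
		Status: corev1.PodStatus{
			ContainerStatuses: []corev1.ContainerStatus{{
				State: corev1.ContainerState{
					Waiting: &corev1.ContainerStateWaiting{Reason: "ImagePullBackOff"},
				},
			}},
		},
	}
	// Two minutes into backoff with a five-minute grace period: keep waiting.
	fmt.Println(imagePullBackOffExceeded(pod, 5*time.Minute, time.Now())) // false
}
```

The key design point is that the controller requeues rather than fails while the grace period is running, so a pod that recovers once the rate-limit window expires completes normally.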

Revisiting TEP-0132, it might not be able to resolve this particular issue, as the TaskRun controller treats ImagePullBackOff as a permanent error. Queueing alone cannot overcome this limitation, because other non-Tekton resources in the cluster can also cause rate-limit problems, so ImagePullBackOff cannot simply be avoided. We need an opt-in solution that stops treating ImagePullBackOff as a permanent error.

pritidesai avatar Feb 05 '24 21:02 pritidesai

Given that #7666 is merged (see docs), I'll go ahead and close this.

vdemeester avatar Jul 01 '24 07:07 vdemeester
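For reference, the merged feature is configured through the config-defaults ConfigMap. The key below reflects my reading of the docs linked above (`default-imagepullbackoff-timeout`); verify the exact name and default against the current documentation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  # Wait up to 5 minutes for a pod in ImagePullBackOff to recover before
  # failing the TaskRun; "0" keeps the original fail-fast behaviour.
  default-imagepullbackoff-timeout: "5m"
```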