
Added Shipwright CLI task

Open siamaksade opened this issue 3 months ago • 20 comments

Changes

Adds a Shipwright CLI Task for interacting with Shipwright Builds on a Kubernetes cluster.

Submitter Checklist

These are the criteria that every PR should meet; please check them off as you review them:

  • [x] Follows the authoring recommendations
  • [x] Includes docs (if user facing)
  • [x] Includes tests (for new tasks or changed functionality)
  • [x] Meets the Tekton contributor standards (including functionality, content, code)
  • [x] Commit messages follow commit message best practices
  • [x] Has a kind label. You can add one by adding a comment on this PR that contains /kind <type>. Valid types are bug, cleanup, design, documentation, feature, flake, misc, question, tep
  • [x] Complies with Catalog Organization TEP, see example. Note An issue has been filed to automate this validation
    • [x] File path follows <kind>/<name>/<version>/name.yaml

    • [x] Has README.md at <kind>/<name>/<version>/README.md

    • [x] Has mandatory metadata.labels - app.kubernetes.io/version the same as the <version> of the resource

    • [x] Has mandatory metadata.annotations tekton.dev/pipelines.minVersion

    • [x] mandatory spec.description follows the convention:

        ```
        spec:
          description: >-
            one line summary of the resource

            Paragraph(s) to describe the resource.
        ```

See the contribution guide for more details.
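To make the checklist concrete, here is a minimal sketch of how those conventions map onto a catalog manifest; the task name, version, annotation value, parameter, and image below are illustrative placeholders rather than the actual contents of this PR:

```
# task/shipwright-cli/0.1/shipwright-cli.yaml -- path follows <kind>/<name>/<version>/name.yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: shipwright-cli
  labels:
    app.kubernetes.io/version: "0.1"           # must match the <version> directory
  annotations:
    tekton.dev/pipelines.minVersion: "0.17.0"  # mandatory annotation; value is illustrative
spec:
  description: >-
    One-line summary of the resource.

    Paragraph(s) describing the resource in more detail.
  params:
    - name: ARGS
      type: array
      description: Arguments to pass to the CLI (placeholder parameter)
  steps:
    - name: run-cli
      image: example.com/shipwright-cli:latest  # placeholder image
      args: ["$(params.ARGS[*])"]
```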

siamaksade avatar Apr 04 '24 14:04 siamaksade

Related: https://github.com/tektoncd/pipeline/issues/4903

lbernick avatar Feb 15 '23 14:02 lbernick

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot avatar May 16 '23 15:05 tekton-robot

/remove-lifecycle stale

vdemeester avatar May 24 '23 08:05 vdemeester

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale with a justification. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle stale

Send feedback to tektoncd/plumbing.

tekton-robot avatar Aug 22 '23 09:08 tekton-robot

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten with a justification. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

tekton-robot avatar Sep 21 '23 09:09 tekton-robot

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

tekton-robot avatar Oct 21 '23 09:10 tekton-robot

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen with a justification. Mark the issue as fresh with /remove-lifecycle rotten with a justification. If this issue should be exempted, mark the issue as frozen with /lifecycle frozen with a justification.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

tekton-robot avatar Oct 21 '23 09:10 tekton-robot

/remove-lifecycle rotten

khrm avatar Oct 21 '23 13:10 khrm

/reopen

This is part of the Roadmap and quite important. /lifecycle frozen

khrm avatar Oct 21 '23 13:10 khrm

@khrm: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

This is part of the Roadmap and quite important. /lifecycle frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

tekton-robot avatar Oct 21 '23 13:10 tekton-robot

@vdemeester Please reopen this.

khrm avatar Oct 21 '23 13:10 khrm

/lifecycle frozen

vdemeester avatar Oct 23 '23 08:10 vdemeester

queueing would be awesome

sibelius avatar Apr 10 '24 11:04 sibelius

This would be a great addition: we could pool pipeline resources from multiple customers and just use queueing. The Pending state is not desirable, as timeouts might cause pipelines to fail on Kubernetes clusters.

benoitschipper avatar Apr 12 '24 07:04 benoitschipper

Can we do this using resource requests and limits?

sibelius avatar Apr 12 '24 11:04 sibelius

Can we do this using resource requests and limits?

Yeah, you can use requests and limits for this. The problem is that if there are no resources left to run a pipeline, it goes into "Pending", which eventually times out, meaning people will get pipeline failures.

If there were something like a queueing system, the run would never time out and would simply wait until resources become available. This obviously only matters during busy periods on the cluster, hence the request for something like a queueing system. For context, requests and limits on a Task step look roughly like the sketch below.
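A minimal sketch, assuming the tekton.dev/v1beta1 API; the task name, image, and values are placeholders:

```
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: resource-limited-task            # placeholder name
spec:
  steps:
    - name: build
      image: example.com/builder:latest  # placeholder image
      script: |
        echo "doing work"
      resources:                         # standard Kubernetes requests/limits on the step container
        requests:
          cpu: 500m
          memory: 512Mi
        limits:
          cpu: "1"
          memory: 1Gi
```

If every step in every concurrent run requests resources like this, runs beyond the available capacity end up with Pending pods, which is exactly where the timeout problem shows up.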

I also found that it is possible to do something like a lease, but that needed additional self-made resources. I want to turn this into a solution for our DevOps teams.

We currently give each DevOps team a certain amount of resources within a namespace for pipeline-related tasks (roughly a quota like the sketch below). But that means a lot of resources are potentially wasted on days when some DevOps teams are not running their pipelines. We would rather pool all the resources for our DevOps teams, reserve some nodes for pipeline-related runtimes, share those resources, and use some sort of queueing. It's all about efficiency and effectiveness :)
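A minimal sketch of such a per-namespace quota; the names and values are placeholders:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-pipeline-quota     # placeholder name
  namespace: team-a-pipelines   # placeholder namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```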

Hope that makes sense :)

benoitschipper avatar Apr 12 '24 14:04 benoitschipper

Why does PENDING time out?

sibelius avatar Apr 12 '24 14:04 sibelius

Why does PENDING time out?

The Kubernetes scheduler, due to a lack of compute capacity within a set quota or on the cluster overall, is unable to schedule the pod on any node with the requested cpu/mem/storage, so the pod stays in Pending.

https://kubernetes.io/docs/tasks/debug/debug-application/debug-pods/#my-pod-stays-pending

I think it might have something to do with the default timeout; this is what I found from searching the web and the Tekton docs:

Reasons for Pending State:

  • Insufficient Nodes: Your Kubernetes cluster lacks the physical nodes to accommodate the pod's CPU or memory requirements.
  • Resource Quotas: You might have resource quotas in place that limit the total number of pods or the amount of resources that can be used in a specific namespace.

Tekton Timeouts:

  • PipelineRun Timeout: Each Tekton PipelineRun has a configurable timeout. If the pod remains in "Pending" state beyond this timeout, the PipelineRun will fail with an error indicating that it timed out.
  • Default Timeout: Tekton has a global default timeout (usually 60 minutes) that acts as a catch-all if you haven't specified a PipelineRun-specific timeout.
  • Task Timeouts: You can even define timeouts at the individual Task level within your pipeline.

Customization:

  • Overriding Defaults: You can change the global default timeout by adjusting the default-timeout-minutes field in your Tekton configuration (config/config-defaults.yaml).
  • Specific Timeouts: Set more tailored timeouts at the PipelineRun and individual Task levels to match the expected execution time of your pipeline steps (see the sketch after this list).
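A minimal sketch of both knobs, assuming a default installation in the tekton-pipelines namespace and a Tekton version recent enough to support spec.timeouts; names and values are illustrative:

```
# Cluster-wide default, read by the Tekton controller from the config-defaults ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-timeout-minutes: "60"   # catch-all when a run specifies nothing
---
# Per-run override on a specific PipelineRun
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run               # placeholder name
spec:
  pipelineRef:
    name: example-pipeline        # placeholder pipeline
  timeouts:
    pipeline: "2h"                # overall budget for the whole run
    tasks: "1h30m"                # budget for the non-finally tasks
```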

How to Find Out Specific Timeout Values

  • Examine PipelineRun: Use kubectl describe pipelinerun to see the timeout configured for that specific instance.
  • Pipeline Definition: If no timeout is set on the PipelineRun, check your Pipeline definition using kubectl describe pipeline.
  • Cluster-Wide Default: If there are no timeouts in either of the above, the cluster-wide default in Tekton's configuration applies (default apparently 60 min)

Important Considerations:

  • Timeouts are crucial to prevent stalled pipelines from consuming resources indefinitely.
  • If the resource shortage is temporary, the pod might automatically start running once enough resources become available (before timing out).

benoitschipper avatar Apr 13 '24 05:04 benoitschipper