Proposal: Add mattmoor/kontext to repo
mattmoor/kontext is a useful way to package a local directory up and store it in a container registry.
This is useful for when a user would like to upload a directory without requiring access to a new data plane (e.g., GCP's Cloud Storage).
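Conceptually, what kontext does boils down to packaging a directory as a reproducible tarball and addressing it by its content digest, which is exactly how registries store layers. A minimal sketch in Python (a simplification for illustration; kontext's real implementation is in Go and the function name here is hypothetical):

```python
import hashlib
import io
import os
import tarfile

def bundle_directory(directory: str) -> tuple[bytes, str]:
    """Pack a directory into a deterministic tarball and return
    (tar_bytes, sha256_digest). Registries address layers by this
    kind of content digest."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for root, dirs, files in os.walk(directory):
            # Sort entries and zero out metadata so the same tree
            # always produces the same bytes (and thus same digest).
            dirs.sort()
            for name in sorted(files):
                path = os.path.join(root, name)
                info = tar.gettarinfo(path, arcname=os.path.relpath(path, directory))
                info.mtime = 0
                info.uid = info.gid = 0
                info.uname = info.gname = ""
                with open(path, "rb") as f:
                    tar.addfile(info, f)
    data = buf.getvalue()
    return data, hashlib.sha256(data).hexdigest()
```

A real implementation would wrap these bytes as an image layer and push them to the registry with the developer's existing credentials.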
/cc @mattmoor
This would be nice, but I wonder how Tekton would be able to access it if it's not in a remote data plane somewhere.
Yeah, we need a story on the pipeline side too for binary or source-to-image builds that don't use a git resource.
So would this belong here or somewhere else?
@chmouel would tekton not have access to a container registry?
@poy it would be very useful to skip the registry and stream the packaged local directory directly to a pipeline
@siamaksade I haven't given it deep thought, but my initial gut says that has quite a few edge cases that could make it brittle. Can you retry a pipeline if something goofs? Where is the directory stored? How does auth work?
It seems like that makes that instance have state, which normally we wouldn't want.
Agree that retry wouldn't make sense for this use-case. The use-case is to allow a developer to run their local changes through the pipeline, for example on minikube, before committing them to the git repository.
Is auth a concern then?
How do you mean?
@siamaksade Using a container registry to store the local directory implies the developer has write access to the registry. We can just piggyback on that auth.
However, if we push directly to the pipeline pod, then the pod will have to have an external IP with an exposed endpoint. This endpoint ideally is also secured somehow, but that means we'll have to solve for that.
Related issue upstream (I think) : https://github.com/tektoncd/pipeline/issues/924
Streaming the build context to the Pod is (probably?) going to be secure if proxied by the API server, which creates unnecessary API Server load (I believe the OpenShift folks pointed this out early in the knative/build days, cc @bparees), and it's unclear that cluster-admins would allow this in general. I'm also unsure if this would work with mTLS enabled on a cluster with mesh (worth testing), especially since post-initContainers tekton can support in-mesh builds (we had folks interested in this in the knative/build days). Another implication of this is that clients must wait for builds to schedule before hanging up, which is exacerbated in multi-build pipelines, where the same context may be used by multiple phases (you have to wait for all tasks to schedule).
The other key thing that kontext was meant to experiment with was leveraging layering to make incremental rebuilds faster, so if you touch a single file, you could augment your prior upload with a single-file layer by extracting a manifest from the prior kontext image and computing the delta. This would mean that if the Build hit the same node on-cluster, the only file transfer would end up being the layer with the single file. Personally, I also like the simplicity of the provenance story when you build from a kontext container's digest.
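The incremental idea above can be made concrete: keep a manifest of file hashes from the prior upload, diff it against the current tree, and ship only the changed files as a new layer. A hedged Python simplification (kontext's actual delta logic works on OCI manifests, not these hypothetical helpers):

```python
import hashlib
import os

def file_manifest(directory: str) -> dict[str, str]:
    """Map each relative file path to the sha256 of its contents."""
    manifest = {}
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, directory)
            with open(path, "rb") as f:
                manifest[rel] = hashlib.sha256(f.read()).hexdigest()
    return manifest

def delta(prior: dict[str, str], current: dict[str, str]) -> list[str]:
    """Files that are new or changed since the prior upload; only
    these need to go into the incremental layer."""
    return sorted(p for p, h in current.items() if prior.get(p) != h)
```

Touching a single file yields a one-entry delta, so the incremental layer carries just that file; if the build lands on a node that already has the prior layers cached, that one file is the only data transferred.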
Sorry for the brain dump, but happy to discuss more, if needed.
Streaming the build context to the Pod is (probably?) going to be secure if proxied by the API server, which creates unnecessary API Server load (I believe the OpenShift folks pointed this out early in the knative/build days, cc @bparees),
This is what we do for what we call "binary" builds in openshift, but yes there are open (but as yet unrealized) concerns about apiserver load.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle stale
/remove-lifecycle rotten
I built a simplified form of this into github.com/mattmoor/mink. It now supports uploading a multi-arch version of kontext, and I've used it to run kaniko builds against clusters on amd64 and arm64 with Tekton.
/lifecycle frozen