Avoid PVC creation and upload artifacts from the job runner
Is your feature request related to a problem? Please describe. I'm launching hundreds of tests in my clusters, and each test requires a PVC creation. Intense, continuous PVC creation/deletion can be very painful for a storage system (e.g. a network storage solution). Moreover, if you are in the cloud (e.g. AWS), you have to pay for this temporary storage. Why can't the job runner itself just upload the artifacts into MinIO/S3? What are the benefits of having a separate scraper?
Describe the solution you'd like I would like to avoid the creation of a PVC and be able to upload the artifacts directly from the job runner. In other words, I want the scraper feature (e.g. uploading artifacts to MinIO/S3) to be handled by the job runner itself.
In the end, I would like to know whether your roadmap includes a solution that doesn't use a PVC, or whether you can suggest a way to avoid creating one.
Thanks Bye
hey @dr-zeta we run a separate scraper process because we don't control the container executor image. We do plan to improve how we execute container executors in our Pro edition. Ping @TheBrunoLopes for details
I'm thinking about adding a sidecar that will watch a certain directory and wait for the main container's completion
Basically we need to solve the following problem: how to run two containers in sequence AND have them share resources WITHOUT modifying the first (test execution) container.
Some thoughts:
- **Use Argo Workflows.** This is a huge dependency in general, but Testkube is basically a specialized workflow tool. In this case we would get the entire archival for free (see https://github.com/argoproj/argo-workflows/blob/main/examples/output-artifact-s3.yaml).
- **Use beta sidecars in Kubernetes 1.29.** Run the archival container as a sidecar which idles during execution, but then starts archival via its `PreStop` hook. This would however require setting the termination grace period large enough to allow the archival logic to run through and not get `SIGKILL`ed. Resource sharing would happen through a shared volume mount of type `emptyDir`. In general I'm not sure how viable and robust this would actually be, and there seem to be further planned KEPs in this area.
- **Use an old-style sidecar.** Run the archival container as another container in the job which idles during test execution and starts the archival logic after it detects that the main container has completed. This requires some coordination mechanism which hopefully does not involve querying the K8s API, perhaps using some generic tombstone mechanism such as kubedeps. Resource sharing would happen through a shared volume mount of type `emptyDir`.
- **Misuse init containers.** Run both the test and the archival containers as `initContainers`, which are guaranteed to run sequentially. This would at minimum require changing log scraping to target the init container. Resource sharing would happen through a shared volume mount of type `emptyDir`.
Your thoughts, @vsukhin ?
Just to note - such a mechanism is already available in TestWorkflows, but those are not available in OSS. They are available in the free commercial Testkube though. The steps are orchestrated within the pod (similarly to Argo Workflows), so a PVC is not required for artifacts.
The documentation is quite limited for now, but there is already an example of a Cypress test with artifacts.
hey @frederikb thank you for the detailed response! Yes, the beta sidecars are cool, but many users are still on 1.24 or earlier Kubernetes versions. Adopting Argo is very much a business decision, and it doesn't look like we're ready for it today. As Dawid mentioned, we already implemented TestWorkflows in our Enterprise edition, and it's based on init containers.
So, the sidecar option is the only one remaining for OSS for this request. We will take it into consideration, but I'm not sure about priorities.
I guess we can follow the approach we use for Logs V2, where we collect logs in a pod sidecar, @exu. We can just move the scraper container into the main pod and add a condition to wait until the main container is completed. It should be a minor change.
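That "wait until the main container is completed" condition could look roughly like this (a sketch, not Testkube's actual code; the `ContainerStatus` type here is a simplified local stand-in for the real `corev1.ContainerStatus`, which the scraper would receive from a client-go pod watch):

```go
package main

import "fmt"

// ContainerStatus mirrors only the fields of Kubernetes' corev1.ContainerStatus
// that matter for this check (simplified stand-in, not the real client-go type).
type ContainerStatus struct {
	Name       string
	Terminated bool // true once State.Terminated is set on the real type
	ExitCode   int32
}

// mainContainerDone reports whether the named container has terminated and,
// if so, with which exit code. The scraper sidecar would call this on every
// pod status update and start uploading artifacts once it returns true.
func mainContainerDone(statuses []ContainerStatus, name string) (bool, int32) {
	for _, s := range statuses {
		if s.Name == name && s.Terminated {
			return true, s.ExitCode
		}
	}
	return false, 0
}

func main() {
	statuses := []ContainerStatus{
		{Name: "test", Terminated: true, ExitCode: 0},
		{Name: "scraper", Terminated: false},
	}
	done, code := mainContainerDone(statuses, "test")
	fmt.Printf("done=%v exit=%d\n", done, code)
}
```

Keeping the exit code around also lets the scraper tag uploads from failed runs differently, if that turns out to be useful.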