smit thakkar
Is `TF_PLUGIN_CACHE_DIR` concurrency-safe?
Prepopulating the provider cache, as suggested by @lorengordon, works like a charm. Here is an example of how we do it in our Atlantis setup:

### Provider List

```bash
❯ tree hack/providers/ -a...
```
> The assumption being that concurrent `terraform init` won't mess around with `TF_PLUGIN_CACHE_DIR` because all providers are already present?

Yes, that is correct, but make sure all provider versions are...
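The pattern described above rests on two things: a shared cache directory configured for every workspace, and the cache being fully populated *before* any concurrent `terraform init` runs, so that init only ever reads from it. As a minimal sketch, `plugin_cache_dir` is Terraform's CLI-config equivalent of the `TF_PLUGIN_CACHE_DIR` environment variable (the path below is just an example):

```hcl
# ~/.terraformrc (terraform.rc on Windows)
# With the cache prepopulated ahead of time, concurrent `terraform init`
# runs only read from this directory and never race to write into it.
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
```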
@SarthakJain26 that wouldn't be the case if any step in the workflow exits with a non-zero status code. I think that's the default behaviour of Argo Workflows. Let me try to reproduce...
@ksatchit This is similar to deploying Chaos Infra with namespaced scope, but my requirement relates to assigning a Chaos Infra to multiple projects.
When we tried with DocumentDB, all queries [here](https://github.com/search?q=repo%3Alitmuschaos%2Flitmus%20%24lookup&type=code) that use `$lookup` started failing with the error below 😢, causing the control-plane API endpoints to throw 5xx:

```shell
aggregation...
```
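For context on where `$lookup` compatibility tends to break: a hedged illustration (collection and field names here are hypothetical, not Litmus's actual schema) of the two forms of `$lookup`. The plain foreign-key form is widely supported, while DocumentDB's support for the `let`/`pipeline` (correlated subquery) form has historically been patchier, so it's worth checking which form a failing query uses:

```python
# Hypothetical sketch, not Litmus's actual query.

# Plain equality-join form of $lookup (broadly supported):
simple_lookup = {
    "$lookup": {
        "from": "experiment_runs",       # hypothetical collection name
        "localField": "experiment_id",
        "foreignField": "experiment_id",
        "as": "runs",
    }
}

# Correlated-subquery form with let/pipeline — the variant where
# DocumentDB compatibility has been stricter:
pipeline_lookup = {
    "$lookup": {
        "from": "experiment_runs",
        "let": {"eid": "$experiment_id"},
        "pipeline": [
            {"$match": {"$expr": {"$eq": ["$experiment_id", "$$eid"]}}}
        ],
        "as": "runs",
    }
}

# e.g. db.experiments.aggregate([simple_lookup])
aggregation = [simple_lookup]
```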
Lookups were the only thing that was failing; unfortunately, with so many errors I couldn't test each and every flow 😢. Sorry I don't have better insights for you.
It would be nice to create a metadata-only informer to speed up lookups by labels, or to use the cached client in the subscriber. This should speed up the lookups:

https://firehydrant.com/blog/dynamic-kubernetes-informers/
https://medium.com/@timebertt/kubernetes-controllers-at-scale-clients-caches-conflicts-patches-explained-aa0f7a8b4332
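To make the informer idea concrete, here is a minimal sketch (plain Python with hypothetical names; a real implementation would use the metadata-only informers described in the linked posts) of what a metadata-only, label-indexed cache buys: label lookups hit local memory instead of round-tripping to the API server, and only metadata is stored rather than full objects:

```python
from collections import defaultdict


class MetadataCache:
    """Informer-style local cache keeping only name + labels per object."""

    def __init__(self):
        self._by_label = defaultdict(set)  # (label_key, label_value) -> names
        self._meta = {}                    # name -> labels dict

    def upsert(self, name, labels):
        # On update, drop stale label index entries before re-indexing,
        # mirroring how an informer's event handler would resync state.
        for k, v in self._meta.get(name, {}).items():
            self._by_label[(k, v)].discard(name)
        self._meta[name] = dict(labels)
        for k, v in labels.items():
            self._by_label[(k, v)].add(name)

    def lookup(self, key, value):
        # O(1) label lookup against the in-memory index — no API call.
        return set(self._by_label[(key, value)])
```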
On a side note, to get better visibility into performance and stability issues, it would be great to kick off an effort to add instrumentation using https://opentelemetry.io/ (logs, metrics, traces and profiles). This...
gdeploy 2.0