Track an entire helm release
Hello,
I'm using helm to deploy on kubernetes, sometimes with hooks. During my CD jobs, if one job fails, helm displays a message like "Backoff limit exceeded" (meaning a job failed and exceeded its retry limit).
To ease the work of developers, I want to display the logs of failed jobs directly in the CI output. Currently, with kubedog, I need to specify the name of each resource that I want to track. It would be really great to be able to follow an entire helm release (with all the resources it created).
For example:
running kubedog rollout track helm <release_name>, which would run kubedog rollout track on each resource created by the release.
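Today the closest I can do is script it by hand, something like this rough sketch (my-release is a placeholder, and it tracks the resources one after another rather than all at once):
# list the kind and name of every resource rendered for the release,
# then run the matching kubedog tracker for each workload kind
helm get manifest my-release \
| kubectl get -o json -f - \
| jq -r '.items[] | "\(.kind) \(.metadata.name)"' \
| while read -r kind name; do
    case "$kind" in
      Deployment)  kubedog rollout track deployment  "$name" ;;
      StatefulSet) kubedog rollout track statefulset "$name" ;;
      DaemonSet)   kubedog rollout track daemonset   "$name" ;;
      Job)         kubedog rollout track job         "$name" ;;
    esac
  done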
@ThomasBoni Hi!
We have another tool, called werf: https://github.com/flant/werf.
It can deploy helm charts and track an entire helm release. It uses the kubedog library under the hood.
I'd recommend this how-to as a place to start: https://werf.io/how_to/deploy_into_kubernetes.html
P.S. It is better to try out the alpha release of werf, because it uses the latest kubedog multitrack version, which produces far better tracking output. To use the alpha:
# install multiwerf
curl -L https://raw.githubusercontent.com/flant/multiwerf/master/get.sh | bash
# download and activate latest alpha werf version
source <(./multiwerf use 1.0 alpha)
# run werf deploy (see more info in howto: https://werf.io/how_to/deploy_into_kubernetes.html)
werf deploy --help
Hi @distorhead, thanks! I will try it.
Also, there was a new release today: https://github.com/flant/kubedog/releases/tag/v0.3.0
It features the multitracker, a new way to track resources that replaces the old kubedog rollout. It is capable of tracking multiple resources at the same time and also has several modes of operation (more details in the release notes and the source code ;) ).
The important thing is to migrate from the old kubedog rollout to the new kubedog multitrack. Or use werf instead of kubedog to track a whole helm release; werf already uses the same kubedog multitracker internally.
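For reference, kubedog multitrack reads a JSON spec of the resources to track from stdin, so a minimal invocation looks roughly like this (resource and namespace names are placeholders):
# track a single deployment via the multitrack JSON spec
echo '{"Deployments":[{"ResourceName":"myapp","Namespace":"default"}]}' \
| kubedog multitrack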
Closing the issue. If you have further questions, you can ask them here or in our CNCF slack #werf channel: https://cloud-native.slack.com/messages/CHY2THYUU
@distorhead it would be very beneficial to be able to use kubedog with helm directly without using werf. We are using helmfile to manage helm releases and we would like to use kubedog with it. Werf is a very cool tool but it doesn't support helm 3, and we already have helmfile and all of its features deeply integrated within our CI/CD.
There are many tools out there to manage helm releases and I think kubedog should be agnostic to the "release manager" used and be able to work with native helm directly.
Thank you for your amazing work on this project! :)
Hi!
The thing is: it is incorrect to use kubedog with helm without patching helm, because helm, as a package manager, should itself watch release resources while installing a package. It is not enough to call kubedog right after helm, because then the helm release status will not reflect the real state when kubedog detects a runtime error. Kubedog should be called somewhere in the middle of the deploy process, inside helm itself. That is, among other things, what werf is about.
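A sketch of the naive approach, to make the problem concrete (placeholder names, not a recommended setup):
# helm finishes first and records the release as "deployed"
helm upgrade --install myapp ./chart
# only then does tracking start: if kubedog now detects a runtime
# error (e.g. CrashLoopBackOff), the CI job fails, but the helm
# release status stays "deployed", so helm's own rollback logic
# never sees the failure
echo '{"Deployments":[{"ResourceName":"myapp","Namespace":"default"}]}' \
| kubedog multitrack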
Kubedog itself was planned as a golang library, not a CLI, and from that point of view it is agnostic to the release manager. Any release manager could import kubedog and enable resource tracking. By the way, which release managers do you think would benefit from using kubedog?
In theory it is possible to implement werf support in helmfile quickly: implement this interface https://github.com/roboll/helmfile/blob/master/pkg/helmexec/helmexec.go#L11 with werf as the "backend". We should also add helm 3 support to werf.
@distorhead thank you for the elaborate reply.
I agree that kubedog should ideally be integrated within helm; I have also commented on the PR you opened in the helm project.
But until that happens (if it ever does), I am looking for another way to implement this. I was also thinking about adding it to the helmfile code, since helmfile is a fast-moving project, so such a feature could be implemented and released quickly. However, I have two concerns:
- How to handle the helm --wait and --atomic flags for automatic rollback?
- Can we output both the normal output from helmfile and the kubedog output?
Thanks!
@dudicoco +
> Werf is a very cool tool but it doesn't support helm 3, and we already have helmfile and all of its features deeply integrated within our CI/CD.
helmfile support would be a great feature I think.
We had a dilemma over whether to use werf without out-of-the-box helmfile support: for our simple case of a cluster with a few services, a db, a cache, etc., helmfile looks like a silver bullet that solves config hell in an elegant way and reduces overall project complexity, while werf is a cool and quite simple tool to use for CI/CD.
Hoping this feature makes it into a stable werf release (or even an unstable one, heh).
@dudicoco @F1NYA in a recent (alpha + beta) version of werf, Helmfile support has been introduced. Please check the details in this PR: https://github.com/werf/werf/pull/2644
We extract the release's resources via jq.
Jenkins code:
script.sh("""
helm get manifest ${kubernetesDeployment} --kube-context ${env.kubeEnvName} --namespace ${NS} \
| kubectl get -o json -f - | jq '.items | group_by(.kind)
| map({"kind":.[0].kind, "items":map({"ResourceName":.metadata.name,"Namespace":.metadata.namespace,"SkipLogsForContainers":["istio-proxy"]})})
| reduce .[] as \$item ({"Deployments":[],StatefulSets:[]};
if \$item.kind=="Deployment" then . + {"Deployments":\$item.items}
else if \$item.kind=="StatefulSet" then . + {"StatefulSets":\$item.items} else . end
end)' \
| kubedog multitrack --kube-context ${env.kubeEnvName}
""")
Our current vision is:
- Kubedog is a library and is meant to be embedded in other software. Kubedog is responsible for populating the Storage with resource states, events, and logs, thanks to the new Dynamic Tracker. Getting something out of the Storage and formatting and printing it is now the responsibility of the developer who embeds Kubedog in their software. We have a reference implementation of the Formatter here.
- If you want an out-of-the-box experience, use werf. Also, in the future we will provide Nelm as a standalone binary, which is meant to be a direct replacement for Helm (unlike werf, which also handles building images and other things).
- Support for things like Helmfile can be added by embedding Nelm into Helmfile.
- Vanilla Helm will probably never embed Kubedog. Helm is a pretty slow-moving project right now, and embedding something of this scale would probably never be accepted (not for reasons on our side).