k6-operator
Feature moving to Helm from Kustomize
Hi everyone, I moved most things out of Kustomize into a Helm chart. I had some problems trying to set up the MutatingWebhookConfiguration and ValidatingWebhookConfiguration; if anyone can help with this, I'd be glad. I also added some liveness and readiness probe configuration on the controller-runtime in case someone needs it too, due to requirements of my company's cluster.
Closes #92
Hello @knmsk, any plans to have this as part of the 0.0.7 release?
Probably after the discussion of issue #92.
Hi @knmsk!!
Do you know when this PR will be merged? I created a custom Helm Chart meanwhile but would be great to use the official one.
Thanks for your work! Amazing ❤️
@guillermotti I thought about having both a Helm and an operator format to use, so I'll probably undo the operator removal and ask @yorugac to review it. But at my company we already use this PR to deploy our k6-operator.
Hi @knmsk! My apologies for such a terrible delay but I'm finally getting to your PR :sweat_smile:
This is just a heads-up as I've only started looking at the PR and testing: I'll go through all details and then add any further comments. This is really neat, and I'm amazed it covers everything, including CI!
The main thing I wanted to mention for now is a UX one: I think we should avoid making Helm the default build everywhere. I.e. it should be an option, not a requirement. Meaning, the Makefile should support both Kustomize and Helm, with CI tests running separately for each. WDYT about changing the approach to have Helm as an option?
I was also thinking of updating controller-gen separately, as there seem to have been issues with its installation. But here the update is included as part of this PR. If it is updated separately, this PR might need a rebase.
Hi there! What is the status of this pull request? We would need the Helm way as well!
Hi @phlegx, this PR is planned to be merged for the next big release, v0.0.8. The release will not be earlier than August / September, but hopefully, Helm support itself will be merged before that.
Hi! Is this PR still planned for the v0.0.8 release? As far as I can see, 0.0.8rc1 does not include this feature yet :( Maybe you could also tell me when Helm support is planned to be released? We are looking forward to this feature being merged. Thank you!
Hi @yulianovikova,
Unlike k6, k6-operator does not have any set schedule around releases. The reason is simple: the time resources allocated for it are limited, and the workload is impacted by external requests and now by the major issue #138. So the previous estimate became invalid. I would still like to add Helm first but cannot give dates at the moment. TBH, I'm not happy about this either. I'll try to see if anything can be improved and will update again if / when there are changes.
For now, I added the next pre-release, v0.0.8-rc3 (rc2 has a CI issue), to have a new properly tagged image.
I also had a look at the PR. Looks very thorough and good.
Only thing I would like to mention is the lack of the standard/recommended K8s & Helm labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ & https://helm.sh/docs/chart_best_practices/labels/ .
If you create a Helm chart with `helm create test`, you'll get a `_helpers.tpl` file with some functions that are very widely used, such as:
{{/*
Common labels
*/}}
{{- define "test.labels" -}}
helm.sh/chart: {{ include "test.chart" . }}
{{ include "test.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
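For context, the same `helm create` scaffold also defines the `test.selectorLabels` and `test.chart` helpers referenced above; they look roughly like this (reproduced as a sketch of the scaffold output, so double-check against a freshly generated chart):

```yaml
{{/*
Selector labels
*/}}
{{- define "test.selectorLabels" -}}
app.kubernetes.io/name: {{ include "test.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Chart name and version, as used by the helm.sh/chart label
*/}}
{{- define "test.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
```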
And it's used in `deployment.yaml` like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "test.fullname" . }}
  labels:
    {{- include "test.labels" . | nindent 4 }}
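Rendered, that metadata block comes out roughly as follows (the release name `myrelease`, chart version, and app version here are purely illustrative):

```yaml
metadata:
  name: myrelease-test
  labels:
    helm.sh/chart: test-0.1.0
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: myrelease
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
```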
I don't include the namespace in the templates, since it's redundant when passing `--namespace` to Helm anyway, and it would only cause problems and confusion if forgotten or misconfigured in the templates. But that's probably more of a personal opinion than pure fact.
The scaffolded `helm create` charts also contain a few more configurable options on the Deployment that I think would be important for some:
- affinity
- nodeSelector
- tolerations
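For illustration, the `helm create` scaffold wires those three options from `values.yaml` into the Deployment's pod spec roughly like this (a sketch; the exact indentation depends on where it sits in the template):

```yaml
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

The `with` blocks mean the keys are omitted entirely from the rendered manifest unless set in `values.yaml`.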
While we are at it: is the Prometheus ServiceMonitor really necessary? AFAIK, Prometheus metrics are pushed to the remote-write gateway and not scraped (via a ServiceMonitor).
I've updated the code with the suggestions that @StianOvrevage and @Dariusch mentioned
> While we are at it: is the Prometheus ServiceMonitor really necessary? AFAIK, Prometheus metrics are pushed to the remote-write gateway and not scraped (via a ServiceMonitor).
@Dariusch I think it isn't necessary anymore; can @yorugac or someone else confirm this?
I'll test this in the next few days.
Any updates? :)
I have used this chart and haven't found any major issues. Please merge ASAP 🙂
I tried installing this branch by doing:
helm package /path/to/charts/folder
helm install k6-operator-0.0.1.tgz
But I got a few errors because the template output (after processing) wasn't producing valid k8s resources. Nothing big, but I made a PR against @knmsk's branch with minor fixes, after which I got a successful install, followed by successful test runs using the newly installed operator.
It would be fantastic if we could get this merged! It facilitates installs on private company repositories using the apps of apps pattern, and I believe it might significantly help with adoption 😄
I've successfully installed the k6-operator from the Helm chart with the freshest code changes. I think it is a good time to merge the changes. Any improvements can be done in new merge requests.
Hey @danielfigueiredo, can you also sign the license/CLA? I want to get this PR ready, so that when a maintainer wants to merge it, it's good to go.
Thanks for the prompt @knmsk , CLA signed!
Not sure why it hasn't updated yet; maybe there is a refresh cycle. The agreement is signed, and the email I used is the primary email for this GitHub account, so it should all be good. I'll keep an eye on it, but please let me know if anything else is missing here.
Thanks @knmsk for working on this! I'm currently waiting for this to be merged so I can deploy using Helm. Is there an estimate on when the merge will happen?
@Dariusch
Guys, what else do we need to merge the code?
This needs to be updated because some permissions and CRDs are not up to date.
I can't get this branch working.
k8s: v1.25.9
- leader election not working
I modified charts/templates/clusterRole.yaml:
kind: ClusterRole
# ...
rules:
- {apiGroups: [coordination.k8s.io],resources: [leases],verbs: [create,get,update,patch]}
# ...
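The same rule in conventional block style (equivalent YAML, just easier to read and diff):

```yaml
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create", "get", "update", "patch"]
```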
- the process stops after the initializer job succeeds (no tests are actually run)
@knmsk opened a PR with some updates: https://github.com/knmsk/k6-operator/pull/4. Currently testing it out, and so far I have not seen any issues.
hello. when do you think this PR will be merged?
Reached out to @knmsk and asked him to have a look at my PR. He told me that he will have a look when he gets back from his vacation.
@yorugac @Dariusch any chance that you could take a look at the updates?
Hello! Just as a heads-up for everyone watching this issue / PR: it was prioritized internally at last :joy:
My apologies for the Helm support taking so long: we really had other priorities and limited maintenance time. Not to say it's all perfect now, but we made major improvements on the idempotency problem and added support for bundle installation, so now it's finally time to look into adding Helm support as well.
Thank you, Rogerio and everyone contributing, for your patience. @knmsk, if you could please rebase the branch some time, it'd be great :slightly_smiling_face:
Also, it seems that not all contributors here have signed the CLA: please do so! It'll not be possible to merge it otherwise.
> Also, it seems that not all contributors here have signed the CLA
@yorugac, could you please check who exactly hasn't signed the CLA? This issue has been ongoing for months, and I'm afraid it will block the whole merge request from being merged.