zammad-helm
Configuration as code
A way to manage configuration as code and roll it out idempotently, instead of clicking it together in the web interface. Currently we have customInit, which works great, but does not scale to many objects etc.
Requirements
- All kinds of objects, jobs, triggers etc. are defined as code in a custom directory (see the example payload after this list):
```
$ tree config
config
├── jobs
│   ├── Ticket_delete.json
│   └── Email_user_delete.json
├── objects
│   ├── group_room.json
│   ...
│   └── user_shoe-size.json
└── triggers
    ├── Email_Answer_Close.json
    └── Email_Answer_30days.json
```
- idempotent
- no manual interaction (like specifying an API token)
- defined as `.json`, not as `.rb` (like `/db/seeds`)
- (optional) a switch set in the pipeline, so that it only executes after a pipeline run and not after every container restart
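For illustration, a single trigger file could look roughly like this. The attribute names (name, condition, perform, active) follow Zammad's Trigger model, but the concrete values (the state ID, the `#{ticket.title}` variable syntax) assume a default installation:

```json
{
  "name": "Email_Answer_Close",
  "condition": {
    "ticket.state_id": { "operator": "is", "value": "4" }
  },
  "perform": {
    "notification.email": {
      "recipient": "ticket_customer",
      "subject": "Ticket closed (#{ticket.title})",
      "body": "Your request has been closed. Just reply to reopen it."
    }
  },
  "active": true
}
```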
Possible implementation
- The config directory is created as a globbing ConfigMap.
- A script (similar to `seeds.rb`) executing `create_or_update` on Triggers, Objects etc. in zammad-init (a rough sketch follows below),
- or as an extra Job (only executed once after a pipeline run).
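A minimal sketch of such a script, run via `rails runner`. The mount path and file layout are assumptions; the lookup-by-name behaviour mirrors how Zammad's `db/seeds.rb` uses `create_or_update` (object attributes may need `ObjectManager::Attribute.add` instead):

```ruby
# apply.rb - idempotent sketch, run as: bundle exec rails runner apply.rb
require "json"

# Act as the system user, as db/seeds.rb does, so created_by_id is set.
UserInfo.current_user_id = 1

# Map config subdirectories to Zammad models; extend as needed.
{ "triggers" => Trigger, "jobs" => Job }.each do |dir, model|
  Dir.glob("/opt/zammad/config-as-code/#{dir}/*.json").sort.each do |file|
    attrs = JSON.parse(File.read(file), symbolize_names: true)
    # create_or_update matches on :id or :name and updates in place,
    # so re-running converges to the same state instead of duplicating.
    model.create_or_update(attrs)
  end
end
```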
Sounds like a job for a Terraform / OpenTofu provider :D
Hello @klml, before we talk about the how of the implementation and whether it is actually something that should be part of zammad-helm or even Zammad, I would first like to see if the what is right - what is your actual use case.
If I understood it correctly, you have a lot of code that should be run after the Helm release is created or updated, to ensure a consistent state in the DB according to your expectations, e.g. sets of triggers etc. And customInit does not seem feasible for you to hold/maintain so much code. Is that correct so far?
@monotek could you briefly outline how a solution based on Terraform could be designed for such a use case?
From my side, some thoughts about how it could be done without a lot of changes:
- Place the init files in your storage folder and call them from `customInit`.
- We could think about adding a configuration for a `Values.postInitContainers` which allows free specification of containers to run in the init job. These could refer to existing volume mounts, ConfigMaps etc. which would be separately maintained.
- Similarly, we could make it possible to modify the `zammad.Volumes` and `zammad.VolumeMounts` dynamically, so that you could mount existing data to be called from the `customInit` (see the sketch after this list).
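For illustration, the second and third options could look roughly like this in values.yaml. `postInitContainers` and `extraVolumes` are hypothetical names here; neither key exists in the chart yet:

```yaml
# Hypothetical values - none of these keys exist in the chart today.
postInitContainers:
  - name: config-as-code
    image: zammad/zammad-docker-compose:latest  # placeholder; reuse the chart's image/tag
    command: ["bundle", "exec", "rails", "runner", "/config-as-code/apply.rb"]
    volumeMounts:
      - name: config-as-code
        mountPath: /config-as-code
        readOnly: true

extraVolumes:  # separately maintained data, e.g. backed by a ConfigMap
  - name: config-as-code
    configMap:
      name: zammad-config-as-code
```

The appeal of this shape is that the chart only wires containers and volumes together, while the actual payloads and scripts stay in the user's own manifests.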
Handling JSON payloads and executing code on them seems to be a bit too specific for a generic Helm chart TBH. But that could be done in your userland, if the needed mechanisms are in place.
Perhaps @monotek can even suggest a solution that would not require any changes to the Helm chart. We could definitely think about adding documentation for such a solution.
Sorry, this was more like a joke :D
First, Zammad would need to be able to configure everything via the REST API. If this is possible, you could already do such stuff via an init container & curl POST (see the sketch below).
For Terraform/OpenTofu, a new Terraform provider would be needed. Creating and maintaining that would be a lot of work. I would not do it.
I'm not sure if configuration is entirely possible via the Rails console yet? If so, and this is really needed by somebody, they should already be able to do it via custom init containers.
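For reference, the init container & curl idea could look roughly like this. The `/api/v1/triggers` path, the Secret, and the service name/port are assumptions (only the `Authorization: Token token=...` header is Zammad's documented token auth format), and as @klml notes below, a plain POST is not idempotent:

```yaml
# Sketch: seed one trigger via the REST API from an extra init container.
# Re-running this POST creates duplicates - it is not idempotent by itself.
initContainers:
  - name: seed-trigger
    image: curlimages/curl:8.8.0
    env:
      - name: ZAMMAD_TOKEN
        valueFrom:
          secretKeyRef:
            name: zammad-api-token   # pre-created Secret holding the token
            key: token
    volumeMounts:
      - name: config-as-code         # ConfigMap with the JSON files
        mountPath: /config
        readOnly: true
    command:
      - sh
      - -c
      - |
        curl -fsS -X POST \
          -H "Authorization: Token token=${ZAMMAD_TOKEN}" \
          -H "Content-Type: application/json" \
          -d @/config/Email_Answer_Close.json \
          http://zammad-railsserver:3000/api/v1/triggers
```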
> And customInit does not seem feasible for you to hold/maintain so much code. Is that correct so far?
yes
> Place the init files in your storage folder and call them from customInit. We could think about adding a configuration for a Values.postInitContainers which allows free specification of containers to run in the init job. These could refer to existing volume mounts, ConfigMaps etc. which would be separately maintained. Similarly, we could make it possible to modify the zammad.Volumes and zammad.VolumeMounts dynamically so that you could mount existing data to be called from the customInit.
I am fine with all three.
> Similarly, we could make it possible to modify the zammad.Volumes and zammad.VolumeMounts dynamically so that you could mount existing data to be called from the customInit.

This seems to be the best solution for me.
@monotek

> First, Zammad would need to be able to configure everything via the REST API.
We tried this at first, but we ran into these disadvantages:
- we have to handle an API token
- the API has no idempotency like Ruby's `create_or_update`, so we have to manage this ourselves
@klml I agree. Please see #272 for a proposal for an integration of custom volumes and volumeMounts. Would this solve your problem?
IMHO the actual configuration management code is not Kubernetes-specific and could reside in your own toolchain. You can use a globbing ConfigMap as described, and call it from the `customInit` block (see the sketch below). OK?
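For completeness, a "globbing" ConfigMap rendered from your own wrapper chart could look roughly like this, using Helm's `.Files.Glob` and `.AsConfig`. Note that `.AsConfig` keys entries by base file name, so names must be unique across the subdirectories:

```yaml
# templates/configmap-config-as-code.yaml in your own chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: zammad-config-as-code
data:
  # Pick up every JSON file under config/, keyed by base file name.
{{ (.Files.Glob "config/**/*.json").AsConfig | indent 2 }}
```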