[feature] GitOps-like pattern for Config Patches
Problem Description
I use Omni for hosted clusters, but I also use Talos locally when developing and testing. Keeping the Talos config patches I use locally in sync with the ones I use in the Omni platform is a bit of an annoyance.
It would be great if Omni were able to read/sync Talos config patches from various sources, such as:
- Git
- OCI
- Storage Buckets
Even better if it supported in-flight encryption/decryption with something like SOPS.
Solution
Allow configuring patches to come from an external source:
- Git
- OCI
- Storage Bucket
I suggest four sync targets:
- Cluster-scoped: apply all patches found in the source to the whole cluster (configured once)
- Control plane: apply all patches found in the source to control-plane nodes (configured once)
- Worker: apply all patches found in the source to worker nodes (configured once)
- Machine X: apply all patches found in the source to machine X (configured once per machine)
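Purely as an illustration of the request (no such resource exists in Omni today, and every field name below is invented), a source with the four targets might look something like:

```yaml
# Hypothetical resource, not a real Omni API; field names are illustrative only.
kind: PatchSource
spec:
  git:
    url: https://github.com/example/talos-patches   # example repository
    ref: main
  decryption:
    provider: sops                 # optional in-flight SOPS decryption
  targets:
    - scope: cluster               # all patches -> whole cluster
    - scope: control-plane         # control-plane nodes only
    - scope: worker                # worker nodes only
    - scope: machine
      machine: <machine-uuid>      # a single machine
```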
This way it would be possible to build a central library of my preferred patches and apply them to my Omni clusters, without copying and pasting them into each cluster, or manually going through each and every patch if I decide to change them.
Alternative Solutions
Import/export functionality for patches with overwrite capabilities. This would let me import my local patches into Omni instead of copying and pasting, or editing them inline in the browser.
Notes
No response
This is already partially possible using omnictl apply -f path/to/patch-file.yaml.
Example machine patch (replace `<guid>` and `<machine-uuid>`):
```yaml
metadata:
  namespace: default
  type: ConfigPatches.omni.sidero.dev
  id: 500-<guid>
  labels:
    omni.sidero.dev/machine: <machine-uuid>
  annotations:
    name: 500-<machine-uuid>-machine-patch
spec:
  data: |-
    machine:
      network:
        interfaces:
          - bond:
              ...
```
You can generate a GUID using PowerShell's New-Guid cmdlet. The patch is associated with the machine via the `omni.sidero.dev/machine:` label. Other labels you can use: `omni.sidero.dev/cluster`, `omni.sidero.dev/machine-set`.
You can also output the YAML of your existing patches using `omnictl get configpatch <machine-patch-name> -o yaml`, and list all patches in your Omni instance using `omnictl get configpatch`. You'll have to clean the output up, especially by removing the compressed data line.
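On Linux/macOS, a sketch of the same flow without PowerShell (`uuidgen` stands in for New-Guid; the machine UUID, hostname, and the commented-out apply are placeholders):

```shell
# Generate a lowercase GUID (alternative to PowerShell's New-Guid).
GUID=$( (uuidgen || cat /proc/sys/kernel/random/uuid) 2>/dev/null | tr '[:upper:]' '[:lower:]' )
MACHINE_UUID="11111111-2222-3333-4444-555555555555"   # placeholder machine UUID

# Write the ConfigPatch wrapper shown above with the GUID stamped in.
cat > machine-patch.yaml <<EOF
metadata:
  namespace: default
  type: ConfigPatches.omni.sidero.dev
  id: 500-${GUID}
  labels:
    omni.sidero.dev/machine: ${MACHINE_UUID}
  annotations:
    name: 500-${MACHINE_UUID}-machine-patch
spec:
  data: |-
    machine:
      network:
        hostname: example-host
EOF

# Apply it to Omni (requires a configured omnictl):
# omnictl apply -f machine-patch.yaml
```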
Thank you, that is useful! I can probably make use of that to solve my issue right now. Do you know if there is any documentation on this approach, specifically on how to work with Omni CRs from code rather than from the UI?
But wrapping configs in the CR above will still require me either to maintain two sets of the config, or to script my apply flow so that the config patches are correctly wrapped in the CR when applied to Omni.
Removing those obstacles would be valuable to me!
Here is the documentation I'm aware of: https://omni.siderolabs.com/reference/cli
Currently, in my testing, I apply machine patches as above without tying them to Talos clusters (no cluster label), then use `omnictl cluster template sync` to create/update my clusters. My thinking was the same when trying to create a GitOps flow for Omni configs/clusters: drive it either through Argo CD (or another CD tool) or a pipeline. I haven't got there quite yet, but it should be doable.
Shameless self-plug: I have a GitHub Action in the marketplace for running omnictl in a pipeline. The action simply wraps my public Docker image, which you might find useful as well.
It's not quite GitOps in the ArgoCD/Flux sense, but with the right tweaks I think it could do what you're looking for (including the different sync targets).
Here's an example:
- https://github.com/jdmcmahan/home-ops/blob/main/.github/workflows/cluster-sync.yml
- https://github.com/jdmcmahan/home-ops/actions/runs/14893929617
Thanks @jdmcmahan. I was thinking of doing something similar, where I:
- Store my patches in the correct format in my gitops repo: https://github.com/devantler-tech/platform
- Update my `deploy.yaml` workflow to apply the patches as part of the bootstrap step.
I might just download the binary in the workflow to minimize dependencies, but my approach will be similar to what you suggest: calling omnictl commands from the workflow.
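A minimal sketch of such a bootstrap step, assuming omnictl is already on PATH (in CI you would first download the release binary from the siderolabs/omni releases page); the `patches/` directory name is illustrative:

```shell
# Apply every wrapped ConfigPatch file in a directory with omnictl.
OMNICTL=${OMNICTL:-omnictl}      # override for dry runs, e.g. OMNICTL=echo
mkdir -p patches                 # illustrative directory of wrapped patches
for patch in patches/*.yaml; do
  [ -e "$patch" ] || continue    # no-op when the directory is empty
  "$OMNICTL" apply -f "$patch"
done
```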
I was able to put together a very rough POC using argocd to self manage omni and the k8s cluster config updates.
- omni: https://github.com/dajrivera/omni-infra
- talos-k8s-cluster: https://github.com/dajrivera/k8s-cluster
With the above, in the k8s-cluster repo, I can execute `omnictl cluster template sync -f k8s-cluster.yaml` and, apart from applying the Omni API service account secret, it will bring up a fully functional cluster that applies updates in a GitOps fashion. Currently, after making modifications under the helm or kustomize directories, I run the generate-manifests.sh script manually before committing, but I could also set it up as a git pre-commit hook to generate the manifests automatically when committing changes (so I won't forget... again.).
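The pre-commit idea can be sketched like this (the hook body and the staged directories are assumptions based on the repo layout described above):

```shell
# Install a git pre-commit hook that regenerates manifests before each commit.
mkdir -p .git/hooks              # present in any real git checkout
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Regenerate manifests and stage the results so the commit includes them.
./generate-manifests.sh || exit 1
git add helm/ kustomize/
EOF
chmod +x .git/hooks/pre-commit
```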
Needs a lot of work but it works!
Another option for managing cluster templates via a pipeline
https://github.com/ktijssen/omni-cicd