# feat: API for adding or removing configuration
### Describe the feature request
I'd like Gatus to have an API that allows managing its configuration, so that I can create a Kubernetes controller (or maybe extend https://github.com/stakater/IngressMonitorController) to watch Ingress resources and automatically create checks in Gatus for them.

An alternative could be to leverage something like https://github.com/TwiN/gatus/issues/326 and have a sidecar such as https://github.com/kiwigrid/k8s-sidecar pass the configuration to Gatus, but that would have more moving parts.
### Why do you personally want this feature to be implemented?
To be able to programmatically set up checks for services running in a Kubernetes cluster.
### How long have you been using this project?

_No response_
### Additional information

_No response_
Gatus automatically monitors the configuration file for changes. In Kubernetes, that "file" usually refers to a mounted ConfigMap.
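For context, this is roughly what that looks like; a minimal sketch, where the `gatus-config` ConfigMap name is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatus
spec:
  selector:
    matchLabels:
      app: gatus
  template:
    metadata:
      labels:
        app: gatus
    spec:
      containers:
        - name: gatus
          image: twinproduction/gatus:latest
          volumeMounts:
            - name: config
              mountPath: /config  # Gatus reads /config/config.yaml in the container
      volumes:
        - name: config
          configMap:
            name: gatus-config  # illustrative: the ConfigMap holding config.yaml
```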
What we could do is have an annotation on the ingress that looks like this:
```yaml
gatus.io/config: |
  endpoints:
    - name: example
      url: https://example.org
      interval: 10m
      conditions:
        - "[STATUS] == 200"
```
and then have a controller merge the static ConfigMap and the configurations from the annotations into a single "autogenerated" ConfigMap. (Alternatively, rather than an autogenerated ConfigMap, it could be an autogenerated file inside the Gatus container built from the ConfigMap and the annotations, though that would require Gatus itself to be the controller, and, well, Kubernetes' dependencies are numerous.)
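To make the merge concrete, the autogenerated ConfigMap might combine the static configuration with the annotation-derived endpoints; a sketch, where all resource names are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatus-autogenerated  # hypothetical name
data:
  config.yaml: |
    endpoints:
      # from the static configuration
      - name: static-check
        url: https://static.example.org
        conditions:
          - "[STATUS] == 200"
      # from the gatus.io/config annotation above
      - name: example
        url: https://example.org
        interval: 10m
        conditions:
          - "[STATUS] == 200"
```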
With this approach, we may actually want to use Services rather than Ingresses, simply because tons of ingress controllers have their own CRDs in place of the standard Ingress resource.
One of the risks I can think of is what would happen if the cluster were very large and the configuration were updated fairly often, but fortunately, the goroutine that listens for file changes is designed to not reload said configuration too frequently: https://github.com/TwiN/gatus/blob/a1c8422c2ff2b9d0a6f184c99e4dc728d3f2cd75/main.go#L88
I created this for now, but I don't know what kind of implementation we want to go for yet: https://github.com/TwiN/gatus-controller
I still think that having an API so we can have independent controllers would be better. For example, we might want to run Gatus outside of the cluster and have it updated by a controller running in the cluster...
The file/ConfigMap updates are more of a workaround. And for that, I think that supporting multiple configuration files, as proposed in https://github.com/TwiN/gatus/issues/326, would help. That way we could use a sidecar (like https://github.com/kiwigrid/k8s-sidecar) to watch for ConfigMaps or Secrets with a specific label and copy them into the container, similarly to what Grafana does to load dashboards.
Regarding the controller design, here are some examples: https://github.com/stakater/IngressMonitorController, which uses custom resources; https://gitlab.com/checkelmann/synop, which uses annotations; and https://github.com/luisdavim/synthetic-checker (a PoC of mine), which also uses annotations but can read the config from a Secret referenced by an annotation.
I think going the gatus-operator route would be a pretty neat idea, but that assumes @TwiN has the appetite for supporting a Kubernetes operator. As a very high-level example:
**Custom resource to define a monitor**
```yaml
apiVersion: gatus.io/v1beta1
kind: GatusMonitor
metadata:
  name: my-custom-monitor
spec:
  url: blog.default.svc.cluster.local
  type: http
  method: get
  interval: 5m
  instance:
    secretRef: my-gatus-instance
```
**Secret containing the instance details**
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-gatus-instance
stringData:
  url: https://my-gatus-instance.com
  api-key: abc123
```
This would need to be expanded on to support alerts, but I wanted to keep it simple.
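For instance (the `alerts` field below is purely hypothetical), alerts could be nested under the same spec, mirroring Gatus' existing alert options:

```yaml
apiVersion: gatus.io/v1beta1
kind: GatusMonitor
metadata:
  name: my-custom-monitor
spec:
  url: blog.default.svc.cluster.local
  interval: 5m
  alerts:                    # hypothetical field for this CRD sketch
    - type: slack            # mirrors Gatus' alert "type"
      failure-threshold: 3   # mirrors Gatus' "failure-threshold"
      send-on-resolved: true # mirrors Gatus' "send-on-resolved"
```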
@onedr0p To keep it as simple as possible, I would much rather leverage annotations with respect to the Kubernetes implementation.
The controller would simply have to retrieve the configuration from the annotations and send an HTTP request to Gatus' admin API (or perhaps "render" a configuration into a ConfigMap, which would be mounted on the Gatus pod; this is probably a better idea, and it's possible because support for using multiple configuration files was added not long ago).
Ultimately, the configuration on the Ingress resource (though using the Service resource would perhaps be wiser, as different ingress controllers use different CRDs) would only define one or a handful of endpoints related to the service in question, which means that the size of the annotation wouldn't be too large.
Something like what I mentioned before:
```yaml
metadata:
  annotations:
    gatus.io/config: |
      endpoints:
        - name: example
          url: https://example.org
          interval: 10m
          conditions:
            - "[STATUS] == 200"
```
Annotations sound good for a phase-one approach, but ideally, if you were to stick with annotations, it would be better to break the configuration up instead of having one multi-line config value.
```yaml
annotations:
  gatus.io/enabled: "true"
  gatus.io/name: my-ingress      # optional, defaults to the ingress name
  gatus.io/url: https://thing.io # optional, defaults to the first ingress host URL
  gatus.io/statusCode: "200"     # optional, defaults to 200
```
Dumping a whole endpoints object into the annotation does give you more room for customization, but information like the name and URL can already be retrieved from Kubernetes.
In my example, I would just have a single enabled annotation, and the operator would build the endpoints object for us using the defaults.
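For illustration, from the annotations above the operator might render an endpoint like this (the interval is an assumed operator default, not something defined anywhere in this proposal):

```yaml
endpoints:
  - name: my-ingress      # from gatus.io/name
    url: https://thing.io # from gatus.io/url
    interval: 5m          # assumed operator default
    conditions:
      - "[STATUS] == 200" # from gatus.io/statusCode
```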
That'd allow people to configure no more than a single endpoint per Service.
I think going for the multi-line config is a much better solution, both from a maintenance perspective (e.g. not having to update the controller every single time a new parameter is added) and from a usability perspective (there's no need to document each annotation; you can just document a single one, and the configuration is the same outside of Kubernetes too).
Take `proxy.istio.io/config` for example.
Regardless, this may be more of a personal preference, but as a seasoned Kubernetes engineer, I would take a single annotation to configure something like Gatus over a set of individual annotations like the above.
In the case you laid out, wouldn't it be better to just support the application looking for Secrets/ConfigMaps holding that configuration instead of putting it all in an annotation (which is pretty ugly IMO)? If we're not using features of the Kubernetes APIs to extract info (e.g. the ingress host name or the ingress name), I don't see why Service/Ingress annotations would be a good path forward when we're just dumping in some static config.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-gatus
  namespace: default
  labels:
    gatus.io/config: "true"
data:
  config.yaml: |
    endpoints:
      - name: example
        url: https://example.org/
        interval: 10m
        conditions:
          - "[STATUS] == 200"
```
The operator can then look for ConfigMaps or Secrets that have the `gatus.io/config: "true"` label, build the config from them, and use it.
Alternatively, for a solution available today, people could use this container as a sidecar to Gatus, which pretty much eliminates the need for an operator. Basically, the sidecar would scan for any ConfigMaps or Secrets with the label `gatus.io/config: "true"` and, when found, mount the resource's `data` or `stringData` into the Gatus container at the specified path. Gatus can then merge all the YAML files into one and use the result as its config.
> Alternatively, for a solution available today, people could use this container as a sidecar to Gatus, which pretty much eliminates the need for an operator. Basically, the sidecar would scan for any ConfigMaps or Secrets with the label `gatus.io/config: "true"` and, when found, mount the resource's `data` or `stringData` into the Gatus container at the specified path. Gatus can then merge all the YAML files into one and use the result as its config.
I mentioned that in the original post.
It works quite well. I'm very happy with the results, and the best thing is that @TwiN doesn't need to add anything more to support this. I could contribute some docs around it if needed.
Here's the implementation in my Flux-managed home Kubernetes cluster.
https://github.com/onedr0p/home-ops/commit/158d4e96cfbec7d2e2ab6b7c88d95662a2b94be0
@onedr0p Yes, the gatus.io/enabled annotation and the container sidecar are a nice solution. Could you please add documentation about that?
Here's what I did using the Helm Chart via Terraform (anonymised).
First, in my `values.yaml`, I add a bunch of configuration:
- Enable service account creation for `gatus`:

  ```yaml
  serviceAccount:
    create: true
    name: gatus
    autoMount: true
  ```
- Add two sidecars to `gatus` (only `config-sync` is actually required; `bash` is just for inspecting/debugging):

  ```yaml
  sidecarContainers:
    bash:
      image: bash:latest
      imagePullPolicy: IfNotPresent
      command: ["watch"]
      args: ["ls", "/shared-config/"]
      volumeMounts:
        - { name: shared-config, mountPath: /shared-config }
    config-sync:
      image: ghcr.io/kiwigrid/k8s-sidecar:1.25.3
      imagePullPolicy: IfNotPresent
      env:
        - { name: FOLDER, value: /shared-config }  # where the sidecar writes matching resources
        - { name: LABEL, value: gatus.io/enabled } # label the sidecar watches for
        - { name: NAMESPACE, value: ALL }          # watch all namespaces
        - { name: RESOURCE, value: both }          # sync both ConfigMaps and Secrets
      volumeMounts:
        - { name: shared-config, mountPath: /shared-config }
  ```
- And we also define the additional volume mount for the `gatus` container itself (the sidecars declare theirs inline above; see the note after this list):

  ```yaml
  extraVolumeMounts:
    - name: shared-config
      mountPath: /shared-config
      readOnly: false
  ```
- And finally, we add labels to all `gatus` resources (which means the auto-generated `config.yaml` ConfigMap will get these labels too):

  ```yaml
  extraLabels:
    "gatus.io/enabled": "true"
  ```
Secondly, we update Gatus to use a configuration directory by setting the environment variable `GATUS_CONFIG_PATH`. I've done this in Terraform, but this can also be set in `values.yaml`:
resource "helm_release" "gatus" {
...
set {
name = "env.GATUS_CONFIG_PATH"
value = "/shared-config/"
}
...
}
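Given that `set` block, the equivalent in `values.yaml` would simply be:

```yaml
env:
  GATUS_CONFIG_PATH: /shared-config/
```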
Now you're ready to rock.
Any ConfigMap created with the label `gatus.io/enabled: "true"` will be automatically picked up by Gatus.
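For example (names are illustrative), a ConfigMap like this one would be synced into `/shared-config/` by the sidecar and merged into the running configuration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: blog-gatus-config  # illustrative name
  labels:
    gatus.io/enabled: "true"
data:
  blog.yaml: |
    endpoints:
      - name: blog
        url: https://blog.example.org
        interval: 10m
        conditions:
          - "[STATUS] == 200"
```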