Ready check does not include current database connectivity
Preflight checklist
- [X] I could not find a solution in the existing issues, docs, nor discussions.
- [X] I agree to follow this project's Code of Conduct.
- [X] I have read and am following this repository's Contribution Guidelines.
- [ ] This issue affects my Ory Cloud project.
- [ ] I have joined the Ory Community Slack.
- [ ] I am signed up to the Ory Security Patch Newsletter.
Describe the bug
The health/ready endpoint returns OK even when the database is no longer reachable. I would expect it to check this, because the docs state: "This endpoint returns a 200 status code when the HTTP server is up running and the environment dependencies (e.g. the database) are responsive as well."
Reproducing the bug
- Set up a Postgres service in the k8s cluster
- Deploy Keto to the cluster with the DSN pointing to the Postgres service
- Kill Postgres
- Call Keto's 'health/ready' endpoint -> returns OK
However, when I try to insert or query tuples, I am of course greeted with an error code.
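For reference, a minimal Go snippet to check the reported behavior (hypothetical helper, not Keto code; it assumes the health endpoints are reachable on the read API's default port 4466, so adjust the URL to your deployment):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Adjust host/port to your deployment (e.g. the k8s service name).
	resp, err := http.Get("http://127.0.0.1:4466/health/ready")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// With Postgres killed, this still reports 200, which is the
	// behavior described above.
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}
```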
Relevant log output
No response
Relevant configuration
No response
Version
0.6.0-alpha.1
On which operating system are you observing this issue?
No response
In which environment are you deploying?
Kubernetes with Helm
Additional Context
No response
Good point, that should really be the case.
The ready-checkers are registered here: https://github.com/ory/keto/blob/e9e6385fabeb333b9115cbb21276864e6d561640/internal/driver/registry_default.go#L88 Currently none are registered, which means that Keto appears healthy as soon as it runs.
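To illustrate the idea (a sketch with made-up names only, not the actual healthx API that Keto uses; DSN and port are placeholders): a registered ready checker that pings the database would start failing as soon as the connection is gone.

```go
package main

import (
	"context"
	"database/sql"
	"net/http"
	"time"

	_ "github.com/lib/pq" // Postgres driver, as in the repro above
)

// readyChecker mirrors the idea of the checkers registered in
// registry_default.go: a named function that returns an error when not ready.
type readyChecker func(ctx context.Context) error

func readyHandler(checks map[string]readyChecker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()
		for name, check := range checks {
			if err := check(ctx); err != nil {
				http.Error(w, name+": "+err.Error(), http.StatusServiceUnavailable)
				return
			}
		}
		w.Write([]byte(`{"status":"ok"}`))
	}
}

func main() {
	// sql.Open does not connect yet; the ping in the checker does.
	db, err := sql.Open("postgres", "postgres://keto:secret@postgres:5432/keto?sslmode=disable")
	if err != nil {
		panic(err)
	}

	checks := map[string]readyChecker{
		// A database checker like this is what is currently missing.
		"database": func(ctx context.Context) error { return db.PingContext(ctx) },
	}

	http.Handle("/health/ready", readyHandler(checks))
	http.ListenAndServe(":4466", nil)
}
```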
From a Kubernetes point of view, you don't want to include external dependencies, such as a database, in your readiness checks. Otherwise you might end up in a cascading-failure scenario where all pods are taken down and unable to serve requests, and you are greeted with some generic error that doesn't really tell you what is causing the issue. I believe the best practice is to rely on monitoring to determine what is causing the errors; if you need to wait for the database to be up, you can use an initContainer or lifecycle hooks.
Interesting standpoint, maybe @Demonsthere can give his opinion on this? Keto is generally not able to serve any request without a working database connection. Init migration jobs will also not complete, so you will end up in an error loop anyways on helm install. But yeah, killing a pod just because the database is unavailable is also not helpful :thinking:
Imho, from a deployment perspective:
- Keto should report ready once it has started and is running in a stable state. Since we use an init job that has to connect to the DB, and without which the main Keto deployment won't even start, we kind of assume that if Keto is running, the connection to the DB must have worked at least for the migration part. This could be improved by verifying in the ready check that we can open a connection to the DB.
- As for periodic health checks: imho a temporary downtime of the DB can always happen, and as pointed out above we should not cascade-restart all pods because of that, but maybe mark the pod as unhealthy with a more specific health check?
Sounds good, so basically we would ping the database on startup and report as ready once that succeeded. Further ready checks will not ping the database again, but always return true. Later we can add a check that pings the db periodically.
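A rough sketch of that pattern (hypothetical helper, not Keto code): ping the database only until the first success, then always report ready.

```go
package readycheck

import (
	"context"
	"database/sql"
	"sync/atomic"
)

// startupDBCheck pings the database until the first successful ping, then
// always reports ready, so a later database outage does not flip the pod
// back to not-ready.
type startupDBCheck struct {
	db    *sql.DB
	ready atomic.Bool // requires Go 1.19+; use a mutex-guarded bool on older versions
}

func (c *startupDBCheck) Check(ctx context.Context) error {
	if c.ready.Load() {
		return nil
	}
	if err := c.db.PingContext(ctx); err != nil {
		return err
	}
	c.ready.Store(true)
	return nil
}
```

The periodic variant mentioned above would simply clear the ready flag again whenever a later ping fails.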
In Kubernetes, we can define the failure threshold to retry before restarting pods. Also, we can define initialDelaySeconds to wait for some operational tasks to be complete before sending health/readiness requests.
IMHO, adding a database health check might be good as well.
In the helm charts the values for probes are exposed and can be configured to your liking :)
Edit: we actually ran into a related issue some time ago 😅 which caused us to rethink the setup a bit. We have now exposed the option to change the probes to custom ones, as seen here in Kratos, and will work on reworking the health checks in general.
Isn't this solved now? I think one of the probes now checks DB connectivity
They would have to be added here right? https://github.com/ory/keto/blob/9215c0670541b36a279fa682b685aba0381a0ae3/internal/driver/registry_default.go#L122 Maybe that was a different project, and we can transfer the change?
:O Yes, definitely, that needs to be checked! Otherwise we could run into an outage if we encounter one of those SQL connection bugs with cockroach that need a pod restart
https://github.com/ory/kratos/blob/4181fbc381b46df5cd79941f20fc885c7a1e1b47/driver/registry_default.go#L255-L273
Should be possible to more or less copy from Kratos: https://github.com/ory/kratos/blob/master/driver/registry_default.go#L252-L280
I just ran into an issue using PostgreSQL as the backend, with calls to Keto reporting something like:
unable to fetch records...terminating connection due to administrator command (SQLSTATE 57P01) with gRPC code Unknown.
DB was up and retries didn't work. However, restarting the pod worked. I am wondering if there's a chance of this issue making it over the finish line?