
Ready check does not include current database connectivity

Open Waidmann opened this issue 3 years ago • 14 comments

Preflight checklist

Describe the bug

The health/ready endpoint returns OK even after database connectivity has been lost. I would expect it to check this, because according to the docs: "This endpoint returns a 200 status code when the HTTP server is up running and the environment dependencies (e.g. the database) are responsive as well."

Reproducing the bug

  1. Set up a Postgres service in a k8s cluster
  2. Deploy Keto to the cluster with the DSN pointing to the Postgres service
  3. Kill Postgres
  4. Call Keto's 'health/ready' endpoint -> returns OK

However, when I try to insert/query tuples, I am obviously greeted with an error code.

Relevant log output

No response

Relevant configuration

No response

Version

0.6.0-alpha.1

On which operating system are you observing this issue?

No response

In which environment are you deploying?

Kubernetes with Helm

Additional Context

No response

Waidmann avatar Feb 07 '22 16:02 Waidmann

Good point, that should really be the case.

zepatrik avatar Feb 17 '22 09:02 zepatrik

The ready-checkers are registered here: https://github.com/ory/keto/blob/e9e6385fabeb333b9115cbb21276864e6d561640/internal/driver/registry_default.go#L88 Currently none are registered, which means that Keto appears healthy as soon as it runs.
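
For illustration, a minimal sketch of what such a ready checker could look like once registered there; the package, names, and wiring are assumptions for this example, not Keto's actual code:

```go
// Illustrative only: a ready checker that fails when the database
// can no longer be reached, so /health/ready would stop returning 200.
package readiness

import (
	"context"
	"database/sql"
	"time"
)

// DatabaseReadyChecker returns a check function that pings the database
// with a short timeout on every readiness probe.
func DatabaseReadyChecker(db *sql.DB) func(ctx context.Context) error {
	return func(ctx context.Context) error {
		ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
		defer cancel()
		return db.PingContext(ctx)
	}
}
```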

zepatrik avatar Feb 23 '22 09:02 zepatrik

From a Kubernetes point of view, you don't want to include external dependencies, such as a database, in your readiness checks. Otherwise you might end up in a cascading-failure scenario where all pods are taken down and unable to serve requests, and you are greeted with some generic error that doesn't really tell you what is causing the issue. I believe the best practice is to rely on monitoring to determine what is causing the errors; if you need to wait for the database to be up, you can use an initContainer or lifecycle hooks.

nickjn92 avatar Mar 15 '22 08:03 nickjn92

Interesting standpoint, maybe @Demonsthere can give his opinion on this? Keto is generally not able to serve any request without a working database connection. Init migration jobs will also not complete, so you will end up in an error loop anyways on helm install. But yeah, killing a pod just because the database is unavailable is also not helpful :thinking:

zepatrik avatar Mar 15 '22 08:03 zepatrik

Imho, from a deployment perspective:

  • keto should report ready once it has started and is running in a stable state. Since we use an init job that has to connect to the DB, and without which the main keto deployment won't even start, we kind of assume that if keto is running then the connection to the DB must have been working at least for the migration part. This could be improved by verifying in the ready check that we can open a connection to the DB
  • as for periodic health checks, imho a temporary downtime of the DB can always happen, and as pointed out we should not cascade-restart all pods because of that, but maybe mark the pod as unhealthy with a more specific health check?

Demonsthere avatar Mar 15 '22 09:03 Demonsthere

Sounds good, so basically we would ping the database on startup and report as ready once that succeeded. Further ready checks will not ping the database again, but always return true. Later we can add a check that pings the db periodically.
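
A rough sketch of that behaviour (all names here are made up for illustration, not Keto's actual implementation):

```go
// Illustrative only: ping the DB until the first success, then let every
// subsequent ready check pass without touching the database again.
package readiness

import (
	"context"
	"database/sql"
	"errors"
	"net/http"
	"sync/atomic"
	"time"
)

// StartupCheck reports ready once the database has answered a single ping.
type StartupCheck struct {
	ready atomic.Bool
}

// WaitForDB retries pinging the database until it succeeds, then marks the
// check as ready. Meant to be run once in the background at startup.
func (s *StartupCheck) WaitForDB(ctx context.Context, db *sql.DB) {
	for {
		pingCtx, cancel := context.WithTimeout(ctx, 2*time.Second)
		err := db.PingContext(pingCtx)
		cancel()
		if err == nil {
			s.ready.Store(true)
			return
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(time.Second):
		}
	}
}

// Check never pings the database again after the initial success.
func (s *StartupCheck) Check(_ *http.Request) error {
	if s.ready.Load() {
		return nil
	}
	return errors.New("database has not been reachable yet")
}
```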

zepatrik avatar Mar 15 '22 09:03 zepatrik

In Kubernetes, we can define a failure threshold for how many probe failures to tolerate before restarting pods. Also, we can define initialDelaySeconds to wait for some operational tasks to complete before sending health/readiness requests.

IMHO, adding a database health check might be good as well.

mstrYoda avatar Jun 13 '22 20:06 mstrYoda

In the helm charts the values for probes are exposed and can be configured to your liking :)

Demonsthere avatar Jun 14 '22 06:06 Demonsthere

Edit: we actually ran into a related issue some time ago 😅 which caused us to rethink the setup a bit. We have now exposed the option to change the probes to custom ones, as seen here in kratos, and will work on reworking the health checks in general.

Demonsthere avatar Aug 10 '22 09:08 Demonsthere

Isn't this solved now? I think one of the probes now checks DB connectivity

aeneasr avatar Jan 18 '23 15:01 aeneasr

They would have to be added here right? https://github.com/ory/keto/blob/9215c0670541b36a279fa682b685aba0381a0ae3/internal/driver/registry_default.go#L122 Maybe that was a different project, and we can transfer the change?

zepatrik avatar Jan 19 '23 09:01 zepatrik

:O Yes, definitely, that needs to be checked! Otherwise we could run into an outage if we encounter one of those SQL connection bugs with cockroach that need a pod restart

https://github.com/ory/kratos/blob/4181fbc381b46df5cd79941f20fc885c7a1e1b47/driver/registry_default.go#L255-L273

aeneasr avatar Jan 19 '23 10:01 aeneasr

Should be possible to more or less copy from Kratos: https://github.com/ory/kratos/blob/master/driver/registry_default.go#L252-L280
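
For reference, the linked Kratos code roughly boils down to a named set of ready checks, one of which pings the database on each /health/ready call. A very loose approximation (type names and signatures here are assumptions for illustration, not the actual ory/x or Kratos API):

```go
// Illustrative only: Kratos-style named ready checks, including a database ping.
package readiness

import (
	"database/sql"
	"net/http"
)

// ReadyChecker is a single named readiness check.
type ReadyChecker func(r *http.Request) error

// Checkers builds the checks to hand to the health handler; a failing
// "database" check makes /health/ready report not-ready, which lets
// Kubernetes restart a pod stuck on a dead connection.
func Checkers(db *sql.DB) map[string]ReadyChecker {
	return map[string]ReadyChecker{
		"database": func(r *http.Request) error {
			return db.PingContext(r.Context())
		},
	}
}
```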

jonas-jonas avatar Aug 09 '23 15:08 jonas-jonas

I just ran into an issue using Postgresql as backend, with calls to Keto reporting something like:

unable to fetch records...terminating connection due to administrator command (SQLSTATE 57P01) with gRPC code Unknown.

DB was up and retries didn't work. However, restarting the pod worked. I am wondering if there's a chance of this issue making it over the finish line?

aran avatar Dec 14 '23 23:12 aran