kong
Buggy behavior after a failed health check recovers
Is there an existing issue for this?
- [X] I have searched the existing issues
Kong version (`$ kong version`)
3.5.0 (With KIC 2.12)
Current Behavior
Sometimes, when a health check fails and then recovers, Kong still responds with 503 for that service
Expected Behavior
Responses recover once the health check recovers
Steps To Reproduce
- In a K8s environment
- Bring up a project and create an Ingress and an UpstreamPolicy (with a TCP or HTTP health check [TCP preferred])
- Configure the health check to fail for some time (you will get 503 errors)
- Make the health check succeed again (you may still get 503 errors)
Anything else?
No response
The behavior sounds expected to me. The health check status does not update immediately, and the passive health checker cannot predict if the next request will succeed. Could you elaborate?
@StarlightIbuki Hi. But after the interval passes it should recover, and it doesn't. Also, clearing the Kong cache via the Admin API fixes the issue.
It happens when we have a rolling update on our K8s Deployment
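For reference, the cache-clearing workaround mentioned above can be issued against the Admin API (assuming it listens on `localhost:8001`; adjust for your deployment):

```shell
# Purge this Kong node's entire in-memory cache via the Admin API.
# localhost:8001 is an assumption; use your actual Admin API address.
curl -i -X DELETE http://localhost:8001/cache
```

In a multi-node deployment this has to be run against each node, since the cache is node-local.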
@mhkarimi1383 Is the upstream failing in a predictable or controllable manner, so that you are sure the reported status does not reflect the actual state?
@StarlightIbuki Yes. I sent a request to that pod and monitored the health check endpoint using a blackbox exporter pointing to its K8s Service.
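The health status Kong itself holds for the targets can also be compared against the external monitoring via the Admin API (the upstream name here is a placeholder):

```shell
# Ask Kong what it currently believes about the upstream's targets.
# "my-upstream" is a placeholder; list real names with GET /upstreams.
curl -s http://localhost:8001/upstreams/my-upstream/health
```

A mismatch between this endpoint and the actual 503 responses would confirm a stale-state problem rather than a genuinely failing upstream.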
@mhkarimi1383 Could you share the config that you are using?
@StarlightIbuki Here is my KongIngress spec:

```yaml
upstream:
  healthchecks:
    active:
      healthy:
        interval: 5
        successes: 3
      type: tcp
      unhealthy:
        tcp_failures: 1
        interval: 5
```
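For context, that fragment sits inside a KongIngress object roughly like the following sketch (the metadata name is a placeholder; the top-level `upstream` field layout follows the `configuration.konghq.com/v1` CRD used by KIC 2.x):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: my-kongingress   # placeholder name
upstream:
  healthchecks:
    active:
      type: tcp
      healthy:
        interval: 5
        successes: 3
      unhealthy:
        interval: 5
        tcp_failures: 1
```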
5s seems a short interval. How long do you wait before inspecting the status?
@StarlightIbuki About 5 minutes
I still do not really understand the reproduction steps. When the health checker reports green and you get 503, what real status are you expecting?
Yes. Kong says the project is unhealthy but it is actually healthy; clearing the Kong cache fixes the problem.
Sorry, but let me confirm my understanding: for step 4, we configure the upstream to work again, and we observe the health checker still reporting an unhealthy condition?
@StarlightIbuki Yes
This issue is marked as stale because it has been open for 14 days with no activity.
I have reproduced this issue locally using the master branch. @mhkarimi1383, thanks for your report.
Internal ticket for tracking: KAG-4588
```yaml
_format_version: "3.0"
_transform: true
services:
- name: service_1
  host: upstream_1
  routes:
  - name: route_1
    paths:
    - /1
upstreams:
- name: upstream_1
  targets:
  - target: localhost:80
  healthchecks:
    active:
      timeout: 10
      healthy:
        interval: 5
      unhealthy:
        http_statuses: [500]
        http_failures: 1
        interval: 5
```
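The reported behavior can be observed from the outside with a couple of requests against this config (the ports are Kong's defaults, 8000 for the proxy and 8001 for the Admin API, and are assumptions):

```shell
# While the upstream on localhost:80 is down, the route returns 503:
curl -i http://localhost:8000/1

# After bringing the upstream back, the active checker (5s interval)
# should flip the target back to healthy; compare what Kong reports
# with what the proxy actually returns:
curl -s http://localhost:8001/upstreams/upstream_1/health
curl -i http://localhost:8000/1
```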
Thanks
Sometimes clearing the cache does not work and we have to wait (for example, 20 minutes) or restart Kong to fix the problem.