Kong Upstream is reset by system automatically
Is there an existing issue for this?
- [X] I have searched the existing issues
Kong version ($ kong version)
3.0.0
Current Behavior
I added an upstream as below:
{
"next": null,
"data": [{
"tags": ["managed-by-ingress-controller"],
"hash_on": "header",
"name": "myservice.mynamespace.8065.svc",
"slots": 10000,
"hash_on_query_arg": null,
"hash_on_uri_capture": null,
"hash_on_header": "X-Consistent-Hash",
"client_certificate": null,
"hash_on_cookie": null,
"hash_on_cookie_path": "/",
"id": "a15c8588-98a2-498a-90d0-4860075aa6a5",
"healthchecks": {
"threshold": 0,
"active": {
"http_path": "/",
"https_sni": null,
"https_verify_certificate": true,
"healthy": {
"successes": 0,
"http_statuses": [200, 302],
"interval": 0
},
"unhealthy": {
"http_statuses": [429, 404, 500, 501, 502, 503, 504, 505],
"tcp_failures": 0,
"timeouts": 0,
"http_failures": 0,
"interval": 0
},
"headers": null,
"timeout": 1,
"concurrency": 10,
"type": "http"
},
"passive": {
"unhealthy": {
"http_statuses": [429, 500, 503],
"tcp_failures": 0,
"timeouts": 0,
"http_failures": 0
},
"type": "http",
"healthy": {
"successes": 0,
"http_statuses": [200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308]
}
}
},
"hash_fallback": "query_arg",
"created_at": 1685013308,
"hash_fallback_header": null,
"hash_fallback_query_arg": "X-Consistent-Hash",
"hash_fallback_uri_capture": null,
"host_header": null,
"algorithm": "consistent-hashing"
}]
}
It worked normally for a period of time.
Today I noticed that my service was no longer working, and found that the upstream configuration had been reset to the default:
{
"next": null,
"data": [{
"tags": ["managed-by-ingress-controller"],
"hash_on": "none",
"name": "myservice.mynamespace.8065.svc",
"slots": 10000,
"hash_on_query_arg": null,
"hash_on_uri_capture": null,
"hash_on_header": null,
"client_certificate": null,
"hash_on_cookie": null,
"hash_on_cookie_path": "/",
"id": "a15c8588-98a2-498a-90d0-4860075aa6a5",
"healthchecks": {
"threshold": 0,
"active": {
"http_path": "/",
"https_sni": null,
"https_verify_certificate": true,
"healthy": {
"successes": 0,
"http_statuses": [200, 302],
"interval": 0
},
"unhealthy": {
"http_statuses": [429, 404, 500, 501, 502, 503, 504, 505],
"tcp_failures": 0,
"timeouts": 0,
"http_failures": 0,
"interval": 0
},
"headers": null,
"timeout": 1,
"concurrency": 10,
"type": "http"
},
"passive": {
"unhealthy": {
"http_statuses": [429, 500, 503],
"tcp_failures": 0,
"timeouts": 0,
"http_failures": 0
},
"type": "http",
"healthy": {
"successes": 0,
"http_statuses": [200, 201, 202, 203, 204, 205, 206, 207, 208, 226, 300, 301, 302, 303, 304, 305, 306, 307, 308]
}
}
},
"hash_fallback": "none",
"created_at": 1685013308,
"hash_fallback_header": null,
"hash_fallback_query_arg": null,
"hash_fallback_uri_capture": null,
"host_header": null,
"algorithm": "round-robin"
}]
}
Expected Behavior
The upstream configuration should not be reset to the default one.
Steps To Reproduce
No response
Anything else?
kong-ingress-controller: 2.6.0
Update: This upstream is reset whenever the Kong pods are redeployed. But when I update the upstream, I can see that it is persisted to my Postgres database. So why does redeploying the Kong pods cause this issue? Is this a bug in Kong?
Ping @randmonkey, could you take a look? We also recommend using the latest version of Kong, either from the master branch or the most recent release.
@luozhouyang How did you change the fields of the upstream? Did you configure them by calling the Admin API directly? If so, they will be overwritten when KIC syncs its configuration to the Kong gateway.
The recommended way to configure custom fields of upstreams is to use the upstream field in the KongIngress CRD: https://docs.konghq.com/kubernetes-ingress-controller/latest/references/custom-resources/#kongingressupstream
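For reference, a KongIngress carrying the same hashing settings as the original upstream payload above might look like the following. This is a minimal sketch; the resource name is hypothetical, and the namespace is taken from the upstream name in the report:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: myservice-hashing   # hypothetical name
  namespace: mynamespace
upstream:
  # Mirrors the fields the user set via the Admin API
  algorithm: consistent-hashing
  hash_on: header
  hash_on_header: X-Consistent-Hash
  hash_fallback: query_arg
  hash_fallback_query_arg: X-Consistent-Hash
```

The KongIngress is then attached to the Kubernetes Service with the konghq.com/override annotation (e.g. konghq.com/override: myservice-hashing), so KIC includes these settings in every sync instead of overwriting them with defaults.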
This issue is marked as stale because it has been open for 14 days with no activity.
Dear contributor,
We are automatically closing this issue because it has not seen any activity for three weeks. We're sorry that your issue could not be resolved. If any new information comes up that could help resolve it, please feel free to reopen it.
Your contribution is greatly appreciated!
Please have a look at our pledge to the community for more information.
Sincerely, Your Kong Gateway team