kube-router
TCP connection reset after some hours of idle
Might be related to #521, but setting the IPVS timeout (sessionAffinityConfig.clientIP.timeoutSeconds) to 86400 does not help.
TCP connections from a pod to a service (in our case from an application to the PostgreSQL service, and likewise with the Elasticsearch service) are reset after some hours of idle. Which timeout may affect us in this case? We do not experience such issues on another cluster (OpenShift with VXLAN/iptables-based networking).
Configuration flags for IPVS timeouts have been merged into kube-proxy: https://github.com/kubernetes/kubernetes/issues/84041. Maybe add the same flags to kube-router?
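For reference, a sketch of what the kube-proxy side of this looks like. The flag names below are the ones kube-proxy gained from the linked issue; the exact durations are illustrative, and whether kube-router exposes identical flags should be checked against its own documentation:

```shell
# Illustrative kube-proxy invocation setting IPVS connection timeouts
# (durations here are examples, not recommendations):
kube-proxy --proxy-mode=ipvs \
  --ipvs-tcp-timeout=86400s \
  --ipvs-tcpfin-timeout=120s \
  --ipvs-udp-timeout=300s
```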
This seems reasonable. We'll see if we can work it in for the 1.2 release. In the meantime, we'd be open to PRs if you're interested in this feature.
I've configured those settings on all hosts manually, and the reset problem disappeared. So this is a solution.
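A sketch of how the timeouts can be set manually on each node with ipvsadm, which is presumably what was done here. The three values are the TCP, TCP FIN-wait, and UDP timeouts in seconds; 0 leaves a value unchanged. The specific numbers below are illustrative:

```shell
# Set IPVS connection timeouts on this node: tcp tcpfin udp (seconds).
# Example values only; 0 would keep the current setting.
ipvsadm --set 86400 120 300

# Verify the currently configured timeouts:
ipvsadm -l --timeout
```

Note this has to be applied on every node and does not survive a reboot, which is why first-class flags in kube-router would be preferable.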
Ok, I will try to implement it.
Awesome! We definitely appreciate it. Just keep in mind that we're currently overhauling a lot of the code base with GoBGP, Golang, and Kubernetes API updates for 1.1. It may be more helpful for you to wait a few weeks before submitting so that you don't have to rebase a bunch. See https://github.com/cloudnativelabs/kube-router/issues/945#issuecomment-656358614 for more context.
I resolved this problem by adding TCP keepalives on the Postgres side: --tcp_keepalives_idle=60 --tcp_keepalives_interval=60 --tcp_keepalives_count=5
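For anyone who can't change the server's startup flags, a sketch of applying the same keepalive settings to a running PostgreSQL via ALTER SYSTEM (the parameter names come from the comment above; the values are the same 60/60/5 used there):

```shell
# Persist the keepalive settings in postgresql.auto.conf, then reload.
psql -c "ALTER SYSTEM SET tcp_keepalives_idle = 60;"
psql -c "ALTER SYSTEM SET tcp_keepalives_interval = 60;"
psql -c "ALTER SYSTEM SET tcp_keepalives_count = 5;"
psql -c "SELECT pg_reload_conf();"
```

This works around the reset by keeping the IPVS connection entry fresh, rather than raising the IPVS timeout itself.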
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stale for 5 days with no activity.
I think we should probably wrap back around to this one. In general, the default IPVS timeouts of 5 minutes for UDP streams and 15 minutes for TCP streams are pretty generous, but it seems that some users need to be able to set them higher.