avinashchundu9
My pool mode on CIS is auto, but my service is of type NodePort, so CIS is creating a pool with NodePorts. Here are my sample YAML files: ``` apiVersion:...
Thank you for pointing that out. I updated the YAMLs.
Here is the values file, args:
```
bigip_partition: test
as3-validation: true
bigip_url: bigip_url
custom-resource-mode: true
extended-spec-configmap: f5-cis/global-spec-config
insecure: true
ipam: false
local-cluster-name:
log-as3-response: true
log_level: DEBUG
multi-cluster-mode: primary
pool_member_type: auto
bigip_login_secret:...
```
We reproduced this issue multiple times. Please set up some time with us based on your availability; we can help you reproduce it.
I understand this is a generic problem for any application, and in normal situations the impact is limited to that application. But since all applications rely on CoreDNS for DNS resolution, this...
I tried max_fails and NodeLocal DNSCache with various configurations, but none of them helped.
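For context, `max_fails` is an option of the CoreDNS `forward` plugin. A hedged sketch of where it sits in a Corefile (the upstream and the values here are illustrative, not this cluster's actual configuration):

```
.:53 {
    errors
    # Illustrative upstream; max_fails 2 marks an upstream unhealthy
    # after two consecutive health-check failures.
    forward . /etc/resolv.conf {
        max_fails 2
    }
    cache 30
}
```

Note that `max_fails` only governs how CoreDNS treats *its* upstreams; it does not help when the client-to-CoreDNS path itself is broken, which matches the symptom described above.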
The network failure is from the client pod to CoreDNS. Here is one way you can reproduce this issue: label just one of the CoreDNS pods with >...
But this is not a very uncommon situation: a node whose CoreDNS pod loses network connectivity will show the same behavior, and that can impact all clients in the cluster.
I overthought my previous comment. If the node loses network connectivity, Kubernetes will mark it as unavailable and move the pods, which should address that case. This issue might be...
One proposal is a sidecar on the CoreDNS pods that checks their network connectivity. If the sidecar detects a network failure, it exits, so Kubernetes can act on the failing pod.
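A minimal sketch of such a sidecar in Python, assuming a plain TCP probe to some in-cluster address (the probe target, failure threshold, and interval below are hypothetical placeholders, not settings from this issue):

```python
# Hypothetical connectivity-probe sidecar: if the network from this pod is
# broken for several consecutive checks, exit non-zero so Kubernetes can
# restart the container / fail the pod's health checks.
import socket
import sys
import time

# Assumed probe target: the in-cluster API server VIP (placeholder address).
PROBE_ADDR = ("10.96.0.1", 443)


def network_ok(addr=PROBE_ADDR, timeout=2.0):
    """Return True if a TCP connection to addr succeeds within timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


def main(max_failures=3, interval=5.0):
    failures = 0
    while True:
        if network_ok():
            failures = 0  # reset on any successful probe
        else:
            failures += 1
            if failures >= max_failures:
                # Consecutive failures: give up and let Kubernetes react.
                sys.exit(1)
        time.sleep(interval)


if __name__ == "__main__":
    main()
```

A TCP connect is used here instead of an actual DNS query only to keep the sketch dependency-free; a real sidecar would more likely probe CoreDNS's own `:8080/health` endpoint or issue a DNS lookup.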