
Can bigip-ctlr be used to sync worker nodes to a GTM pool


Apologies for creating a non-standard issue, as it's just a question. If there is a better place to ask this, please let me know.

We have a setup which I think is probably unique. Instead of relying on LTM for Ingress, we use a combination of GTM and nginx-ingress-controller.

When creating a k8s cluster, we install nginx-ingress-controller as a DaemonSet, then create a GTM WideIP that represents all worker nodes, and we take nodes in and out of this pool as we add/remove nodes. This is manual and error prone. I've been reading the F5 CIS docs, and although they mention config parameters for GTM, I see no way to do what I want.
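For reference, the manual step today boils down to something like the sketch below: registering a new worker node behind the WideIP over iControl REST. This is purely illustrative; the pool, datacenter, and server names are placeholders, and the exact pool-member naming depends on how the GTM server/virtual-server objects are modeled.

```python
# Minimal sketch of the manual step we run today: registering a new worker
# node behind the GTM WideIP via iControl REST. All names (pool, datacenter,
# server, virtual server) are placeholders, and the REST payloads reflect how
# we happen to model nodes on the GTM; other layouts are possible.
import requests

BIGIP = "https://gtm.example.com"   # hypothetical GTM address
NODE_IP = "10.0.0.42"               # InternalIP of the new worker node
NODE_NAME = "worker-42"

session = requests.Session()
session.auth = ("admin", "admin")   # use proper credentials/tokens in practice
session.verify = False              # lab only; verify certificates in production

# 1. A generic-host server object with one virtual server per worker node.
session.post(f"{BIGIP}/mgmt/tm/gtm/server", json={
    "name": NODE_NAME,
    "datacenter": "/Common/dc1",
    "product": "generic-host",
    "addresses": [{"name": NODE_IP}],
    "virtualServers": [{"name": f"{NODE_NAME}_https",
                        "destination": f"{NODE_IP}:443"}],
})

# 2. Add that virtual server as a member of the A pool behind the WideIP
#    (member naming shown as server:virtual-server, adjust to your layout).
session.post(
    f"{BIGIP}/mgmt/tm/gtm/pool/a/~Common~ingress_pool/members",
    json={"name": f"/Common/{NODE_NAME}:{NODE_NAME}_https"},
)
```

Removing a node means deleting the corresponding pool member and server object again, which is exactly the part that is easy to forget or get wrong by hand.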

Can F5 CIS modify GTM pool members based on node availability?

Thanks in advance

oe-hbk · Sep 09 '21

Hi,

You did well to open an issue to ask your question!

Today, F5 Container Ingress Services relies on both LTM and GTM to handle the DNS-based configuration, so unfortunately your scenario is not supported today. It is a use case we intend to investigate for CIS (GTM only, without LTM), but it has not been prioritized yet (limited demand for it).

In your situation, CIS would have to:

  • Monitor your Kubernetes cluster, especially the nodes (NodePort)
  • Create a WideIP that distributes traffic across the different nodes

One question: how would you expect GTM to monitor the availability of the nodes/services? Just an ICMP monitor? Something more advanced?

FYI, some customers have built their own operator to handle this use case today. I don't know if it's something you've considered doing, but I thought it was worth highlighting.

nmenant · Sep 13 '21

Thanks @nmenant for your reply. I realize this may have limited demand. I'll keep an eye out for your future releases to see if or when it gets done.

The health check we do now via the manual setup is an HTTPS check, since the nginx ingress controller runs as a DaemonSet on each worker node. It can't really verify that a given service (either NodePort or ClusterIP in our case) is actually alive, but if nginx is responsive to the HTTPS check, we're reasonably sure it can route to the proper service via whatever endpoint iptables rules kube-proxy has set up to reach the correct pod(s).
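Concretely, the GTM-side piece we configure by hand looks roughly like the sketch below; the names, send/recv strings, and REST paths are illustrative, the point is just an HTTPS monitor against nginx on each node, attached to the pool behind the WideIP.

```python
# Sketch of the health check described above: a GTM HTTPS monitor aimed at the
# nginx ingress controller listening on every worker node, attached to the pool
# behind the WideIP. Monitor name, send/recv strings, and pool name are
# placeholders; in practice we only need nginx to answer on 443.
import requests

BIGIP = "https://gtm.example.com"
session = requests.Session()
session.auth = ("admin", "admin")
session.verify = False

# Any HTTP response from nginx is good enough: it proves the node can still route.
session.post(f"{BIGIP}/mgmt/tm/gtm/monitor/https", json={
    "name": "nginx_ingress_https",
    "send": "GET / HTTP/1.1\r\nHost: health.local\r\nConnection: close\r\n\r\n",
    "recv": "HTTP",
})

# Attach the monitor to the existing A pool so members are marked down
# when nginx stops answering on that node.
session.patch(
    f"{BIGIP}/mgmt/tm/gtm/pool/a/~Common~ingress_pool",
    json={"monitor": "/Common/nginx_ingress_https"},
)
```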

We will probably go with our own operator as well in the absence of this.
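In case it's useful to anyone else, this is roughly the shape we have in mind: a minimal sketch assuming the Python kubernetes client, where sync_gtm_pool() is a hypothetical placeholder for the iControl REST calls from the earlier sketch.

```python
# Rough shape of the operator: watch node events and reconcile the GTM pool
# membership whenever the set of Ready worker nodes changes.
from kubernetes import client, config, watch

def ready_internal_ips(v1: client.CoreV1Api) -> set[str]:
    """Return the InternalIP of every node currently reporting Ready."""
    ips = set()
    for node in v1.list_node().items:
        ready = any(c.type == "Ready" and c.status == "True"
                    for c in (node.status.conditions or []))
        if not ready:
            continue
        for addr in node.status.addresses or []:
            if addr.type == "InternalIP":
                ips.add(addr.address)
    return ips

def sync_gtm_pool(ips: set[str]) -> None:
    # Placeholder: add missing members / remove stale ones on the GTM pool
    # via iControl REST (see the earlier sketch).
    print(f"reconciling GTM pool members -> {sorted(ips)}")

def main() -> None:
    config.load_incluster_config()   # or load_kube_config() outside the cluster
    v1 = client.CoreV1Api()
    desired = ready_internal_ips(v1)
    sync_gtm_pool(desired)

    # Re-reconcile on every node event (add/remove/readiness change).
    for _ in watch.Watch().stream(v1.list_node):
        current = ready_internal_ips(v1)
        if current != desired:
            desired = current
            sync_gtm_pool(desired)

if __name__ == "__main__":
    main()
```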

Thanks again

oe-hbk · Sep 13 '21

Oh, and feel free to close this issue, unless you want to keep it as an enhancement request.

oe-hbk · Sep 13 '21

Before closing, @trinaths can we create a Jira for PM tracking?

mdditt2000 · Sep 14 '21

Created CONTCNTR-3028 for internal tracking.

trinaths · Nov 23 '21