netscaler-k8s-ingress-controller
ERROR - Nitro Exception while binding group member to servicegroup errorcode=258 message=No such resource
Describe the bug
CIC fails to bind the pod IP as a backend member of the corresponding VPX service group.
To Reproduce
1. Deploy the Ingress with 3 services on the backend. Two services work fine; one shows DOWN because its backend member is missing from the service group.
2. CIC Version/Image: quay.io/citrix/citrix-k8s-ingress-controller:1.37.5
3. VPX Version: 14.1.12.30
4. Environment variables (minus secrets)
Expected behavior
After deploying the Ingress, all services should show the pod IPs as service group members so that clients can reach the APIs hosted on those pods.
Logs (kubectl logs)
2024-01-15 16:05:16,123 - ERROR - [nitrointerface.py:_configure_services_nondesired:2577] (MainThread) Nitro Exception while binding group member to servicegroup k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr errorcode=258 message=No such resource [serviceGroupName, k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr]
2024-01-15 16:05:16,154 - ERROR - [nitrointerface.py:_configure_services_nondesired:2577] (MainThread) Nitro Exception while binding group member to servicegroup k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr errorcode=258 message=No such resource [serviceGroupName, k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr]
2024-01-15 16:05:16,199 - ERROR - [nitrointerface.py:_configure_services_nondesired:2577] (MainThread) Nitro Exception while binding group member to servicegroup k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr errorcode=258 message=No such resource [serviceGroupName, k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr]
2024-01-15 16:06:04,053 - ERROR - [NSProfileHandler.py:bind_cipher_with_ssl_profile:352] (MainThread) Unable to bind cipher DEFAULT to SSL profile k8s-192.168.243.49_443_ssl
2024-01-15 17:39:14,301 - ERROR - [NSProfileHandler.py:bind_cipher_with_ssl_profile:352] (MainThread) Unable to bind cipher DEFAULT to SSL profile k8s-192.168.243.49_443_ssl
2024-01-15 19:10:39,618 - ERROR - [nitrointerface.py:set_ns_config:6968] (MainThread) Nitro exception during updating csvserver: error message=Profile does not exist
2024-01-15 19:32:38,235 - ERROR - [kubernetes.py:_parse_preconfigured_certs:419] (MainThread) certkey {'name': '.Apexanalytix.com2021-2022', 'type': 'Custom_SSL_Cipher_new'} does not have correct name/type
2024-01-15 19:32:38,235 - ERROR - [kubernetes.py:_parse_preconfigured_certs:421] (MainThread) preconfigured-certkey {"certs": [ {"name": ".Apexanalytix.com2021-2022", "type": "Custom_SSL_Cipher_new"} ] } is not in correct format, It should be in below format
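NITRO errorcode=258 ("No such resource") means the bind call references a service group that does not exist on the VPX at that moment. As an illustrative sketch (not part of CIC), the failing service group names can be pulled out of these log lines with a small parser:

```python
import re

# Hypothetical helper (not part of CIC): extract the service group name
# and NITRO error code from a CIC "bind group member" error line.
LOG_RE = re.compile(
    r"binding group member to servicegroup (?P<group>\S+) "
    r"errorcode=(?P<code>\d+)"
)

def parse_bind_error(line):
    """Return (servicegroup, errorcode) or None if the line doesn't match."""
    m = LOG_RE.search(line)
    if m is None:
        return None
    return m.group("group"), int(m.group("code"))

# Sample line copied from the logs above.
sample = ("2024-01-15 16:05:16,123 - ERROR - "
          "[nitrointerface.py:_configure_services_nondesired:2577] (MainThread) "
          "Nitro Exception while binding group member to servicegroup "
          "k8s-apexportal-webservice-service_54341_sgp_g6tphz7jrhk6c72t7dyqovf7cwchlvdr "
          "errorcode=258 message=No such resource")
print(parse_bind_error(sample))
```

Running this over the full log makes it easy to see that every 258 failure points at the same service group for the one DOWN service.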
Ingress Yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: citrix
    ingress.citrix.com/frontend-ip: "192.168.."
    ingress.citrix.com/secure-service-type: "ssl"
    ingress.citrix.com/secure-port: "443"
    ingress.citrix.com/frontend-sslprofile: "HSTS2022-23"
    ingress.citrix.com/preconfigured-certkey: '{"certs": [ {"name": "..com2021-2022", "type": "default"} ] }'
  name: services-ingress
spec:
  rules:
  - host: services.**
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: -webservice-service
            port:
              number: 80
      - path: /F.V***
        pathType: Prefix
        backend:
          service:
            name: **-soapservice-service
            port:
              number: 80
      - path: /odata
        pathType: Prefix
        backend:
          service:
            name: -odata-service
            port:
              number: 80
  tls:
  - hosts:
    - *******..com
    secretName:
@Avneetdabas Could you kindly provide the YAML definition for the "apexportal-webservice-service" Kubernetes service, mainly the ports section?
We create 2 services: the ClusterIP service is for the NetScaler VPX, and the NodePort service is for our own testing. The NodePort one works fine.
apiVersion: v1
kind: Secret
metadata:
  name: XXXXXXXX-webservice
type: Opaque
data:
  RABBIT_USERNAME: XXXXXXXX
  RABBIT_PASSWORD: XXXXXXXX
apiVersion: apps/v1
kind: Deployment
metadata:
  name: XXXXXXXX-webservice
  labels:
    app: XXXXXXXX-webservice
spec:
  selector:
    matchLabels:
      app: XXXXXXXX-webservice
  replicas: 1
  template:
    metadata:
      labels:
        app: XXXXXXXX-webservice
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: XXXXXXXX-webservice
        image: XXXXXXXXXXX.XXX.XXXXXXXXXXX.com/XXXXXX_dev/XXXXXXXX.webservice:dev
        imagePullPolicy: Always
        ports:
        - containerPort: 54341
      imagePullSecrets:
      - name: regcred
apiVersion: v1
kind: Service
metadata:
  name: XXXXXXXX-webservice-nodeport
  labels:
    app: XXXXXXXX-webservice
spec:
  type: NodePort
  selector:
    app: XXXXXXXX-webservice
  ports:
  - protocol: TCP
    name: http
    port: 32003
    targetPort: 54341
apiVersion: v1
kind: Service
metadata:
  name: XXXXXXXX-webservice-service
  labels:
    app: XXXXXXXX-webservice
spec:
  type: ClusterIP
  selector:
    app: XXXXXXXX-webservice
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 54341
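As a quick sanity check on these manifests (illustrative only, values hard-coded from the YAML above with the redacted names kept as-is): the Service selector must match the pod template labels, and targetPort must match containerPort, otherwise the service group on the VPX ends up with no usable members.

```python
# Values copied from the Deployment and ClusterIP Service manifests above.
pod_labels = {"app": "XXXXXXXX-webservice"}   # Deployment pod template labels
container_port = 54341                         # containerPort in the pod spec

clusterip_service = {
    "selector": {"app": "XXXXXXXX-webservice"},
    "targetPort": 54341,
}

def selector_matches(selector, labels):
    """A selector matches when every selector key/value appears in the pod labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Both checks pass here, so the missing member is not a selector/port mismatch.
print(selector_matches(clusterip_service["selector"], pod_labels))
print(clusterip_service["targetPort"] == container_port)
```

Since both checks pass for these manifests, the missing backend member points at the controller/VPX side rather than the Kubernetes service definition.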
OK, I was able to make it work by deleting the CIC pod. But it looks like there is a bug in the latest version.