k8s-bigip-ctlr
422 declarationFullId: errors on VirtualServer creation
Setup Details
CIS Version : 2.7.1
Build: f5networks/k8s-bigip-ctlr:latest
BIGIP Version: Big IP x.x.x
AS3 Version:
f5 bigip extension as3 show-info
{
    "release": "4",
    "schemaCurrent": "3.35.0",
    "schemaMinimum": "3.0.0",
    "version": "3.35.0"
}
Agent Mode: AS3/CCCL
Orchestration: K8S/OSCP
Orchestration Version:
Pool Mode: Nodeport
Additional Setup details: <Platform/CNI Plugins/ cluster nodes/ etc>
CIS commandline arguments:
spec:
  containers:
  - args:
    - --credentials-directory
    - /tmp/creds
    - --as3-validation=false
    - --bigip-partition=K3s-my-partition
    - --bigip-url=lb.example.com
    - --custom-resource-mode=true
    - --insecure=true
    - --ipam=false
    - --log-as3-response=true
    - --log-level=DEBUG
    - --pool-member-type=nodeport
    command:
    - /app/bin/k8s-bigip-ctlr
    image: f5networks/k8s-bigip-ctlr:latest
Description
I am trying to use CIS CRDs to create the Kubernetes dashboard virtual server. I think it was working on CIS 2.3.x, but when I upgraded to 2.7.1, it stopped working with a 422 declarationFullId error.
Steps To Reproduce
apiVersion: cis.f5.com/v1
kind: TLSProfile
metadata:
  labels:
    f5cr: "true"
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
spec:
  hosts:
  - example.com
  tls:
    clientSSL: kubernetes-dashboard-certs
    reference: secret
    termination: edge
---
apiVersion: cis.f5.com/v1
kind: VirtualServer
metadata:
  labels:
    f5cr: "true"
  name: k8s-dashboard
  namespace: kubernetes-dashboard
spec:
  host: example.com
  pools:
  - monitor:
      interval: 15
      recv: ""
      send: "GET /oauth/health"
      timeout: 5
      type: http
  - path: /
    service: kubernetes-dashboard-proxy
    servicePort: 3000
  tlsProfileName: kubernetes-dashboard-certs
  httpTraffic: redirect
  virtualServerAddress: x.x.x.x
  virtualServerHTTPSPort: 443
  #virtualServerName: k8s-dashboard
Expected Result
The virtual server and pools should be created on the BIG-IP.
Actual Result
The virtual server and pool are not created due to the 422 error.
Partial error logs:
....
2022/03/13 20:04:58 [DEBUG] [CCCL] ConfigWriter (0xc000635620) writing section name gtm
2022/03/13 20:04:58 [DEBUG] [CCCL] ConfigWriter (0xc000635620) successfully wrote section (gtm)
2022/03/13 20:04:58 [DEBUG] Wrote gtm config section: map[]
2022/03/13 20:04:58 [DEBUG] [AS3] PostManager Accepted the configuration
2022/03/13 20:04:58 [DEBUG] [AS3] posting request to https://lb.example.com/mgmt/shared/appsvcs/declare/
2022/03/13 20:05:01 [ERROR] [AS3] Big-IP Responded with code: 422
2022/03/13 20:05:01 [ERROR] [AS3] Raw response from Big-IP: map[code:422 declarationFullId: errors:[/K3s-my-partition/Shared: propertyName "_0_kubernetes_dashboard" should match pattern "^[A-Za-z]([0-9A-Za-z_.-]{0,188}[0-9A-Za-z_.])?$"] message:declaration is invalid]
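For context, the pattern in the 422 response is the AS3 property-name schema regex, and the generated name `_0_kubernetes_dashboard` can never match it because the pattern requires a leading letter. A quick standalone check (the regex and the failing name are copied verbatim from the log above; the second name is just an illustrative valid one):

```python
import re

# Property-name pattern copied verbatim from the 422 response.
AS3_NAME = re.compile(r"^[A-Za-z]([0-9A-Za-z_.-]{0,188}[0-9A-Za-z_.])?$")

# The name CIS generated starts with "_", so it is rejected:
print(bool(AS3_NAME.match("_0_kubernetes_dashboard")))  # False
# A name starting with a letter is accepted:
print(bool(AS3_NAME.match("k8s_dashboard")))            # True
```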
Diagnostic Information
It doesn't matter whether I set virtualServerName: k8s-dashboard or not; the controller seems to generate a name _0_kubernetes_dashboard that does not appear anywhere in the Kubernetes VirtualServer configuration spec.
Error logs:
2022/03/13 20:22:59 [DEBUG] [CORE] NodePoller (0xc0000206c0) ready to poll, last wait: 30s
2022/03/13 20:22:59 [DEBUG] [CORE] NodePoller (0xc0000206c0) notifying listener: {l:0xc0002f02a0 s:0xc0002f0300}
2022/03/13 20:22:59 [DEBUG] [CORE] NodePoller (0xc0000206c0) listener callback - num items: 2 err: <nil>
2022/03/13 20:22:59 [DEBUG] Processing Node Updates
2022/03/13 20:22:59 [DEBUG] Processing Key: &{kubernetes-dashboard VirtualServer k8s-dashboard 0xc0001f8f00 false}
2022/03/13 20:22:59 [DEBUG] Process all the Virtual Servers which share same VirtualServerAddress
2022/03/13 20:22:59 [DEBUG] Processing Virtual Server k8s-dashboard for port 443
2022/03/13 20:22:59 [DEBUG] Configured rule: {vs_example.com_kubernetes_dashboard_proxy_3000_kubernetes_dashboard example.com 0 [0xc000757980] [0xc00083f0e0]}
2022/03/13 20:22:59 [DEBUG] Configured policy: {crd_x_x_x_443_example.com_policy kubernetes-dashboard [forwarding] true [http] [0xc0007579e0] /Common/first-match}
2022/03/13 20:22:59 [DEBUG] clientSSL secret kubernetes-dashboard-certs for TLSProfile 'kubernetes-dashboard-certs' is already available with CIS in SSLContext as clientSSL
2022/03/13 20:22:59 [DEBUG] Updated Virtual k8s-dashboard with TLSProfile kubernetes-dashboard-certs
2022/03/13 20:22:59 [DEBUG] Requested service backend kubernetes-dashboard/ not of NodePort or LoadBalancer type
2022/03/13 20:22:59 [DEBUG] Processing Virtual Server k8s-dashboard for port 80
2022/03/13 20:22:59 [DEBUG] Applying HTTP redirect iRule.
2022/03/13 20:22:59 [DEBUG] Redirect HTTP(insecure) requests for VirtualServer k8s-dashboard
2022/03/13 20:22:59 [DEBUG] Updated Virtual k8s-dashboard with TLSProfile kubernetes-dashboard-certs
2022/03/13 20:22:59 [DEBUG] Requested service backend kubernetes-dashboard/ not of NodePort or LoadBalancer type
2022/03/13 20:22:59 [DEBUG] Finished syncing virtual servers &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:k8s-dashboard GenerateName: Namespace:kubernetes-dashboard SelfLink: UID:19bb6d97-6afb-484a-8f80-f7a9d1731019 ResourceVersion:126620341 Generation:1 CreationTimestamp:2022-03-13 19:22:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[f5cr:true] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"cis.f5.com/v1","kind":"VirtualServer","metadata":{"annotations":{},"labels":{"f5cr":"true"},"name":"k8s-dashboard","namespace":"kubernetes-dashboard"},"spec":{"host":"example.com","httpTraffic":"redirect","pools":[{"monitor":{"interval":15,"recv":"","send":"GET /oauth/health","timeout":5,"type":"http"}},{"path":"/","service":"kubernetes-dashboard-proxy","servicePort":3000}],"tlsProfileName":"kubernetes-dashboard-certs","virtualServerAddress":"x.x.x.x","virtualServerHTTPSPort":443}}
] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[{Manager:kubectl-client-side-apply Operation:Update APIVersion:cis.f5.com/v1 Time:2022-03-13 19:22:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:f5cr":{}}},"f:spec":{".":{},"f:host":{},"f:httpTraffic":{},"f:pools":{},"f:tlsProfileName":{},"f:virtualServerAddress":{},"f:virtualServerHTTPSPort":{}}}}]} Spec:{Host:example.com HostGroup: VirtualServerAddress:x.x.x.x IPAMLabel: VirtualServerName: VirtualServerHTTPPort:0 VirtualServerHTTPSPort:443 Pools:[{Path: Service: ServicePort:0 NodeMemberLabel: Monitor:{Type:http Send:GET /oauth/health Recv: Interval:15 Timeout:5} Rewrite:} {Path:/ Service:kubernetes-dashboard-proxy ServicePort:3000 NodeMemberLabel: Monitor:{Type: Send: Recv: Interval:0 Timeout:0} Rewrite:}] TLSProfileName:kubernetes-dashboard-certs HTTPTraffic:redirect SNAT: WAF: RewriteAppRoot: AllowVLANs:[] IRules:[] ServiceIPAddress:[] PolicyName:} Status:{VSAddress: StatusOk:}} (67.231127ms)
2022/03/13 20:22:59 [DEBUG] [CCCL] ConfigWriter (0xc000635620) writing section name gtm
2022/03/13 20:22:59 [DEBUG] [CCCL] ConfigWriter (0xc000635620) successfully wrote section (gtm)
2022/03/13 20:22:59 [DEBUG] Wrote gtm config section: map[]
2022/03/13 20:22:59 [DEBUG] [AS3] PostManager Accepted the configuration
2022/03/13 20:22:59 [DEBUG] [AS3] posting request to https://lb.example.com/mgmt/shared/appsvcs/declare/
2022/03/13 20:23:05 [ERROR] [AS3] Big-IP Responded with code: 422
2022/03/13 20:23:05 [ERROR] [AS3] Raw response from Big-IP: map[code:422 declarationFullId: errors:[/K3s-my-partition/Shared: propertyName "_0_kubernetes_dashboard" should match pattern "^[A-Za-z]([0-9A-Za-z_.-]{0,188}[0-9A-Za-z_.])?$"] message:declaration is invalid]
2022/03/13 20:23:25 [DEBUG] [2022-03-13 20:23:25,119 __main__ DEBUG] config handler woken for reset
2022/03/13 20:23:25 [DEBUG] [2022-03-13 20:23:25,119 __main__ DEBUG] loaded configuration file successfully
2022/03/13 20:23:25 [DEBUG] [2022-03-13 20:23:25,119 __main__ DEBUG] NET Config: {}
2022/03/13 20:23:25 [DEBUG] [2022-03-13 20:23:25,119 __main__ DEBUG] loaded configuration file successfully
2022/03/13 20:23:25 [DEBUG] [2022-03-13 20:23:25,120 __main__ DEBUG] updating tasks finished, took 0.0016894340515136719 seconds
2022/03/13 20:23:29 [DEBUG] [CORE] NodePoller (0xc0000206c0) ready to poll, last wait: 30s
2022/03/13 20:23:29 [DEBUG] [CORE] NodePoller (0xc0000206c0) notifying listener: {l:0xc0002f02a0 s:0xc0002f0300}
2022/03/13 20:23:29 [DEBUG] [CORE] NodePoller (0xc0000206c0) listener callback - num items: 2 err: <nil>
2022/03/13 20:23:35 [DEBUG] [AS3] posting request to https://lb.example.com/mgmt/shared/appsvcs/declare/
2022/03/13 20:23:42 [ERROR] [AS3] Big-IP Responded with code: 422
2022/03/13 20:23:42 [ERROR] [AS3] Raw response from Big-IP: map[code:422 declarationFullId: errors:[/K3s-my-partition/Shared: propertyName "_0_kubernetes_dashboard" should match pattern "^[A-Za-z]([0-9A-Za-z_.-]{0,188}[0-9A-Za-z_.])?$"] message:declaration is invalid]
2022/03/13 20:23:55 [DEBUG] [2022-03-13 20:23:55,121 __main__ DEBUG] config handler woken for reset
2022/03/13 20:23:55 [DEBUG] [2022-03-13 20:23:55,122 __main__ DEBUG] loaded configuration file successfully
2022/03/13 20:23:55 [DEBUG] [2022-03-13 20:23:55,122 __main__ DEBUG] NET Config: {}
2022/03/13 20:23:55 [DEBUG] [2022-03-13 20:23:55,122 __main__ DEBUG] loaded configuration file successfully
2022/03/13 20:23:55 [DEBUG] [2022-03-13 20:23:55,122 __main__ DEBUG] updating tasks finished, took 0.0008292198181152344 seconds
2022/03/13 20:23:59 [DEBUG] [CORE] NodePoller (0xc0000206c0) ready to poll, last wait: 30s
2022/03/13 20:23:59 [DEBUG] [CORE] NodePoller (0xc0000206c0) notifying listener: {l:0xc0002f02a0 s:0xc0002f0300}
2022/03/13 20:23:59 [DEBUG] [CORE] NodePoller (0xc0000206c0) listener callback - num items: 2 err: <nil>
2022/03/13 20:24:12 [DEBUG] [AS3] posting request to https://lb.example.com/mgmt/shared/appsvcs/declare/
2022/03/13 20:24:15 [ERROR] [AS3] Big-IP Responded with code: 422
2022/03/13 20:24:15 [ERROR] [AS3] Raw response from Big-IP: map[code:422 declarationFullId: errors:[/K3s-my-partition/Shared: propertyName "_0_kubernetes_dashboard" should match pattern "^[A-Za-z]([0-9A-Za-z_.-]{0,188}[0-9A-Za-z_.])?$"] message:declaration is invalid]
2022/03/13 20:24:25 [DEBUG] [2022-03-13 20:24:25,123 __main__ DEBUG] config handler woken for reset
2022/03/13 20:24:25 [DEBUG] [2022-03-13 20:24:25,123 __main__ DEBUG] loaded configuration file successfully
2022/03/13 20:24:25 [DEBUG] [2022-03-13 20:24:25,123 __main__ DEBUG] NET Config: {}
2022/03/13 20:24:25 [DEBUG] [2022-03-13 20:24:25,123 __main__ DEBUG] loaded configuration file successfully
2022/03/13 20:24:25 [DEBUG] [2022-03-13 20:24:25,124 __main__ DEBUG] updating tasks finished, took 0.0008761882781982422 seconds
2022/03/13 20:24:29 [DEBUG] [CORE] NodePoller (0xc0000206c0) ready to poll, last wait: 30s
2022/03/13 20:24:29 [DEBUG] [CORE] NodePoller (0xc0000206c0) notifying listener: {l:0xc0002f02a0 s:0xc0002f0300}
2022/03/13 20:24:29 [DEBUG] [CORE] NodePoller (0xc0000206c0) listener callback - num items: 2 err: <nil>
The VS and TLSProfile seem to have been created alright:
$ kubectl get vs -o yaml
apiVersion: v1
items:
- apiVersion: cis.f5.com/v1
  kind: VirtualServer
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"cis.f5.com/v1","kind":"VirtualServer","metadata":{"annotations":{},"labels":{"f5cr":"true"},"name":"k8s-dashboard","namespace":"kubernetes-dashboard"},"spec":{"host":"example.com","httpTraffic":"redirect","pools":[{"monitor":{"interval":15,"recv":"","send":"GET /oauth/health","timeout":5,"type":"http"}},{"path":"/","service":"kubernetes-dashboard-proxy","servicePort":3000}],"tlsProfileName":"kubernetes-dashboard-certs","virtualServerAddress":"x.x.x.x","virtualServerHTTPSPort":443}}
    creationTimestamp: "2022-03-13T19:22:01Z"
    generation: 1
    labels:
      f5cr: "true"
    name: k8s-dashboard
    namespace: kubernetes-dashboard
    resourceVersion: "126620341"
    uid: 19bb6d97-6afb-484a-8f80-f7a9d1731019
  spec:
    host: example.com
    httpTraffic: redirect
    pools:
    - monitor:
        interval: 15
        recv: ""
        send: GET /oauth/health
        timeout: 5
        type: http
    - path: /
      service: kubernetes-dashboard-proxy
      servicePort: 3000
    tlsProfileName: kubernetes-dashboard-certs
    virtualServerAddress: x.x.x.x
    virtualServerHTTPSPort: 443
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
$ kubectl get TLSProfile
NAME AGE
kubernetes-dashboard-certs 51m
Observations (if any)
See the partial controller logs above. Not sure if the gtm config section has anything to do with it; we are using LTM, not GTM.
Looking into this.
- GTM on BIG-IP is fine, no issues
- CRD names are fine, no issues
- Hostname is fine, no issues
- Namespace kubernetes-dashboard is fine, no issues
- Port 3000 has no issues creating the VS
- Redirect is fine
- Service name is fine
I don't see the same issue.
Please use my CRDs at https://github.com/mdditt2000/kubernetes-1-19/tree/master/cis%202.8/github/2286/split/working
Please use CIS test image located here https://github.com/mdditt2000/kubernetes-1-19/blob/master/cis%202.8/github/2286/f5-bigip-ctlr-deployment.yaml
This should work fine!!
@mdditt2000
Compared with your dashboard CRD, I found a syntax error in my pools definition, although it was not reported on deployment:
- path: /
  service: kubernetes-dashboard-proxy
  servicePort: 3000
The leading dash on path is causing the propertyName generation error, I think; path should be at the same level as monitor, inside the same pool entry, rather than starting a second entry. Removing the leading dash AND using the new 2.8.0 image works!
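For reference, this is the shape of the pools block with the dash removed, so that path, service, servicePort, and monitor form a single pool entry (field values taken from my CRDs above; this is the version that worked for me):

```yaml
pools:
- path: /
  service: kubernetes-dashboard-proxy
  servicePort: 3000
  monitor:
    interval: 15
    recv: ""
    send: "GET /oauth/health"
    timeout: 5
    type: http
```

With the extra dash, YAML parses pools as two entries: one containing only a monitor and no service, which is presumably where the malformed _0_kubernetes_dashboard name came from.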
I retested CIS 2.7.1 and it works too! Should I stay on 2.8.0 (not released yet), or revert to 2.7.1?
Thanks! Great help!
@xueshanf do you have 30 min to chat? I am based in San Jose, CA. I would like to understand your use case and chat about 2.7.x versus 2.8.
Recommend validating with the latest CIS release.