kube-keepalived-vip
How to expose ingress-nginx from Rancher over VRRP
I have Kubernetes on Rancher 2.0 and I need to expose ingress-nginx through keepalived-vip. This is my configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
  namespace: default
data:
  172.XXX.YYY.60: istio-system/istio-ingressgateway
  172.XXX.YYY.61: ingress-nginx/ingress-nginx
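For context on what this ConfigMap drives: kube-keepalived-vip renders each `data` entry into a keepalived `virtual_server` block for the VIP, with the target service's endpoints as real servers. An illustrative sketch of the generated config is below; the authoritative version lives in `/etc/keepalived/keepalived.conf` inside the vip pod, and the pod IP and scheduler values here are hypothetical:

```
virtual_server 172.XXX.YYY.61 80 {
  delay_loop 5
  lvs_sched wlc        # scheduler chosen by the controller; may differ
  lvs_method NAT
  protocol TCP

  real_server 10.42.0.17 8080 {   # hypothetical pod endpoint
    weight 1
    TCP_CHECK {
      connect_port 8080
      connect_timeout 3
    }
  }
}
```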
I created the service ingress-nginx/ingress-nginx with this config:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
    name: http
  selector:
    app: ingress-nginx
  sessionAffinity: None
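Note that the original manifest had two `selector` keys (one using the `app.kubernetes.io/*` labels, one using `app: ingress-nginx`); duplicate keys are invalid YAML, and only `app: ingress-nginx` matches the controller pods, so the duplicate is dropped above. Before suspecting keepalived, it is worth confirming the service actually resolves to endpoints. A quick check, assuming `kubectl` access to the cluster and the names above:

```shell
# Pods carrying the label the Service selects
kubectl -n ingress-nginx get pods -l app=ingress-nginx -o wide

# The Service should list those pod IPs as endpoints;
# an empty ENDPOINTS column means the selector does not match
kubectl -n ingress-nginx get endpoints ingress-nginx
```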
From Rancher I have the DaemonSet ingress-nginx:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
    field.cattle.io/publicEndpoints: '[{"addresses":["172.XXX.YYY.7"],"port":32529,"protocol":"TCP","serviceName":"ingress-nginx:ingress-nginx","allNodes":true},{"nodeName":"local:machine-n2djs","addresses":["172.XXX.YYY.7"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-mdnzl","allNodes":false},{"nodeName":"local:machine-n2djs","addresses":["172.XXX.YYY.7"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-mdnzl","allNodes":false},{"nodeName":"local:machine-rgcqp","addresses":["172.XXX.YYY.15"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-624r9","allNodes":false},{"nodeName":"local:machine-rgcqp","addresses":["172.XXX.YYY.15"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-624r9","allNodes":false},{"nodeName":"local:machine-w24dw","addresses":["172.XXX.YYY.31"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-j8zj9","allNodes":false},{"nodeName":"local:machine-w24dw","addresses":["172.XXX.YYY.31"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-j8zj9","allNodes":false},{"nodeName":"local:machine-6fbkn","addresses":["172.XXX.YYY.14"],"port":80,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-2blxc","allNodes":false},{"nodeName":"local:machine-6fbkn","addresses":["172.XXX.YYY.14"],"port":443,"protocol":"TCP","podName":"ingress-nginx:nginx-ingress-controller-2blxc","allNodes":false}]'
  creationTimestamp: "2019-01-08T14:38:01Z"
  generation: 1
  labels:
    app: ingress-nginx
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "23157063"
  selfLink: /apis/apps/v1/namespaces/ingress-nginx/daemonsets/nginx-ingress-controller
  uid: ffde9e35-1352-11e9-99a8-da1a547e3557
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app: ingress-nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: NotIn
                values:
                - windows
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: rancher/nginx-ingress-controller:0.16.2-rancher1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-serviceaccount
      serviceAccountName: nginx-ingress-serviceaccount
      terminationGracePeriodSeconds: 30
  updateStrategy:
    type: OnDelete
status:
  currentNumberScheduled: 4
  desiredNumberScheduled: 4
  numberAvailable: 4
  numberMisscheduled: 0
  numberReady: 4
  observedGeneration: 1
  updatedNumberScheduled: 4
When I send an HTTP request to a node IP, everything works (yes, this is the answer from the app):
curl -v -H 'Host: iron.XXX' http://172.XXX.YYY.14
* Rebuilt URL to: http://172.XXX.YYY.14/
* Trying 172.XXX.YYY.14...
* Connected to 172.XXX.YYY.14 (172.XXX.YYY.14) port 80 (#0)
> GET / HTTP/1.1
> Host: iron.XXX
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 400 Bad Request
< Server: nginx/1.13.12
< Date: Thu, 11 Apr 2019 16:03:42 GMT
< Content-Type: application/json;charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
<
* Connection #0 to host 172.XXX.YYY.14 left intact
{"timestamp":"2019-04-11T16:03:42.312+0000","status":400,"error":"Bad Request","message":"The token is missing","path":"/"}
But when I try the same against the VIP, I get a 404:
curl -v -H 'Host: iron.XXX' http://172.XXX.YYY.61
* Rebuilt URL to: http://172.XXX.YYY.61/
* Trying 172.XXX.YYY.61...
* Connected to 172.XXX.YYY.61 (172.XXX.YYY.61) port 80 (#0)
> GET / HTTP/1.1
> Host: iron.XXX
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 11 Apr 2019 16:07:24 GMT
< Content-Length: 19
<
404 page not found
* Connection #0 to host 172.XXX.YYY.61 left intact
What should I set to get the proper answer from the ingress?
^ @aledbf You're the ingress-nginx expert 😄
Not ideal, but a solution. The problem is with the keepalived-vip internal load balancer: if you give your service ports 80 and 8080, keepalived balances all traffic on all nodes to port 8080, which is the default-http-backend.
So you should use some unused port and create a "fake" service; then it works fine!
Example of service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/creatorId: user-xnk2m
    field.cattle.io/ipAddresses: "null"
    field.cattle.io/targetDnsRecordIds: "null"
    field.cattle.io/targetWorkloadIds: "null"
  creationTimestamp: "2019-08-15T07:35:52Z"
  labels:
    cattle.io/creator: norman
  name: fake-ingress-nginx
  namespace: ingress-nginx
  resourceVersion: "11402054"
  selfLink: /api/v1/namespaces/ingress-nginx/services/ingress-nginx
  uid: 4f502845-bf2f-11e9-adf6-fa5dfa1eaaf6
spec:
  clusterIP: 10.43.181.3
  ports:
  - name: http
    port: 65530
    protocol: TCP
    targetPort: 8080
  selector:
    app: default-http-backend
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}