
Ingress is not serving on service's targetPort

Meroje opened this issue 6 years ago • 13 comments

Hi, given the following set of manifests (shortened to the relevant parts):

kind: Ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: foo-app
          servicePort: http
        path: /

---
kind: Service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http

---
kind: Deployment
spec:
  template:
    spec:
      containers:
      - ports:
        - name: http
          containerPort: 8080
          protocol: TCP

the resulting backend is

backend default-app-foo-80
  mode http
  balance roundrobin
  option forwardfor
  server SRV_qZUbo 100.96.3.89:80 disabled check weight 128

Since it is using pod IPs directly, it should use port 8080; port 80 would only apply to a call to app-foo.default.cluster.local, which kube-proxy would then translate to 8080.
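
For comparison, a sketch of what the backend should presumably look like when addressing the pod directly; it is the same server line as above with only the port changed to the containerPort:

backend default-app-foo-80
  mode http
  balance roundrobin
  option forwardfor
  # pod IP paired with the containerPort (8080) instead of the Service port (80)
  server SRV_qZUbo 100.96.3.89:8080 check weight 128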

Meroje · Jun 25 '19 10:06

Confirming I saw this as well while evaluating.

jhohertz · Jun 27 '19 15:06

A named targetPort in the Service is not supported yet.

However, in the example above the Ingress has servicePort: http, which means the path should go to the service foo-app on its named port http. Looking at the Service, it has a port named http, which is port 80 with targetPort: http. If that target port had been 8080, it would have worked.

I understand that Ingress allows defining ports in multiple ways, and some of them are not supported yet.
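
In the meantime, a workaround sketch is to make the Service point at the numeric containerPort directly, so the controller does not have to resolve the port name:

kind: Service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080  # numeric containerPort instead of the named port "http"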

oktalz · Jul 03 '19 07:07

@Meroje I don't think it's a good idea to put the Service into the server line, as the Service is just another abstraction and increases latency, imho.

git001 · Jul 04 '19 12:07

@git001 there is an issue, however, with how named ports are handled, and issues similar to this one will be resolved soon (next week). client-go, which is used here, has finally been released with Go modules support, so it can be used; with that, the Endpoints part seems to be working as expected now, so we can switch to it.

oktalz · Jul 04 '19 16:07

As of version 1.1.2, the controller is using the Endpoints API from k8s, and named ports are also supported.
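
Roughly, the controller now reads the already-resolved container port from the Endpoints object rather than from the Service spec. For the manifests at the top of this issue, that object would look something like this (IP illustrative):

kind: Endpoints
apiVersion: v1
metadata:
  name: foo-app
subsets:
- addresses:
  - ip: 100.96.3.89
  ports:
  - name: http
    port: 8080      # already resolved to the containerPort
    protocol: TCP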

oktalz · Jul 09 '19 08:07

Cool, thanks.

git001 · Jul 09 '19 08:07

I ran into this issue today, so it doesn't seem solved. My Service is exposing port 80, and the Deployment is using port 3000. HAProxy tried to use the Pod IP with the Service port (which obviously failed). I used this manifest to install HAProxy.

As someone who has worked with Kubernetes for the past 2-3 years, I was very surprised that HAProxy was "skipping" the Service and going straight for the Pods. I understand that this can provide some benefits through HAProxy-specific features. However, I think it should be opt-in (or opt-out, but with a big note), especially if there are unsupported features like named ports.

One more comment: I'm no expert on networking, so please correct me if I'm wrong, but I don't think there are any extra hops (maybe some latency) due to Kubernetes Services. They use iptables or IPVS as documented here. On the other hand, you risk losing out on Kubernetes-native features by bypassing Services. For example, there is work on topology-aware routing, which would make it possible to try to keep network traffic within zones or on the same node. I guess this would be quite a lot of (duplicate) work to add to HAProxy.

These are the resources I was using when I ran into this:

apiVersion: v1
kind: Service
metadata:
  name: gitea
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
spec:
  template:
    spec:
      containers:
      - name: gitea
        image: gitea/gitea:1.9.4
        ports:
          - name: http
            containerPort: 3000
          - name: ssh
            containerPort: 22
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gitea
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    haproxy.org/check: "enabled"
    haproxy.org/check-http: "GET /"
    ingress.kubernetes.io/ingress.class: haproxy
spec:
  tls:
    - secretName: gitea-cert
      hosts:
        - gitea.jern.me
  rules:
    - host: gitea.jern.me
      http:
        paths:
          - backend:
              serviceName: gitea
              servicePort: http
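
With these manifests, the generated backend ends up pairing the Pod IP with the Service port, roughly like the one reported at the top of this issue (backend name, server name, and IP illustrative):

backend default-gitea-80
  mode http
  # Pod IP with the Service port 80, while the container actually listens on 3000
  server SRV_xxxxx 10.244.0.12:80 check weight 128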

lentzi90 · Oct 27 '19 09:10

I don't know of any ingress controller that doesn't talk to endpoints.

Meroje · Oct 27 '19 12:10

@Meroje what do you mean? Are all ingress controllers checking endpoints? In that case I'm sorry for criticizing HAProxy, but I have never run into a bug like this before, and it made me quite worried about HAProxy as an ingress controller.

lentzi90 · Oct 27 '19 12:10

While using version 1.2.4 of the haproxy-ingress-controller, it appears that having a Service definition with a named targetPort isn't working just yet.

kind: Deployment
apiVersion: apps/v1

metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: jmalloc/echo-server
        ports:
        - name: pod-http
          containerPort: 8080
---
kind: Service
apiVersion: v1

metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
  - name: http
    port: 80
    targetPort: pod-http # works with 8080
---
kind: Ingress
apiVersion: networking.k8s.io/v1beta1

metadata:
  name: echo-ing
spec:
  rules:
  - host: echo-host
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-svc
          servicePort: http

If the targetPort is changed to use the numeric containerPort value of 8080, it works great.

Perhaps related, perhaps not, there's an added quirk. As has already been mentioned in this thread, the servicePort within the Ingress definition can be changed to pod-http almost without regard to the Service settings:

kind: Deployment
apiVersion: apps/v1

metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: jmalloc/echo-server
        ports:
        - name: pod-http
          containerPort: 8080
---
kind: Service
apiVersion: v1

metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
  - name: http
    port: 81 # unused?
    targetPort: 8080
---
kind: Ingress
apiVersion: networking.k8s.io/v1beta1

metadata:
  name: echo-ing
spec:
  rules:
  - host: echo-host
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-svc
          servicePort: pod-http

This ingress controller is great and I'm glad you guys are making it happen. I'm just putting this out there for anyone who finds the same issue.

joliver · Nov 03 '19 04:11

Thanks @joliver and @lentzi90 for reporting this. This commit should make it available. You can give it a try by pulling the dev tag image; let me know if this completely answers your requirements.
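
If you want to test it, one way is to point the controller Deployment at the dev tag; the container and image names below are assumptions and depend on how the controller was installed:

spec:
  template:
    spec:
      containers:
      - name: kubernetes-ingress                 # hypothetical container name
        image: haproxytech/kubernetes-ingress:dev  # dev image with the fix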

Mo3m3n · Nov 12 '19 15:11

@Mo3m3n the commit solved it for me.

joliver · Nov 12 '19 16:11

Working for me as well, thanks!

lentzi90 · Nov 13 '19 17:11