
gRPC :authority pseudo-header is set to upstream_balancer when service-upstream is true

lapwingcloud opened this issue 3 years ago • 12 comments

What happened:

I configured a gRPC Ingress like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio
  labels:
    app: fortio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio
  template:
    metadata:
      labels:
        app: fortio
    spec:
      containers:
      - name: fortio
        image: fortio/fortio:1.34.1
        ports:
        - containerPort: 8079
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: fortio-grpc
spec:
  selector:
    app: fortio
  ports:
    - name: grpc
      appProtocol: grpc
      protocol: TCP
      port: 80
      targetPort: 8079
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fortio-grpc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: jchen-test-fortio-grpc.foobar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fortio-grpc
            port:
              number: 80

I also did Istio injection into the ingress-nginx controller, with this chart config:

  controller:
    service:
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    annotations:
      traffic.sidecar.istio.io/includeInboundPorts: ""
    admissionWebhooks:
      patch:
        podAnnotations:
          sidecar.istio.io/inject: "false"

When I make a gRPC request from my local machine to the ingress:

grpcurl -insecure -import-path . -proto ping.proto -authority jchen-test-fortio-grpc.foobar.com 10.45.0.203:443 fgrpc.PingServer.Ping

I found this in the Istio sidecar logs:

[2022-07-20T05:42:02.195Z] "POST /fgrpc.PingServer/Ping HTTP/2" 200 - via_upstream - "-" 5 15 2 1 "127.0.0.6" "grpcurl/v1.8.6 grpc-go/1.44.1-dev" "4ae88dc0ce0b18ca6cb850137f1c5f00" "upstream_balancer" "10.0.47.213:80" PassthroughCluster 10.45.1.160:33414 10.0.47.213:80 127.0.0.6:0 - allow_any

The :authority is upstream_balancer, which causes the request to be treated as PassthroughCluster instead of being routed inside the service mesh.

This is the corresponding NGINX access log:

127.0.0.6 - - [20/Jul/2022:05:42:02 +0000] "POST /fgrpc.PingServer/Ping HTTP/2.0" 200 15 "-" "grpcurl/v1.8.6 grpc-go/1.44.1-dev" 119 0.003 [jchen-test-fortio-grpc-80] [] 10.0.47.213:80 57 0.004 200 4ae88dc0ce0b18ca6cb850137f1c5f00
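
For context, the relevant part of the NGINX config that ingress-nginx generates for a GRPC backend looks roughly like this (a simplified sketch, not the verbatim nginx.tmpl output; upstream_balancer is the controller's shared Lua-balanced upstream):

location / {
    grpc_pass grpc://upstream_balancer;
    # nothing overrides Host/:authority here, so nginx's grpc module derives
    # :authority from the grpc_pass target, i.e. the literal string
    # "upstream_balancer"
}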

What you expected to happen:

The :authority pseudo-header should be jchen-test-fortio-grpc.foobar.com instead of upstream_balancer.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

kubectl exec -it jchen-test-ingress-nginx-controller-876d8646f-spkdn -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.3.0
  Build:         2b7b74854d90ad9b4b96a5011b9e8b67d20bfb8f
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"0ce33d4c6cd3c838648f245dba25f78a2a427fac", GitTreeState:"clean", BuildDate:"2022-06-17T15:23:06Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Azure AKS
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools: helm + argocd (ingress-nginx chart version 4.2.0)
    • Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
  • Basic cluster related info:
    • kubectl version
    • kubectl get nodes -o wide
kubectl get nodes -o wide
NAME                                   STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-general-24046850-vmss000000        Ready    agent   34d   v1.22.4   10.45.1.52    <none>        Ubuntu 18.04.6 LTS   5.4.0-1080-azure   containerd://1.5.11+azure-1
aks-general-24046850-vmss000001        Ready    agent   34d   v1.22.4   10.45.1.153   <none>        Ubuntu 18.04.6 LTS   5.4.0-1085-azure   containerd://1.5.11+azure-1
aks-general-24046850-vmss000002        Ready    agent   34d   v1.22.4   10.45.1.254   <none>        Ubuntu 18.04.6 LTS   5.4.0-1080-azure   containerd://1.5.11+azure-1
aks-gitlabrunner-27009483-vmss000000   Ready    agent   15d   v1.22.4   10.45.0.104   <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
aks-system-40813308-vmss000000         Ready    agent   34d   v1.22.4   10.45.0.5     <none>        Ubuntu 18.04.6 LTS   5.4.0-1080-azure   containerd://1.5.11+azure-1
aks-system-40813308-vmss000002         Ready    agent   34d   v1.22.4   10.45.0.207   <none>        Ubuntu 18.04.6 LTS   5.4.0-1080-azure   containerd://1.5.11+azure-1
aks-system-40813308-vmss000004         Ready    agent   34d   v1.22.4   10.45.2.186   <none>        Ubuntu 18.04.6 LTS   5.4.0-1080-azure   containerd://1.5.11+azure-1

How to reproduce this issue:

You can use this repository to reproduce the issue: https://github.com/jchenship/yages/tree/jchen-reproduce-nginx-grpc-authority

Key steps:

Install ingress-nginx in minikube:

minikube start
minikube addons enable ingress

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx
echo "waiting the ingress nginx to be ready"
kubectl wait pod -l app.kubernetes.io/name=ingress-nginx --for=condition=ready --timeout=300s

Build a gRPC server that can echo the :authority header,

and apply these manifests in Kubernetes (a sample grpcurl query follows the manifests):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: yages
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  tls:
    - secretName: yages-tls
      hosts:
        - yages.example.com
  rules:
    - host: yages.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: yages
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: yages
spec:
  selector:
    app: yages
  ports:
    - name: grpc
      appProtocol: grpc
      protocol: TCP
      port: 80
      targetPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yages
  labels:
    app: yages
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yages
  template:
    metadata:
      labels:
        app: yages
    spec:
      containers:
        - name: yages
          image: jchenship/yages:0.1.0
          ports:
            - containerPort: 9000
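
With the controller and these manifests in place, a query along these lines reproduces the wrong :authority (the host, proto file, and service name come from the linked repo; the address placeholder is whatever ingress-nginx is exposed on in your minikube setup):

grpcurl -insecure -proto yages-schema.proto -authority yages.example.com <ingress-nginx-address>:443 yages.Echo.Ping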

Anything else we need to know:

N/A

lapwingcloud avatar Jul 20 '22 06:07 lapwingcloud

@jchenship: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Jul 20 '22 06:07 k8s-ci-robot

Created a repository to reproduce the issue: https://github.com/jchenship/yages/tree/jchen-reproduce-nginx-grpc-authority

lapwingcloud avatar Jul 20 '22 08:07 lapwingcloud

This actually might be an NGINX issue; it can be reproduced with this nginx.conf:

user  nginx;
worker_processes  1;

error_log  /dev/stdout debug;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_host" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;
    server {
        listen 9001 ssl http2;

        ssl_certificate /etc/nginx/ca.crt;
        ssl_certificate_key /etc/nginx/ca.key;

        location / {
            grpc_pass grpc://127.0.0.1:9000;
        }
    }
}

When making the following query, the :authority received by the server is 127.0.0.1:9000:

grpcurl -insecure -proto yages-schema.proto -authority yages.example.com localhost:9001 yages.Echo.Ping
{
  "text": "127.0.0.1:9000"
}

If I add grpc_set_header Host $http_host, the :authority header goes missing upstream, the same behavior mentioned in https://github.com/kubernetes/ingress-nginx/issues/3706.
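
For clarity, that directive sits in the location block of the minimal config above, roughly like this (a sketch of the variant that was tested, not a recommended configuration):

location / {
    # forward the client's Host header to the upstream; in the test above
    # this made the :authority pseudo-header disappear upstream
    grpc_set_header Host $http_host;
    grpc_pass grpc://127.0.0.1:9000;
}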

I searched around and didn't find an official way to use grpc_set_header to set the :authority header, but I found that Kong resolves this with a Lua script: https://github.com/Kong/kong/pull/6603

lapwingcloud avatar Jul 20 '22 11:07 lapwingcloud

/remove-kind bug

  • Do you see the same problem (wrong :authority header) if you try the gRPC example from the docs?
  • It's not related, but don't you need TLS for HTTP/2, since most implementations of gRPC are over TLS?
  • Do you absolutely need appProtocol: grpc? It seems unrelated to the header, but can you also try the example gRPC setup from the docs as-is, without appProtocol: grpc, just to get a baseline?

longwuyuan avatar Jul 20 '22 17:07 longwuyuan

Do you see the same problem (wrong :authority header) if you try the gRPC example from the docs?

I will try, but the gRPC example is extremely similar to my steps to reproduce.

Do you absolutely need appProtocol: grpc? It seems unrelated to the header, but can you also try the example gRPC setup from the docs as-is, without appProtocol: grpc, just to get a baseline?

I removed appProtocol: grpc and it has the same problem.

It's not related, but don't you need TLS for HTTP/2, since most implementations of gRPC are over TLS?

I did use gRPC over TLS (I'm always going through ingress-nginx's 443 port). Where did you see that TLS is not used? The grpcurl -insecure flag is there because I used a self-signed certificate, but the connection still uses TLS.

lapwingcloud avatar Jul 20 '22 23:07 lapwingcloud

Do you see the same problem (wrong :authority header) if you try the gRPC example from the docs?

I just tried it; yes, it has the same problem (wrong :authority header).

Go code (I just changed the return message to be the :authority header):

package main

import (
        "context"
        "flag"
        "fmt"
        "log"
        "net"

        "google.golang.org/grpc"
        "google.golang.org/grpc/metadata"
        "google.golang.org/grpc/reflection"

        ecpb "google.golang.org/grpc/examples/features/proto/echo"
        hwpb "google.golang.org/grpc/examples/helloworld/helloworld"
)

var port = flag.Int("port", 50051, "the port to serve on")

// hwServer is used to implement helloworld.GreeterServer.
type hwServer struct {
        hwpb.UnimplementedGreeterServer
}

// SayHello implements helloworld.GreeterServer
func (s *hwServer) SayHello(ctx context.Context, in *hwpb.HelloRequest) (*hwpb.HelloReply, error) {
        md, _ := metadata.FromIncomingContext(ctx)
        var message string
        if len(md.Get(":authority")) == 0 {
                message = "request doesn't container :authority pseudo header"
        } else {
                message = md.Get(":authority")[0]
        }
        return &hwpb.HelloReply{Message: message}, nil
}

type ecServer struct {
        ecpb.UnimplementedEchoServer
}

func (s *ecServer) UnaryEcho(ctx context.Context, req *ecpb.EchoRequest) (*ecpb.EchoResponse, error) {
        return &ecpb.EchoResponse{Message: req.Message}, nil
}

func main() {
        flag.Parse()
        lis, err := net.Listen("tcp", fmt.Sprintf(":%d", *port))
        if err != nil {
                log.Fatalf("failed to listen: %v", err)
        }
        fmt.Printf("server listening at %v\n", lis.Addr())

        s := grpc.NewServer()

        // Register Greeter on the server.
        hwpb.RegisterGreeterServer(s, &hwServer{})

        // Register the Echo service on the same server.
        ecpb.RegisterEchoServer(s, &ecServer{})

        // Register reflection service on gRPC server.
        reflection.Register(s)

        if err := s.Serve(lis); err != nil {
                log.Fatalf("failed to serve: %v", err)
        }
}

Kubernetes manifests:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: go-grpc-greeter-server
  name: go-grpc-greeter-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-grpc-greeter-server
  template:
    metadata:
      labels:
        app: go-grpc-greeter-server
    spec:
      containers:
      - image: [redacted]go-grpc-greeter-server
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
        name: go-grpc-greeter-server
        ports:
        - containerPort: 50051
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: go-grpc-greeter-server
  name: go-grpc-greeter-server
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 50051
  selector:
    app: go-grpc-greeter-server
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: fortune-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: go-grpc-greeter-server.[redacted].com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-grpc-greeter-server
            port:
              number: 80
  tls:
  - secretName: wildcard.[redacted].com
    hosts:
      - go-grpc-greeter-server.[redacted].com

Command and output:

grpcurl go-grpc-greeter-server.[redacted].com:443 helloworld.Greeter/SayHello
{
  "message": "upstream_balancer"
}

lapwingcloud avatar Jul 21 '22 00:07 lapwingcloud

/assign @rikatz

strongjz avatar Jul 21 '22 15:07 strongjz

Bumping for an update. Can you please at least try to reproduce and triage it first? Thanks.

lapwingcloud avatar Jul 25 '22 06:07 lapwingcloud

I investigated a little more. I believe we need to add grpc_set_header Host $http_host again (it was removed in https://github.com/kubernetes/ingress-nginx/issues/3706), because I can verify that with grpc_set_header Host $http_host the upstream receives the :authority header correctly.
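
In the generated gRPC location, that would amount to something like this (an illustrative sketch only, not the actual nginx.tmpl change proposed in the PR):

location / {
    # restore the directive removed in #3706
    grpc_set_header Host $http_host;
    grpc_pass grpc://upstream_balancer;
}

With this in place, the upstream receives :authority set to the original request host instead of upstream_balancer, per the verification above.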

lapwingcloud avatar Aug 06 '22 12:08 lapwingcloud

I also verified this issue via the e2e test assertion I added in the PR: with nginx.tmpl from the main branch, running the e2e test that asserts the :authority header equals the host variable value (https://github.com/kubernetes/ingress-nginx/pull/8912/files#diff-f907d78a8e7b472d34a4861d9df89d8c12fbe261ee27709380515d9f2bd8fbd5R123) fails.

Here is the output: https://gist.github.com/jchenship/6e7b420d10149511badbf51132224f6b

It clearly shows that the :authority header is set to upstream_balancer.

lapwingcloud avatar Aug 07 '22 00:08 lapwingcloud

Just a kind follow-up on this issue, @rikatz. Let me know if there's anything else I can help with. Thanks in advance.

lapwingcloud avatar Aug 13 '22 14:08 lapwingcloud

Just a kind follow-up on this issue, @rikatz. Sorry for bumping again, but I think it's important for gRPC servers to receive the correct host header in the default configuration. Thank you.

lapwingcloud avatar Sep 13 '22 09:09 lapwingcloud

Just FYI, this is how APISIX did it: https://github.com/apache/apisix/pull/7939/files

lapwingcloud avatar Dec 10 '22 12:12 lapwingcloud

I'm having a similar issue, and this is the only relevant discussion I can find.

I'm using an ASP.NET web server (net6) hosting both a gRPC service and a normal HTTP service. I started to see upstream_balancer show up in some of my URLs, and realized it comes from the HttpContext.Request.Host property, which should reflect the Host header (not :authority; I'm not sure what that is).

If I remove the nginx.ingress.kubernetes.io/backend-protocol: "GRPC" annotation on my Ingress and route back to an HTTP/1 listener, the error stops and the Host property reports the real URL instead. But of course, the gRPC service stops functioning.

Is there any way to fix this? It's a huge deal-breaker for my application. I'm using ingress-nginx via a Helm chart deployment.

Dunge avatar Mar 31 '23 00:03 Dunge

The PR has been merged and will be included in our next release.

tao12345666333 avatar Jun 27 '23 06:06 tao12345666333