hairpin-proxy
Getting 400 Bad Request
I've completed the installation with the following commands:
kubectl apply -f https://raw.githubusercontent.com/compumike/hairpin-proxy/v0.1.2/deploy.yml
kubectl -n hairpin-proxy set env deployment/hairpin-proxy-haproxy TARGET_SERVER=ingress-nginx-controller.kube-system.svc.cluster.local
Before installation, the request would time out, but now I'm getting 400 Bad Request.
I couldn't find much, so I'm sharing all I have right now. Here is the log from my debug pod:
$ dig mysub.domain.com
; <<>> DiG 9.12.4-P2 <<>> mysub.domain.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12261
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: e77c2f9dd19d168d (echoed)
;; QUESTION SECTION:
;mysub.domain.com. IN A
;; ANSWER SECTION:
mysub.domain.com. 25 IN A 10.106.209.95
;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Dec 27 22:40:58 UTC 2020
;; MSG SIZE rcvd: 97
$ dig hairpin-proxy.hairpin-proxy.svc.cluster.local
; <<>> DiG 9.12.4-P2 <<>> hairpin-proxy.hairpin-proxy.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62161
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 5b49c52bd3a873ea (echoed)
;; QUESTION SECTION:
;hairpin-proxy.hairpin-proxy.svc.cluster.local. IN A
;; ANSWER SECTION:
hairpin-proxy.hairpin-proxy.svc.cluster.local. 30 IN A 10.106.209.95
;; Query time: 1 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sun Dec 27 22:41:16 UTC 2020
;; MSG SIZE rcvd: 147
$ curl mysub.domain.com
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
Hairpin proxy pods show no logs. Here is the log line I see in nginx-controller when I send the curl request:
10.44.0.0 - - [27/Dec/2020:22:50:35 +0000] "PROXY TCP4 10.44.0.0 10.44.0.21 1886 80" 400 150 "-" "-" 0 0.001 [] [] - - - - ed328050593d01bbd17e87c3eb3a255d
Here is the raw curl, and I've confirmed that 10.106.209.95 belongs to the hairpin-proxy service:
$ curl -iv --raw mysub.domain.com
* Rebuilt URL to: mysub.domain.com/
* Trying 10.106.209.95...
* TCP_NODELAY set
* Connected to mysub.domain.com (10.106.209.95) port 80 (#0)
> GET / HTTP/1.1
> Host: mysub.domain.com
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 400 Bad Request
HTTP/1.1 400 Bad Request
< Date: Sun, 27 Dec 2020 22:51:47 GMT
Date: Sun, 27 Dec 2020 22:51:47 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 150
Content-Length: 150
< Connection: close
Connection: close
<
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Closing connection 0
FWIW, I'm able to access the URL from the public internet, and I'm getting the following in response to kubectl describe challenge:
Status:
  Presented:   true
  Processing:  true
  Reason:      Waiting for HTTP-01 challenge propagation: wrong status code '400', expected '200'
  State:       pending
Events:
  Type    Reason     Age  From          Message
  ----    ------     ---  ----          -------
  Normal  Started    21m  cert-manager  Challenge scheduled for processing
  Normal  Presented  20m  cert-manager  Presented challenge using HTTP-01 challenge mechanism
I've just realized that curl mysub.domain.com --haproxy-protocol doesn't work either, even though the README says it should in step 0.
$ curl mysub.domain.com --haproxy-protocol
curl: (28) Failed to connect to mysub.domain.com port 80: Operation timed out
My setup is nginx-ingress + MetalLB with one external IP used by many ingress resources.
I ran cert-manager on my laptop and it all worked, since it accessed the URL from outside the cluster. I'm only setting up a dev/test instance, so certificate expiration is not an issue for me. But that's not viable for the majority of use cases, so I'll keep this issue open and I'd be more than happy to help you debug by reproducing it.
Hi @muvaf, I encountered the exact same problem with ingress-nginx and MetalLB. @compumike, could you please have a look at this?
@muvaf just an idea, but do we need to configure proxy protocol for our ingress-nginx?
@shibumi I'm not sure. Here is my values.yaml for the ingress-nginx Helm installation, if that helps:
controller:
  externalTrafficPolicy: "Local"
  extraArgs:
    enable-ssl-passthrough: ""
  kind: DaemonSet
  type: LoadBalancer
  tolerations:
    - effect: NoSchedule
      operator: Exists
@muvaf does it help if you set externalTrafficPolicy: Cluster? The whole purpose of the hairpin proxy is to have a working Cluster TrafficPolicy, or am I wrong?
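For reference, this is roughly the change being asked about, sketched against the values.yaml layout above (the key placement mirrors that file; in recent ingress-nginx chart versions the same setting lives under controller.service.externalTrafficPolicy):

controller:
  # Cluster lets kube-proxy route through any node (with SNAT), while Local keeps the
  # client source IP but only uses nodes that run an ingress controller pod.
  externalTrafficPolicy: "Cluster"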
Btw, I fixed the whole problem with another solution. Instead of using the hairpin proxy, I just defined hostAliases for the CoreDNS deployment and then propagated the /etc/hosts file.
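For anyone wanting to try the same workaround, here is a minimal sketch of the hostAliases part, assuming the CoreDNS Deployment in kube-system and placeholder IP/hostname values. Note that CoreDNS only answers DNS queries from /etc/hosts if its Corefile loads the hosts plugin (see the Corefile sketch further down).

# Sketch: add hostAliases to the CoreDNS Deployment, e.g. via
# kubectl -n kube-system edit deployment coredns
# The IP and hostname below are placeholders for your ingress LoadBalancer IP and public domain.
spec:
  template:
    spec:
      hostAliases:
        - ip: "203.0.113.10"          # external IP of the ingress service
          hostnames:
            - "mysub.domain.com"

This writes the entry into the CoreDNS pod's /etc/hosts, which is what the hosts plugin then serves to the rest of the cluster.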
@compumike Getting the same error, any solution?
Using metallb + traefik
I'm just gonna say, open for a month, major issue, and the maintainer hasn't commented. :(
@reesericci I uninstalled the project. Instead, I am adding host/IP tuples to the /etc/hosts file of my CoreDNS inside the cluster. This manual work is tedious, but it fixes my problem :)
I'm currently looking at the k8s_gateway external plugin for CoreDNS, trying to get it to work.
does it help if you set externalTrafficPolicy: Cluster? The whole purpose of the hairpin proxy is to have working Cluster TrafficPolicy, or am I wrong?
@shibumi My goal was just to get the Let's Encrypt query to work, and I did that manually by running cert-manager on my laptop since it was only a one-time operation for me (test cluster). I don't really know if that would have worked.
@muvaf This was my intention, too. I just added the local IP address + hostname to the /etc/hosts file of the CoreDNS inside the cluster. This way CoreDNS serves the entries from its /etc/hosts file to every other service inside the cluster.
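In case it helps someone reproduce this, here is a hedged sketch of the corresponding Corefile change (the coredns ConfigMap in kube-system), based on a default kubeadm-style Corefile; the IP and hostname are placeholders:

.:53 {
    errors
    health
    ready
    # Answer the public hostname with the ingress LoadBalancer IP,
    # then fall through to the kubernetes/forward plugins for everything else.
    hosts {
        203.0.113.10 mysub.domain.com
        fallthrough
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}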
@shibumi @muvaf @reesericci
You can solve the 400 status issue by enabling the PROXY protocol: annotate your nginx-ingress load balancer service accordingly and enable use-proxy-protocol in the controller.
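For the Helm-based installs in this thread, that setting ends up in the controller ConfigMap; a minimal sketch via chart values (controller.config is the chart key that populates the ConfigMap, the rest depends on your setup):

controller:
  config:
    # Tell nginx to expect the PROXY protocol header that hairpin-proxy's HAProxy sends
    # (it is visible as "PROXY TCP4 ..." in the nginx-controller log above).
    use-proxy-protocol: "true"

Keep in mind that once this is enabled, everything that connects directly to the controller has to send the PROXY protocol header; plain connections will be rejected as malformed.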