envoy-generic-forward-proxy
Running original_dst from different containers
Hello! I'm trying to implement an outbound transparent proxy with Envoy, running in different containers.
I'm using this configuration:
"listeners": [
{
"address": "tcp://0.0.0.0:80",
"bind_to_port": false,
"filters": [
{
"type": "read",
"name": "http_connection_manager",
"config": {
"access_log": [
{
"path": "/tmp/envoy.log"
}
],
"codec_type": "auto",
"stat_prefix": "forward_http",
"route_config": {
"virtual_hosts": [
{
"name": "default_http",
"domains": ["*"],
"routes": [
{
"timeout_ms": 0,
"prefix": "/",
"cluster": "outbound_forward_proxy_http"
}
]
}
]
},
"filters": [
{
"type": "decoder",
"name": "router",
"config": {}
}
]
}
}
]
},
{
"address": "tcp://0.0.0.0:15001",
"filters": [],
"bind_to_port": true,
"use_original_dst": true
}
],
"admin": {
"access_log_path": "/tmp/access_log",
"address": "tcp://0.0.0.0:8001"
},
"cluster_manager": {
"clusters": [
{
"name": "outbound_forward_proxy_http",
"connect_timeout_ms": 2500,
"type": "original_dst",
"lb_type": "original_dst_lb"
}
]
}
}
This is a docker-compose example YAML:
version: '2'
services:
  envoy:
    build:
      context: ./envoy
      dockerfile: Dockerfile-envoy
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
    expose:
      - "80"
  application:
    build:
      context: ./api
      dockerfile: Dockerfile-api
    cap_add:
      - NET_ADMIN
I use iptables to redirect all outgoing traffic from the application to Envoy. If I run everything locally (the HTTP requester application and Envoy in the same container), it works like a charm.
On the other hand, if I run Envoy in a different container, I receive this message from Envoy when I redirect the application's requests:
upstream connect error or disconnect/reset before headers
The request does arrive at Envoy (I can see it in the access_log).
My IPTABLES rules in envoy:
iptables -t nat -N ISTIO_REDIRECT
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-port $ENVOY_PORT
iptables -t nat -A PREROUTING -j ISTIO_REDIRECT
iptables -t nat -N ISTIO_OUTPUT
iptables -t nat -A OUTPUT -p tcp -j ISTIO_OUTPUT
iptables -t nat -A ISTIO_OUTPUT -m owner --uid-owner ${ENVOY_UID} -j RETURN
iptables -t nat -A ISTIO_OUTPUT -j ISTIO_REDIRECT
In the application:
iptables -t nat -I OUTPUT -p tcp --dport 80 -j DNAT --to-destination $ENVOY_IP:80
Could you give me some advice on how to configure Envoy as a transparent proxy running in another container?
Thanks in advance!
@alejandropal I never tried iptables with docker-compose. Could it be that the iptables rules are actually "shared" between all the containers, as in the case of a Kubernetes pod? In that case you would only need to run iptables in the envoy container, and not in the application. If you remove the application's iptables rule, what do you experience?
Hello Vadim! If I remove the application's iptables entry, the outgoing traffic from the application goes out directly; it is not intercepted by Envoy. I suspect Envoy is checking some mark on the opened connections which is only set by the PREROUTING chain. I'll try changing it and see what happens.
@alejandropal What I do not know is whether iptables rules defined in one docker-compose container influence another docker-compose container. Can you check that?
Hello Vadim!
Docker creates several virtual network interfaces, one per container by default. An analogy for this is that the application and the Envoy proxy are running on different hosts.
Let me explain with an example:
- Host A runs the application.
- Host A has modified iptables rules so that its traffic is proxied through host B: iptables -t nat -I OUTPUT -p tcp --dport 80 -j DNAT --to-destination $HOST_B_IP:80
- Host B runs envoy.
We are using this configuration with nginx with no problems, and we are checking whether Envoy suits our needs as a transparent proxy.
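For comparison, the only docker-compose setup where iptables rules would be shared "like in a Kubernetes pod" is one where the two services share a network namespace. Below is a minimal sketch of that idea using docker-compose's network_mode option and the service names from the compose file above; it is only an illustration under those assumptions, not something from this thread.
version: '2'
services:
  envoy:
    build:
      context: ./envoy
      dockerfile: Dockerfile-envoy
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
  application:
    build:
      context: ./api
      dockerfile: Dockerfile-api
    # Join envoy's network namespace: both containers then see the same
    # interfaces and the same iptables rules, as containers in a Kubernetes pod do.
    network_mode: "service:envoy"
    depends_on:
      - envoy
With such a setup, the iptables rules would only need to be applied once, in the envoy container.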
Hello Alejandro,
Docker creates several virtual network interfaces, one per container by default. An analogy for this is that the application and the Envoy proxy are running on different hosts.
I see, so this is the issue. If so, can you try setting "bind_to_port": true in your configuration and removing the listener on port 15001? There is no need to run iptables in Envoy's container, only in the application's container.
@alejandropal I meant: set "bind_to_port": true in your configuration and remove the listener on port 15001. There is no need to run iptables in Envoy's container, only in the application's container.
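A minimal sketch of what that suggestion might look like in the v1 configuration posted above, with the port-15001 listener removed and the remaining listener binding directly to port 80 (the http_connection_manager config is unchanged and elided here):
"listeners": [
  {
    "address": "tcp://0.0.0.0:80",
    "bind_to_port": true,
    "filters": [
      {
        "type": "read",
        "name": "http_connection_manager",
        "config": {
          ...
        }
      }
    ]
  }
]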
Hello Vadim! Sorry for the delay.
When I tried that, Envoy didn't work as expected.
When I send a request from the app container:
root@74c35ef55260:/code/api# curl -v google.com
* Rebuilt URL to: google.com/
* Hostname was NOT found in DNS cache
* Trying 172.217.28.206...
* Connected to google.com (172.217.28.206) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.38.0
> Host: google.com
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 19
< content-type: text/plain
< date: Tue, 19 Jun 2018 13:03:10 GMT
* Server envoy is not blacklisted
< server: envoy
<
* Connection #0 to host google.com left intact
no healthy upstream
envoy's error_log:
[2018-06-19 13:00:08.882][14][warning][upstream] source/common/upstream/original_dst_cluster.cc:102] original_dst_load_balancer: No downstream connection or no original_dst.
envoy's access_log:
[2018-06-19T12:55:42.690Z] "GET / HTTP/1.1" 503 UH 0 19 1 - "-" "curl/7.38.0" "33081575-0a31-47cd-a06f-3f9447d9c5f9" "google.com" "-"
[2018-06-19T12:56:17.885Z] "GET / HTTP/1.1" 503 UH 0 19 0 - "-" "curl/7.38.0" "cf588b2a-148a-4012-85d1-0d0699566c66" "google.com" "-"
Thanks in advance!
Hello Vadim!
I leave an example reproducing the error of @alejandropal: https://github.com/dvillanustre/meli-envoy
Thanks in advance!
@alejandropal @dvillanustre Sorry for the delayed response, I was very busy this week. Please note that the orig_dst cluster can only work in the same pod/container. In your case you cannot use it.
I would recommend the following: create a regular cluster with www.google.com as a target and check that it works.
If you need Envoy to access arbitrary sites, you can use the solution I described in the README of this repo: https://github.com/vadimeisenbergibm/envoy-generic-forward-proxy#envoy-as-a-generic-forward-proxy-to-other-pods. You would need to deploy an Nginx proxy in addition to Envoy, and use Nginx's address as the destination of your Envoy's cluster.
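A minimal sketch of such a regular cluster in the same v1 format, assuming plain HTTP to www.google.com on port 80 (the cluster name is illustrative; the route in the virtual host would then reference it instead of outbound_forward_proxy_http):
"cluster_manager": {
  "clusters": [
    {
      "name": "google",
      "connect_timeout_ms": 2500,
      "type": "strict_dns",
      "lb_type": "round_robin",
      "hosts": [
        {
          "url": "tcp://www.google.com:80"
        }
      ]
    }
  ]
}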
@vadimeisenbergibm
@alejandropal @dvillanustre Sorry for the delayed response, I was very busy this week. Please note that the orig_dst cluster can only work in the same pod/container. In your case you cannot use it.
I would recommend the following: create a regular cluster with www.google.com as a target and check that it works.
If you need Envoy to access arbitrary sites, you can use the solution I described in the README of this repo: https://github.com/vadimeisenbergibm/envoy-generic-forward-proxy#envoy-as-a-generic-forward-proxy-to-other-pods. You would need to deploy an Nginx proxy in addition to Envoy, and use Nginx's address as the destination of your Envoy's cluster.
@vadimeisenbergibm - Good repo. I'm attaching my docker-compose file below. Also, it looks like there is an issue with the https://github.com/vadimeisenbergibm/envoy-generic-forward-proxy/tree/master/envoy_forward_proxy/envoy_config.json file; please help me correct it. FYI, I am trying to set up Envoy as a front proxy in Docker.
Please find below my docker-compose.yml:
version: '2'
services:
  envoy:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
      - "443:443"
      - "8001:8001"
    expose:
      - "80"
      - "443"
      - "8001"
Logs:
Building envoy
Step 1/8 : FROM
Successfully built 6dae250a9168
Successfully tagged envoy_envoy:latest
WARNING: Image for service envoy was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Creating envoy_envoy_1 ... done
Attaching to envoy_envoy_1
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:205] initializing epoch 0 (hot restart version=10.200.16384.127.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=2654312)
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:207] statically linked extensions:
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:209] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:212] filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:215] filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:218] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:220] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:222] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.zipkin
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:225] transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:228] transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
envoy_1 | [2019-07-02 09:53:18.776][6][info][main] [source/server/server.cc:234] buffer implementation: old (libevent)
envoy_1 | [2019-07-02 09:53:18.777][6][critical][main] [source/server/server.cc:90] error initializing configuration 'envoy_config.json': Unable to parse JSON as proto (INVALID_ARGUMENT:(cluster_manager) clusters: Cannot find field.): {
envoy_1 | "listeners": [
envoy_1 | {
envoy_1 | "address": "tcp://0.0.0.0:80",
envoy_1 | "filters": [
envoy_1 | {
envoy_1 | "type": "read",
envoy_1 | "name": "http_connection_manager",
envoy_1 | "config": {
envoy_1 | "codec_type": "auto",
envoy_1 | "stat_prefix": "forward_http",
envoy_1 | "http1_settings": {
envoy_1 | "allow_absolute_url": true