Make tracing parameter configurable on ingress GW
Closes: #8519
This changeset allows tracing configuration to be specified as part of ingress-gateway connect config.
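For concreteness, here is a minimal sketch of the kind of configuration this enables. The `Tracing` block and its field names (`RandomSampling`, `ClientSampling`) are illustrative assumptions, not the final schema; the rest is the standard ingress-gateway config entry shape.

```bash
# Hypothetical example only: the Tracing block and its field names are
# assumptions for illustration, not necessarily what this PR implements.
cat > ingress-gateway.hcl <<'EOF'
Kind = "ingress-gateway"
Name = "ingress-gateway"

# Assumed tracing block exposed by this change.
Tracing {
  RandomSampling = 100
  ClientSampling = 100
}

Listeners = [
  {
    Port     = 9990
    Protocol = "http"
    Services = [ { Name = "s1" } ]
  }
]
EOF
consul config write ingress-gateway.hcl
```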
@blake would you be able to take a look at this and tell me if my approach is acceptable? More tests and documentation coming up.
~I am not able to run tests locally because of some lint issues in other parts of the codebase. Do I need to do anything special to ignore those issues, or should `make test` have worked without any errors?~
Never mind, I was running into this: https://golang.org/doc/go1.15#vet Go 1.14 runs tests just fine.
@blake any update here? Do you think someone can review it anytime soon?
@freddygv thanks for the review. I’ll add the tests and docs soon.
@freddygv ready for review.
@freddygv thanks for the review. Can you take a look at the integration tests, please? For some reason the Envoy process for the ingress gateway is not starting the listener, and I am not able to figure out why.
I went through all the logs that are captured by the test harness but can't spot any problem. The services are being registered properly, as is the ingress gateway config, and all the Envoy processes respond on the admin endpoint, but the gateway listener doesn't come up. When I query the /listeners admin endpoint I get back an empty response, and /config_dump also shows an empty listeners list.
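For reference, these are the checks described above, against whichever Envoy admin port belongs to the gateway (19000–20003 in the output below); the `jq` filter is just one way to slice the dump:

```bash
# Standard Envoy admin endpoints used for the debugging described above.
curl -s localhost:20000/listeners          # empty here, even though xDS is connected
curl -s localhost:20000/clusters | head    # clusters did arrive, so ADS itself works
# Pull only the listener portion of the full config dump (requires jq):
curl -s localhost:20000/config_dump | jq '.configs[] | select(."@type" | contains("Listeners"))'
```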
I also tried splitting the tests into one case per permutation, but it still behaved in the same way.
Here is the test output. The last bits are for debugging and I'll remove those after fixing the problem:
Test output
=== RUN TestEnvoy/case-ingress-gateway-custom-tracing
Setting up the primary datacenter
Registered service: ingress-gateway-client-sampling-0
Registered service: ingress-gateway-client-sampling-100
Registered service: ingress-gateway-random-sampling-0
Registered service: ingress-gateway-random-sampling-100
Registered service: s1
Registered service: s2
Starting services
c2c53bf2a05d1c78633fcca4a4bc4407231639771dcea4b5459a558a1b507eea
Killing and removing: envoy_verify-primary_1...done
Running primary verification step for case-ingress-gateway-custom-tracing...
1..10
ok 1 proxy admin endpoint is up on :20000
ok 2 proxy admin endpoint is up on :20001
ok 3 proxy admin endpoint is up on :20002
ok 4 proxy admin endpoint is up on :20003
ok 5 proxy admin endpoint is up on :19000
not ok 6 random sampling with 100% should send traces to zipkin/jaeger
# (in test file /workdir/primary/bats/verify.bats, line 30)
# `[ "$status" == "0" ]' failed with status 7
# OUTPUT
not ok 7 client sampling with 0% should send traces to zipkin/jaeger
# (in test file /workdir/primary/bats/verify.bats, line 42)
# `[ "$status" == "0" ]' failed with status 7
# OUTPUT
not ok 8 random sampling with 0% should not send traces to zipkin/jaeger
# (in test file /workdir/primary/bats/verify.bats, line 54)
# `[ "$status" == "0" ]' failed with status 7
# OUTPUT
not ok 9 client sampling with 100% should not send traces to zipkin/jaeger
# (in test file /workdir/primary/bats/verify.bats, line 66)
# `[ "$status" == "0" ]' failed with status 7
# OUTPUT
not ok 10 verify gateway listeners
# (in test file /workdir/primary/bats/verify.bats, line 87)
# `[ "0" == "1" ]' failed with status 7
# LISTENER 9990: * Rebuilt URL to: localhost:9990/
# * Trying 127.0.0.1...
# * TCP_NODELAY set
# * connect to 127.0.0.1 port 9990 failed: Connection refused
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Failed to connect to localhost port 9990: Connection refused
# * Closing connection 0
# LISTENER 9991: * Rebuilt URL to: localhost:9991/
# * Trying 127.0.0.1...
# * TCP_NODELAY set
# * connect to 127.0.0.1 port 9991 failed: Connection refused
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Failed to connect to localhost port 9991: Connection refused
# * Closing connection 0
# LISTENER 9992: * Rebuilt URL to: localhost:9992/
# * Trying 127.0.0.1...
# * TCP_NODELAY set
# * connect to 127.0.0.1 port 9992 failed: Connection refused
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Failed to connect to localhost port 9992: Connection refused
# * Closing connection 0
# LISTENER 9993: * Rebuilt URL to: localhost:9993/
# * Trying 127.0.0.1...
# * TCP_NODELAY set
# * connect to 127.0.0.1 port 9993 failed: Connection refused
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Trying ::1...
# * TCP_NODELAY set
# * Immediate connect fail for ::1: Address not available
# * Failed to connect to localhost port 9993: Connection refused
# * Closing connection 0
⨯ FAIL
ERR: command exited with status 1
command: return $res
line: 299
function: run_tests
called at: ./run-tests.sh:617
main_test.go:35: command failed: exit status 1
Consul Agent Logs
==> Starting Consul agent...
Version: '1.10.0-dev'
Node ID: '46c1a80b-01e4-4ee3-aef0-31143768e942'
Node name: 'consul-primary'
Datacenter: 'primary' (Segment: '<all>')
Server: true (Bootstrap: false)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2021-02-24T12:34:13.703Z [INFO] agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:46c1a80b-01e4-4ee3-aef0-31143768e942 Address:127.0.0.1:8300}]"
2021-02-24T12:34:13.704Z [INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-primary.primary 127.0.0.1
2021-02-24T12:34:13.704Z [INFO] agent.server.serf.lan: serf: EventMemberJoin: consul-primary 127.0.0.1
2021-02-24T12:34:13.704Z [INFO] agent.router: Initializing LAN area manager
2021-02-24T12:34:13.705Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=udp
2021-02-24T12:34:13.706Z [INFO] agent.server.raft: entering follower state: follower="Node at 127.0.0.1:8300 [Follower]" leader=
2021-02-24T12:34:13.707Z [INFO] agent.server: Adding LAN server: server="consul-primary (Addr: tcp/127.0.0.1:8300) (DC: primary)"
2021-02-24T12:34:13.707Z [INFO] agent.server: Handled event for server in area: event=member-join server=consul-primary.primary area=wan
2021-02-24T12:34:13.708Z [INFO] agent: Started DNS server: address=0.0.0.0:8600 network=tcp
2021-02-24T12:34:13.709Z [INFO] agent: Starting server: address=[::]:8500 network=tcp protocol=http
2021-02-24T12:34:13.709Z [WARN] agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2021-02-24T12:34:13.710Z [INFO] agent: started state syncer
==> Consul agent running!
2021-02-24T12:34:13.711Z [INFO] agent: Started gRPC server: address=[::]:8502 network=tcp
2021-02-24T12:34:13.771Z [WARN] agent.server.raft: heartbeat timeout reached, starting election: last-leader=
2021-02-24T12:34:13.771Z [INFO] agent.server.raft: entering candidate state: node="Node at 127.0.0.1:8300 [Candidate]" term=2
2021-02-24T12:34:13.771Z [DEBUG] agent.server.raft: votes: needed=1
2021-02-24T12:34:13.771Z [DEBUG] agent.server.raft: vote granted: from=46c1a80b-01e4-4ee3-aef0-31143768e942 term=2 tally=1
2021-02-24T12:34:13.771Z [INFO] agent.server.raft: election won: tally=1
2021-02-24T12:34:13.771Z [INFO] agent.server.raft: entering leader state: leader="Node at 127.0.0.1:8300 [Leader]"
2021-02-24T12:34:13.771Z [INFO] agent.server: cluster leadership acquired
2021-02-24T12:34:13.772Z [INFO] agent.server: New leader elected: payload=consul-primary
2021-02-24T12:34:13.773Z [DEBUG] agent.server: Cannot upgrade to new ACLs: leaderMode=0 mode=0 found=true leader=127.0.0.1:8300
2021-02-24T12:34:13.778Z [INFO] agent.leader: started routine: routine="federation state anti-entropy"
2021-02-24T12:34:13.778Z [INFO] agent.leader: started routine: routine="federation state pruning"
2021-02-24T12:34:13.778Z [DEBUG] agent.server.autopilot: autopilot is now running
2021-02-24T12:34:13.778Z [DEBUG] agent.server.autopilot: state update routine is now running
2021-02-24T12:34:13.780Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
2021-02-24T12:34:13.790Z [INFO] agent.server.connect: initialized primary datacenter CA with provider: provider=consul
2021-02-24T12:34:13.790Z [INFO] agent.leader: started routine: routine="intermediate cert renew watch"
2021-02-24T12:34:13.790Z [INFO] agent.leader: started routine: routine="CA root pruning"
2021-02-24T12:34:13.795Z [DEBUG] agent.server: successfully established leadership: duration=22.1073ms
2021-02-24T12:34:13.795Z [INFO] agent.server: member joined, marking health alive: member=consul-primary
2021-02-24T12:34:13.953Z [INFO] agent.server: federation state anti-entropy synced
2021-02-24T12:34:14.048Z [DEBUG] agent.http: Request finished: method=GET url=/v1/config/ingress-gateway/ingress-gateway-random-sampling-0 from=127.0.0.1:48206 latency=337.3µs
2021-02-24T12:34:14.144Z [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth
2021-02-24T12:34:14.145Z [INFO] agent: Synced node info
2021-02-24T12:34:14.145Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:14.558Z [DEBUG] agent.http: Request finished: method=GET url=/v1/config/ingress-gateway/ingress-gateway-random-sampling-100 from=127.0.0.1:48208 latency=177.1µs
2021-02-24T12:34:15.054Z [DEBUG] agent.http: Request finished: method=GET url=/v1/config/ingress-gateway/ingress-gateway-client-sampling-0 from=127.0.0.1:48210 latency=169.9µs
2021-02-24T12:34:15.542Z [DEBUG] agent.http: Request finished: method=GET url=/v1/config/ingress-gateway/ingress-gateway-client-sampling-100 from=127.0.0.1:48212 latency=174.5µs
2021-02-24T12:34:15.604Z [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth
2021-02-24T12:34:15.604Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.890Z [DEBUG] agent: added local registration for service: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.890Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.892Z [INFO] agent: Synced service: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.892Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.893Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.893Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:48214 latency=11.9532ms
2021-02-24T12:34:15.897Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.897Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.900Z [DEBUG] agent: added local registration for service: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.900Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.900Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.901Z [INFO] agent: Synced service: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.902Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:48214 latency=7.4609ms
2021-02-24T12:34:15.902Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.903Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.903Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.905Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.905Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.905Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.910Z [DEBUG] agent: added local registration for service: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.910Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.910Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.910Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.914Z [INFO] agent: Synced service: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.914Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:48214 latency=9.3268ms
2021-02-24T12:34:15.916Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.916Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.917Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.917Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.920Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.920Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.921Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.921Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.929Z [DEBUG] agent: added local registration for service: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.929Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.930Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.930Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.930Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.931Z [INFO] agent: Synced service: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.932Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.932Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.932Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.932Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.932Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.933Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:48214 latency=17.3063ms
2021-02-24T12:34:15.937Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.938Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.938Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.938Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.938Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.940Z [INFO] agent: Synced service: service=s1
2021-02-24T12:34:15.947Z [DEBUG] agent: added local registration for service: service=s1-sidecar-proxy
2021-02-24T12:34:15.947Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.949Z [INFO] agent: Synced service: service=s1-sidecar-proxy
2021-02-24T12:34:15.949Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.949Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.949Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.949Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.949Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:15.949Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:1
2021-02-24T12:34:15.949Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:15.949Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:48214 latency=14.2279ms
2021-02-24T12:34:15.950Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.950Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.950Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.950Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.950Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.950Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:15.950Z [DEBUG] agent: Service in sync: service=s1-sidecar-proxy
2021-02-24T12:34:15.951Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:1
2021-02-24T12:34:15.952Z [INFO] agent: Synced check: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:15.958Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.958Z [DEBUG] agent: Service in sync: service=s1-sidecar-proxy
2021-02-24T12:34:15.960Z [INFO] agent: Synced service: service=s2
2021-02-24T12:34:15.960Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.960Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.960Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.960Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.960Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:15.960Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:1
2021-02-24T12:34:15.960Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:15.965Z [DEBUG] agent: added local registration for service: service=s2-sidecar-proxy
2021-02-24T12:34:15.965Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.965Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.965Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:15.965Z [DEBUG] agent: Service in sync: service=s1-sidecar-proxy
2021-02-24T12:34:15.965Z [DEBUG] agent: Service in sync: service=s2
2021-02-24T12:34:15.966Z [INFO] agent: Synced service: service=s2-sidecar-proxy
2021-02-24T12:34:15.967Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.967Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.967Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.967Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:2
2021-02-24T12:34:15.967Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:1
2021-02-24T12:34:15.968Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:15.968Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:1
2021-02-24T12:34:15.968Z [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:48214 latency=18.5586ms
2021-02-24T12:34:15.974Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.975Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.975Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.975Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:15.976Z [DEBUG] agent: Service in sync: service=s1-sidecar-proxy
2021-02-24T12:34:15.976Z [DEBUG] agent: Service in sync: service=s2
2021-02-24T12:34:15.976Z [DEBUG] agent: Service in sync: service=s2-sidecar-proxy
2021-02-24T12:34:15.976Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.977Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.977Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:15.977Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:1
2021-02-24T12:34:15.978Z [INFO] agent: Synced check: check=service:s2-sidecar-proxy:2
2021-02-24T12:34:15.979Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:1
2021-02-24T12:34:15.982Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:15.983Z [DEBUG] agent: Service in sync: service=s2
2021-02-24T12:34:15.986Z [DEBUG] agent: Service in sync: service=s2-sidecar-proxy
2021-02-24T12:34:15.986Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:15.987Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:15.987Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:15.987Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:15.987Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:15.987Z [DEBUG] agent: Service in sync: service=s1-sidecar-proxy
2021-02-24T12:34:15.987Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:1
2021-02-24T12:34:15.987Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:2
2021-02-24T12:34:15.987Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:1
2021-02-24T12:34:15.988Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:16.419Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:48216 latency=2.6201ms
2021-02-24T12:34:16.425Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/service/ingress-gateway-random-sampling-0 from=127.0.0.1:48216 latency=1.5833ms
2021-02-24T12:34:16.962Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:48218 latency=1.4195ms
2021-02-24T12:34:16.966Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/service/ingress-gateway-random-sampling-100 from=127.0.0.1:48218 latency=109.9µs
2021-02-24T12:34:17.432Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:48220 latency=764.4µs
2021-02-24T12:34:17.435Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/service/ingress-gateway-client-sampling-0 from=127.0.0.1:48220 latency=130.1µs
2021-02-24T12:34:17.949Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:48222 latency=698µs
2021-02-24T12:34:17.952Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/service/ingress-gateway-client-sampling-100 from=127.0.0.1:48222 latency=108.7µs
2021-02-24T12:34:18.446Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:48224 latency=638.8µs
2021-02-24T12:34:18.448Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/service/s1-sidecar-proxy from=127.0.0.1:48224 latency=63.7µs
2021-02-24T12:34:18.491Z [WARN] agent: Check socket connection failed: check=service:s2-sidecar-proxy:1 error="dial tcp 127.0.0.1:21001: connect: connection refused"
2021-02-24T12:34:18.491Z [WARN] agent: Check is now critical: check=service:s2-sidecar-proxy:1
2021-02-24T12:34:22.533Z [DEBUG] agent: Check status updated: check=service:s1-sidecar-proxy:1 status=passing
2021-02-24T12:34:22.533Z [DEBUG] agent: Node info in sync
2021-02-24T12:34:22.533Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-100
2021-02-24T12:34:22.533Z [DEBUG] agent: Service in sync: service=s1
2021-02-24T12:34:22.533Z [DEBUG] agent: Service in sync: service=s1-sidecar-proxy
2021-02-24T12:34:22.533Z [DEBUG] agent: Service in sync: service=s2
2021-02-24T12:34:22.533Z [DEBUG] agent: Service in sync: service=s2-sidecar-proxy
2021-02-24T12:34:22.534Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-0
2021-02-24T12:34:22.534Z [DEBUG] agent: Service in sync: service=ingress-gateway-client-sampling-100
2021-02-24T12:34:22.534Z [DEBUG] agent: Service in sync: service=ingress-gateway-random-sampling-0
2021-02-24T12:34:22.534Z [DEBUG] agent: Check in sync: check=service:s1-sidecar-proxy:2
2021-02-24T12:34:22.534Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:1
2021-02-24T12:34:22.535Z [DEBUG] agent: Check in sync: check=service:s2-sidecar-proxy:2
2021-02-24T12:34:22.536Z [INFO] agent: Synced check: check=service:s1-sidecar-proxy:1
Envoy Logs for ingress-gateway-random-sampling-100
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:305] initializing epoch 0 (base id=0, hot restart version=disabled)
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:307] statically linked extensions:
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.dubbo_proxy.route_matchers: default
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.thrift_proxy.protocols: auto, binary, binary/non-strict, compact, twitter
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.upstreams: envoy.filters.connection_pools.http.generic, envoy.filters.connection_pools.http.http, envoy.filters.connection_pools.http.tcp
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.filters.network: envoy.client_ssl_auth, envoy.echo, envoy.ext_authz, envoy.filters.network.client_ssl_auth, envoy.filters.network.direct_response, envoy.filters.network.dubbo_proxy, envoy.filters.network.echo, envoy.filters.network.ext_authz, envoy.filters.network.http_connection_manager, envoy.filters.network.kafka_broker, envoy.filters.network.local_ratelimit, envoy.filters.network.mongo_proxy, envoy.filters.network.mysql_proxy, envoy.filters.network.postgres_proxy, envoy.filters.network.ratelimit, envoy.filters.network.rbac, envoy.filters.network.redis_proxy, envoy.filters.network.rocketmq_proxy, envoy.filters.network.sni_cluster, envoy.filters.network.sni_dynamic_forward_proxy, envoy.filters.network.tcp_proxy, envoy.filters.network.thrift_proxy, envoy.filters.network.zookeeper_proxy, envoy.http_connection_manager, envoy.mongo_proxy, envoy.ratelimit, envoy.redis_proxy, envoy.tcp_proxy
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.guarddog_actions: envoy.watchdog.abort_action, envoy.watchdog.profile_action
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.dubbo_proxy.protocols: dubbo
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.http.cache: envoy.extensions.http.cache.simple
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.retry_host_predicates: envoy.retry_host_predicates.omit_canary_hosts, envoy.retry_host_predicates.omit_host_metadata, envoy.retry_host_predicates.previous_hosts
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.stats_sinks: envoy.dog_statsd, envoy.metrics_service, envoy.stat_sinks.dog_statsd, envoy.stat_sinks.hystrix, envoy.stat_sinks.metrics_service, envoy.stat_sinks.statsd, envoy.statsd
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.transport_sockets.downstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, raw_buffer, tls
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.health_checkers: envoy.health_checkers.redis
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.dubbo_proxy.filters: envoy.filters.dubbo.router
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.dubbo_proxy.serializers: dubbo.hessian2
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.filters.http: envoy.buffer, envoy.cors, envoy.csrf, envoy.ext_authz, envoy.fault, envoy.filters.http.adaptive_concurrency, envoy.filters.http.admission_control, envoy.filters.http.aws_lambda, envoy.filters.http.aws_request_signing, envoy.filters.http.buffer, envoy.filters.http.cache, envoy.filters.http.cdn_loop, envoy.filters.http.compressor, envoy.filters.http.cors, envoy.filters.http.csrf, envoy.filters.http.decompressor, envoy.filters.http.dynamic_forward_proxy, envoy.filters.http.dynamo, envoy.filters.http.ext_authz, envoy.filters.http.fault, envoy.filters.http.grpc_http1_bridge, envoy.filters.http.grpc_http1_reverse_bridge, envoy.filters.http.grpc_json_transcoder, envoy.filters.http.grpc_stats, envoy.filters.http.grpc_web, envoy.filters.http.gzip, envoy.filters.http.header_to_metadata, envoy.filters.http.health_check, envoy.filters.http.ip_tagging, envoy.filters.http.jwt_authn, envoy.filters.http.local_ratelimit, envoy.filters.http.lua, envoy.filters.http.oauth, envoy.filters.http.on_demand, envoy.filters.http.original_src, envoy.filters.http.ratelimit, envoy.filters.http.rbac, envoy.filters.http.router, envoy.filters.http.squash, envoy.filters.http.tap, envoy.grpc_http1_bridge, envoy.grpc_json_transcoder, envoy.grpc_web, envoy.gzip, envoy.health_check, envoy.http_dynamo_filter, envoy.ip_tagging, envoy.local_rate_limit, envoy.lua, envoy.rate_limit, envoy.router, envoy.squash
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.quic_client_codec: quiche
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.thrift_proxy.filters: envoy.filters.thrift.rate_limit, envoy.filters.thrift.router
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.bootstrap: envoy.extensions.network.socket_interface.default_socket_interface
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.tracers: envoy.dynamic.ot, envoy.lightstep, envoy.tracers.datadog, envoy.tracers.dynamic_ot, envoy.tracers.lightstep, envoy.tracers.opencensus, envoy.tracers.xray, envoy.tracers.zipkin, envoy.zipkin
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.resolvers: envoy.ip
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.resource_monitors: envoy.resource_monitors.fixed_heap, envoy.resource_monitors.injected_resource
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.retry_priorities: envoy.retry_priorities.previous_priorities
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.udp_packet_writers: udp_default_writer, udp_gso_batch_writer
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.grpc_credentials: envoy.grpc_credentials.aws_iam, envoy.grpc_credentials.default, envoy.grpc_credentials.file_based_metadata
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.internal_redirect_predicates: envoy.internal_redirect_predicates.allow_listed_routes, envoy.internal_redirect_predicates.previous_routes, envoy.internal_redirect_predicates.safe_cross_scheme
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.clusters: envoy.cluster.eds, envoy.cluster.logical_dns, envoy.cluster.original_dst, envoy.cluster.static, envoy.cluster.strict_dns, envoy.clusters.aggregate, envoy.clusters.dynamic_forward_proxy, envoy.clusters.redis
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.filters.listener: envoy.filters.listener.http_inspector, envoy.filters.listener.original_dst, envoy.filters.listener.original_src, envoy.filters.listener.proxy_protocol, envoy.filters.listener.tls_inspector, envoy.listener.http_inspector, envoy.listener.original_dst, envoy.listener.original_src, envoy.listener.proxy_protocol, envoy.listener.tls_inspector
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.filters.udp_listener: envoy.filters.udp.dns_filter, envoy.filters.udp_listener.udp_proxy
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.transport_sockets.upstream: envoy.transport_sockets.alts, envoy.transport_sockets.quic, envoy.transport_sockets.raw_buffer, envoy.transport_sockets.tap, envoy.transport_sockets.tls, envoy.transport_sockets.upstream_proxy_protocol, raw_buffer, tls
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.udp_listeners: quiche_quic_listener, raw_udp_listener
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.compression.compressor: envoy.compression.gzip.compressor
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.compression.decompressor: envoy.compression.gzip.decompressor
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.access_loggers: envoy.access_loggers.file, envoy.access_loggers.http_grpc, envoy.access_loggers.tcp_grpc, envoy.file_access_log, envoy.http_grpc_access_log, envoy.tcp_grpc_access_log
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.thrift_proxy.transports: auto, framed, header, unframed
[2021-02-24 12:34:22.869][8][info][main] [source/server/server.cc:309] envoy.quic_server_codec: quiche
[2021-02-24 12:34:22.876][8][info][main] [source/server/server.cc:325] HTTP header map info:
[2021-02-24 12:34:22.877][8][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-02-24 12:34:22.877][8][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-02-24 12:34:22.877][8][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-02-24 12:34:22.877][8][warning][runtime] [source/common/runtime/runtime_features.cc:31] Unable to use runtime singleton for feature envoy.http.headermap.lazy_map_min_size
[2021-02-24 12:34:22.877][8][info][main] [source/server/server.cc:328] request header map: 608 bytes: :authority,:method,:path,:protocol,:scheme,accept,accept-encoding,access-control-request-method,authorization,cache-control,cdn-loop,connection,content-encoding,content-length,content-type,expect,grpc-accept-encoding,grpc-timeout,if-match,if-modified-since,if-none-match,if-range,if-unmodified-since,keep-alive,origin,pragma,proxy-connection,referer,te,transfer-encoding,upgrade,user-agent,via,x-client-trace-id,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-downstream-service-cluster,x-envoy-downstream-service-node,x-envoy-expected-rq-timeout-ms,x-envoy-external-address,x-envoy-force-trace,x-envoy-hedge-on-per-try-timeout,x-envoy-internal,x-envoy-ip-tags,x-envoy-max-retries,x-envoy-original-path,x-envoy-original-url,x-envoy-retriable-header-names,x-envoy-retriable-status-codes,x-envoy-retry-grpc-on,x-envoy-retry-on,x-envoy-upstream-alt-stat-name,x-envoy-upstream-rq-per-try-timeout-ms,x-envoy-upstream-rq-timeout-alt-response,x-envoy-upstream-rq-timeout-ms,x-forwarded-client-cert,x-forwarded-for,x-forwarded-proto,x-ot-span-context,x-request-id
[2021-02-24 12:34:22.877][8][info][main] [source/server/server.cc:328] request trailer map: 128 bytes:
[2021-02-24 12:34:22.877][8][info][main] [source/server/server.cc:328] response header map: 424 bytes: :status,access-control-allow-credentials,access-control-allow-headers,access-control-allow-methods,access-control-allow-origin,access-control-expose-headers,access-control-max-age,age,cache-control,connection,content-encoding,content-length,content-type,date,etag,expires,grpc-message,grpc-status,keep-alive,last-modified,location,proxy-connection,server,transfer-encoding,upgrade,vary,via,x-envoy-attempt-count,x-envoy-decorator-operation,x-envoy-degraded,x-envoy-immediate-health-check-fail,x-envoy-ratelimited,x-envoy-upstream-canary,x-envoy-upstream-healthchecked-cluster,x-envoy-upstream-service-time,x-request-id
[2021-02-24 12:34:22.877][8][info][main] [source/server/server.cc:328] response trailer map: 152 bytes: grpc-message,grpc-status
[2021-02-24 12:34:22.880][8][debug][main] [source/server/overload_manager_impl.cc:264] No overload action is configured for envoy.overload_actions.shrink_heap.
[2021-02-24 12:34:22.881][8][debug][main] [source/server/overload_manager_impl.cc:264] No overload action is configured for envoy.overload_actions.stop_accepting_connections.
[2021-02-24 12:34:22.881][8][debug][main] [source/server/overload_manager_impl.cc:264] No overload action is configured for envoy.overload_actions.stop_accepting_connections.
[2021-02-24 12:34:22.881][8][info][main] [source/server/server.cc:448] admin address: 0.0.0.0:20001
[2021-02-24 12:34:22.882][8][info][main] [source/server/server.cc:583] runtime: layers:
- name: base
static_layer:
{}
- name: admin
admin_layer:
{}
[2021-02-24 12:34:22.883][8][info][config] [source/server/configuration_impl.cc:95] loading tracing configuration
[2021-02-24 12:34:22.883][8][info][config] [source/server/configuration_impl.cc:106] validating default server-wide tracing driver: envoy.tracers.zipkin
[2021-02-24 12:34:22.883][8][warning][misc] [source/common/protobuf/utility.cc:294] Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021: envoy.config.trace.v2.ZipkinConfig
[2021-02-24 12:34:22.883][8][info][config] [source/server/configuration_impl.cc:70] loading 0 static secret(s)
[2021-02-24 12:34:22.883][8][info][config] [source/server/configuration_impl.cc:76] loading 2 cluster(s)
[2021-02-24 12:34:22.885][12][debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
[2021-02-24 12:34:22.939][8][debug][upstream] [source/common/upstream/upstream_impl.cc:286] transport socket match, socket default selected for host with address 127.0.0.1:8502
[2021-02-24 12:34:22.992][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster local_agent
[2021-02-24 12:34:22.992][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster zipkin
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/upstream_impl.cc:991] initializing Primary cluster local_agent completed
[2021-02-24 12:34:22.993][8][debug][init] [source/common/init/manager_impl.cc:49] init manager Cluster local_agent contains no targets
[2021-02-24 12:34:22.993][8][debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster local_agent initialized, notifying ClusterImplBase
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster local_agent added 1 removed 0
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:107] cm init: init complete: cluster=local_agent primary=0 secondary=0
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 0
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:79] cm init: adding: cluster=local_agent primary=0 secondary=0
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/upstream_impl.cc:286] transport socket match, socket default selected for host with address 127.0.0.1:9411
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/strict_dns_cluster.cc:146] DNS hosts have changed for 127.0.0.1
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/strict_dns_cluster.cc:167] DNS refresh rate reset for 127.0.0.1, refresh rate 5000 ms
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/upstream_impl.cc:991] initializing Primary cluster zipkin completed
[2021-02-24 12:34:22.993][8][debug][init] [source/common/init/manager_impl.cc:49] init manager Cluster zipkin contains no targets
[2021-02-24 12:34:22.993][8][debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster zipkin initialized, notifying ClusterImplBase
[2021-02-24 12:34:22.993][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster zipkin added 1 removed 0
[2021-02-24 12:34:22.994][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:107] cm init: init complete: cluster=zipkin primary=0 secondary=0
[2021-02-24 12:34:22.994][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 0
[2021-02-24 12:34:22.994][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:79] cm init: adding: cluster=zipkin primary=0 secondary=0
[2021-02-24 12:34:22.994][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 1
[2021-02-24 12:34:22.994][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
[2021-02-24 12:34:22.994][8][debug][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:56] Establishing new gRPC bidi stream for rpc StreamAggregatedResources(stream .envoy.api.v2.DiscoveryRequest) returns (stream .envoy.api.v2.DiscoveryResponse);
[2021-02-24 12:34:22.994][8][debug][router] [source/common/router/router.cc:429] [C0][S15839263386520151811] cluster 'local_agent' match for URL '/envoy.service.discovery.v2.AggregatedDiscoveryService/StreamAggregatedResources'
[2021-02-24 12:34:22.994][8][debug][router] [source/common/router/router.cc:586] [C0][S15839263386520151811] router decoding headers:
':method', 'POST'
':path', '/envoy.service.discovery.v2.AggregatedDiscoveryService/StreamAggregatedResources'
':authority', 'local_agent'
':scheme', 'http'
'te', 'trailers'
'content-type', 'application/grpc'
'x-consul-token', ''
'x-envoy-internal', 'true'
'x-forwarded-for', '172.26.0.2'
[2021-02-24 12:34:22.994][8][debug][pool] [source/common/http/conn_pool_base.cc:71] queueing stream due to no available connections
[2021-02-24 12:34:22.994][8][debug][pool] [source/common/conn_pool/conn_pool_base.cc:104] creating a new connection
[2021-02-24 12:34:22.995][8][debug][client] [source/common/http/codec_client.cc:39] [C0] connecting
[2021-02-24 12:34:22.995][8][debug][connection] [source/common/network/connection_impl.cc:769] [C0] connecting to 127.0.0.1:8502
[2021-02-24 12:34:22.995][8][debug][connection] [source/common/network/connection_impl.cc:785] [C0] connection in progress
[2021-02-24 12:34:23.004][8][debug][http2] [source/common/http/http2/codec_impl.cc:1173] [C0] updating connection-level initial window size to 268435456
[2021-02-24 12:34:23.004][8][info][config] [source/server/configuration_impl.cc:80] loading 0 listener(s)
[2021-02-24 12:34:23.004][8][info][config] [source/server/configuration_impl.cc:121] loading stats sink configuration
[2021-02-24 12:34:23.005][8][debug][init] [source/common/init/manager_impl.cc:24] added target LDS to init manager Server
[2021-02-24 12:34:23.005][8][debug][init] [source/common/init/manager_impl.cc:49] init manager RTDS contains no targets
[2021-02-24 12:34:23.005][8][debug][init] [source/common/init/watcher_impl.cc:14] init manager RTDS initialized, notifying RTDS
[2021-02-24 12:34:23.005][8][info][runtime] [source/common/runtime/runtime_impl.cc:421] RTDS has finished initialization
[2021-02-24 12:34:23.005][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:196] continue initializing secondary clusters
[2021-02-24 12:34:23.005][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 2
[2021-02-24 12:34:23.005][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
[2021-02-24 12:34:23.005][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:151] maybe finish initialize secondary init clusters empty: true
[2021-02-24 12:34:23.005][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:172] maybe finish initialize cds api ready: true
[2021-02-24 12:34:23.005][8][info][upstream] [source/common/upstream/cluster_manager_impl.cc:174] cm init: initializing cds
[2021-02-24 12:34:23.005][8][debug][config] [source/common/config/grpc_mux_impl.cc:70] gRPC mux addWatch for type.googleapis.com/envoy.api.v2.Cluster
[2021-02-24 12:34:23.006][8][warning][main] [source/server/server.cc:565] there is no configured limit to the number of allowed active connections. Set a limit via the runtime key overload.global_downstream_max_connections
[2021-02-24 12:34:23.007][8][info][main] [source/server/server.cc:679] starting main dispatch loop
[2021-02-24 12:34:23.007][8][debug][connection] [source/common/network/connection_impl.cc:625] [C0] connected
[2021-02-24 12:34:23.007][8][debug][client] [source/common/http/codec_client.cc:77] [C0] connected
[2021-02-24 12:34:23.008][8][debug][pool] [source/common/conn_pool/conn_pool_base.cc:205] [C0] attaching to next stream
[2021-02-24 12:34:23.008][8][debug][pool] [source/common/conn_pool/conn_pool_base.cc:126] [C0] creating stream
[2021-02-24 12:34:23.008][8][debug][router] [source/common/router/upstream_request.cc:357] [C0][S15839263386520151811] pool ready
[2021-02-24 12:34:23.009][8][debug][router] [source/common/router/router.cc:1178] [C0][S15839263386520151811] upstream headers complete: end_stream=false
[2021-02-24 12:34:23.012][8][debug][http] [source/common/http/async_client_impl.cc:100] async http request response headers (end_stream=false):
':status', '200'
'content-type', 'application/grpc'
[2021-02-24 12:34:23.012][8][debug][config] [source/common/config/grpc_mux_impl.cc:139] Received gRPC message for type.googleapis.com/envoy.api.v2.Cluster at version 00000001
[2021-02-24 12:34:23.012][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.api.v2.Cluster (previous count 0)
[2021-02-24 12:34:23.012][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment (previous count 0)
[2021-02-24 12:34:23.012][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment (previous count 0)
[2021-02-24 12:34:23.012][8][info][upstream] [source/common/upstream/cds_api_impl.cc:64] cds: add 0 cluster(s), remove 2 cluster(s)
[2021-02-24 12:34:23.012][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:127] maybe finish initialize state: 4
[2021-02-24 12:34:23.012][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:136] maybe finish initialize primary init clusters empty: true
[2021-02-24 12:34:23.012][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:151] maybe finish initialize secondary init clusters empty: true
[2021-02-24 12:34:23.012][8][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:172] maybe finish initialize cds api ready: true
[2021-02-24 12:34:23.013][8][info][upstream] [source/common/upstream/cluster_manager_impl.cc:178] cm init: all clusters initialized
[2021-02-24 12:34:23.013][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.config.route.v3.RouteConfiguration (previous count 0)
[2021-02-24 12:34:23.013][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration (previous count 0)
[2021-02-24 12:34:23.013][8][info][main] [source/server/server.cc:660] all clusters initialized. initializing init manager
[2021-02-24 12:34:23.013][8][debug][init] [source/common/init/manager_impl.cc:53] init manager Server initializing
[2021-02-24 12:34:23.013][8][debug][init] [source/common/init/target_impl.cc:15] init manager Server initializing target LDS
[2021-02-24 12:34:23.013][8][debug][config] [source/common/config/grpc_mux_impl.cc:70] gRPC mux addWatch for type.googleapis.com/envoy.api.v2.Listener
[2021-02-24 12:34:23.013][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.config.route.v3.RouteConfiguration (previous count 1)
[2021-02-24 12:34:23.014][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration (previous count 1)
[2021-02-24 12:34:23.014][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment (previous count 1)
[2021-02-24 12:34:23.014][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment (previous count 1)
[2021-02-24 12:34:23.014][8][debug][config] [source/common/config/grpc_subscription_impl.cc:73] gRPC config for type.googleapis.com/envoy.api.v2.Cluster accepted with 0 resources with version 00000001
[2021-02-24 12:34:23.014][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.api.v2.Cluster (previous count 1)
[2021-02-24 12:34:23.015][8][debug][config] [source/common/config/grpc_mux_impl.cc:139] Received gRPC message for type.googleapis.com/envoy.api.v2.Listener at version 00000001
[2021-02-24 12:34:23.015][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.api.v2.Listener (previous count 0)
[2021-02-24 12:34:23.015][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.config.route.v3.RouteConfiguration (previous count 0)
[2021-02-24 12:34:23.015][8][debug][config] [source/common/config/grpc_mux_impl.cc:103] Pausing discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration (previous count 0)
[2021-02-24 12:34:23.015][8][debug][init] [source/common/init/watcher_impl.cc:14] target LDS initialized, notifying init manager Server
[2021-02-24 12:34:23.015][8][debug][init] [source/common/init/watcher_impl.cc:14] init manager Server initialized, notifying RunHelper
[2021-02-24 12:34:23.015][8][info][config] [source/server/listener_manager_impl.cc:888] all dependencies initialized. starting workers
[2021-02-24 12:34:23.015][8][debug][config] [source/server/listener_manager_impl.cc:899] starting worker 0
[2021-02-24 12:34:23.015][8][debug][config] [source/server/listener_manager_impl.cc:899] starting worker 1
[2021-02-24 12:34:23.015][15][debug][main] [source/server/worker_impl.cc:124] worker entering dispatch loop
[2021-02-24 12:34:23.016][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.config.route.v3.RouteConfiguration (previous count 1)
[2021-02-24 12:34:23.016][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.api.v2.RouteConfiguration (previous count 1)
[2021-02-24 12:34:23.016][8][debug][config] [source/common/config/grpc_subscription_impl.cc:73] gRPC config for type.googleapis.com/envoy.api.v2.Listener accepted with 0 resources with version 00000001
[2021-02-24 12:34:23.016][8][debug][config] [source/common/config/grpc_mux_impl.cc:110] Resuming discovery requests for type.googleapis.com/envoy.api.v2.Listener (previous count 1)
[2021-02-24 12:34:23.016][15][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster local_agent
[2021-02-24 12:34:23.017][16][debug][main] [source/server/worker_impl.cc:124] worker entering dispatch loop
[2021-02-24 12:34:23.017][16][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster local_agent
[2021-02-24 12:34:23.017][16][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster zipkin
[2021-02-24 12:34:23.017][16][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster local_agent added 1 removed 0
[2021-02-24 12:34:23.017][16][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster zipkin added 1 removed 0
[2021-02-24 12:34:23.017][15][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1044] adding TLS initial cluster zipkin
[2021-02-24 12:34:23.018][17][debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
[2021-02-24 12:34:23.018][15][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster local_agent added 1 removed 0
[2021-02-24 12:34:23.018][18][debug][grpc] [source/common/grpc/google_async_client_impl.cc:49] completionThread running
[2021-02-24 12:34:23.018][15][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:1198] membership update for TLS cluster zipkin added 1 removed 0
[2021-02-24 12:34:23.429][8][debug][conn_handler] [source/server/connection_handler_impl.cc:476] [C1] new connection
[2021-02-24 12:34:23.430][8][debug][http] [source/common/http/conn_manager_impl.cc:225] [C1] new stream
[2021-02-24 12:34:23.430][8][debug][http] [source/common/http/conn_manager_impl.cc:837] [C1][S6029727585338401189] request headers complete (end_stream=true):
':authority', 'localhost:20001'
':path', '/stats'
':method', 'GET'
'user-agent', 'curl/7.61.1'
'accept', '*/*'
[2021-02-24 12:34:23.430][8][debug][http] [source/common/http/filter_manager.cc:721] [C1][S6029727585338401189] request end stream
[2021-02-24 12:34:23.430][8][debug][admin] [source/server/admin/admin_filter.cc:66] [C1][S6029727585338401189] request complete: path: /stats
[2021-02-24 12:34:23.431][8][debug][http] [source/common/http/conn_manager_impl.cc:1435] [C1][S6029727585338401189] encoding headers via codec (end_stream=false):
':status', '200'
'content-type', 'text/plain; charset=UTF-8'
'cache-control', 'no-cache, max-age=0'
'x-content-type-options', 'nosniff'
'date', 'Wed, 24 Feb 2021 12:34:23 GMT'
'server', 'envoy'
[2021-02-24 12:34:23.431][8][debug][connection] [source/common/network/connection_impl.cc:593] [C1] remote close
[2021-02-24 12:34:23.431][8][debug][connection] [source/common/network/connection_impl.cc:203] [C1] closing socket: 0
[2021-02-24 12:34:23.432][8][debug][conn_handler] [source/server/connection_handler_impl.cc:152] [C1] adding to cleanup list
Hi @freddygv, just wondering if this PR is still on your radar? Would love to be able to configure tracing starting at the ingress gateway, since without that we're essentially missing two of the three hops, [LB->]IG->SP, in the mesh before our traces even begin.
Hello. Sorry this was left to linger so long. I'll try to get a look at the integration test issue this week.
@eculver any insight on the status of this PR?
Context: We're working on a Rust tracing blog post and have in mind a post on distributed tracing with Jaeger + Consul (in a Rust context; we actually plan to break the initial post's server and client out of one file into separate containers/VMs).
Not too dissimilar from this post on how Istio uses Jaeger and Zipkin
Thanks for following up @bbros-dev. I've been discussing a bit with @markan and other core team members about getting this prioritized and I'll just say that it's definitely on our radar and I will be spending some time in the next few weeks to try to get this unblocked. I know that doesn't solve anything for now, but you do have the attention of the maintainers and we're all excited to get this over the line.
My plan is to rebase the change and then write the integration tests. The challenge is that our integration tests for this code have changed a bit since this was created and are rather complex to set up. I'm eager to help though and will keep you posted.
Could I push changes to your remote? I haven't tried yet, but I think you have to have it turned on in your fork's settings. If not, I can work elsewhere but it will streamline things a bit.
Hi @eculver, thanks for prioritising this. This is my PR and you should be able to push to my fork. Just let me know if you can't and I'll give you the access explicitly.
Thanks @eculver, we'll hold off until this lands. @Gufran is the guru on this PR. Many thanks for your efforts @Gufran!
Hey friends, sorry for the delayed response here. I originally picked this up thinking that the work would primarily be about supporting our integration test suite, but it quickly grew in scope to cover the nuances of how tracing behaves in Consul in the context of ingress gateways. For example, outside of this change, requests through an ingress gateway will not be traced unless the `x-client-trace-id: 1` header is set (see #6645). It's nuances like this that we need to be aware of when changing the way tracing works with ingress gateways, so I need to make sure we have everything straight before we can enable this support.
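To make that nuance concrete, a request like the following is what forces a trace through the gateway today, since Envoy's client-sampling decision only applies when it sees that header (the host and port here are placeholders, not values from this PR):

```bash
# x-client-trace-id triggers Envoy's client-sampling path; without it, the
# trace decision falls to random sampling, which is the knob this PR exposes.
# localhost:9990 and the Host value are placeholders for a real gateway listener.
curl -s -H 'Host: s1.ingress.consul' -H 'x-client-trace-id: 1' localhost:9990/
```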
I'm still planning to get this PR updated to work with our recent integration test suite changes, but we will need to follow up with regards to documentation to make sure everything lines up. I hope this makes sense. Please feel free to ask if it doesn't.
Huzzah! I was able to rebase this and push to the existing remote (thanks @Gufran). I'm still working on fixing some tests but wanted to report some progress.
Is this feature still coming? that would put consul on next level at the company I work for.
It is! I'm so sorry for the delay, but we should have it merged soon. I'm not sure if it will make it into the next major release, but it will at least be in Consul 1.15.
@eculver Thank you for all the effort.
All right, I have updated all the integration tests to be working now, but I do have a few questions on whether they are actually correct. If we can get some confirmation on whether this is the correct behavior, I think we can get it merged.
It's been a while since I last dove into the Consul codebase, but I can go through the change-set this weekend if it helps.
Thanks @Gufran! The only question I had was around the expected behavior that I mentioned above which you answered. I am now just working on getting final sign off, so I think we're good! We'll update here as we go.
Update: I think this is in a pretty good state. We will wait for feedback from @freddygv, but otherwise I think this is good to go.
@eculver is attempting to deploy a commit to the HashiCorp Team on Vercel.
A member of the Team first needs to authorize it.
Hi,
I found this thread while digging for more information about options for triggering tracing on Consul's ingress gateway - or, more specifically, the "api-gateway" now, since "ingress-gateway" has been deprecated.
"Api-gateway" seems to me behaving the same as "ingress-gateway" in consul - with RandomSampling set by Consul to 0 (opposite to Envoy's default, from what I've seen) - effectively disabling traces from ingress, unless requests are coming with "x-client-race-id" header set. Unfortunately we're not currently in a good position to get the required header provided for traffic entering our Consul's "api-gateways", in environment where I use Consul Connect.
Does anyone know - especially on the HashiCorp/Consul developers' end - whether there are finally any plans to make that RandomSampling configurable in Consul's gateway configuration through a dedicated option (without needing custom proxy-defaults, which, as a somewhat unwanted side effect, would also affect sidecar proxies)? I see this PR is about three years old now and still stuck in the "Open" state - with the last entries from around December 2022 - so I'm wondering whether it has been completely abandoned for lack of interest in making that property configurable, or whether there is a chance after all in newer versions of Consul with the "api-gateway". I guess (and hope) I'm not the only one interested in making tracing a bit more configurable in Consul Connect after all these years.
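For reference, the proxy-defaults route I'd rather avoid looks roughly like this. `envoy_tracing_json` is a documented Consul escape hatch, but the entry is global, so it configures the tracing driver for every sidecar too, and it still gives no dedicated RandomSampling knob for gateways; the Zipkin cluster and endpoint values below are placeholders:

```bash
# Global escape hatch: applies to all proxies, not just gateways.
cat > proxy-defaults.hcl <<'EOF'
Kind = "proxy-defaults"
Name = "global"
Config {
  envoy_tracing_json = <<-JSON
    {
      "http": {
        "name": "envoy.tracers.zipkin",
        "typedConfig": {
          "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig",
          "collector_cluster": "zipkin",
          "collector_endpoint": "/api/v2/spans",
          "collector_endpoint_version": "HTTP_JSON"
        }
      }
    }
  JSON
}
EOF
consul config write proxy-defaults.hcl
```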
Or is building my own customized Consul binary, with RandomSampling set to 100 to allow all traces everywhere as we need, my only relatively easy option for now?
Out of curiosity, could someone also tell me what the reason was for this choice in Consul Connect with Envoy as the proxy - overriding Envoy's default of 100% sampling with Consul's 0?
Kind regards
This pull request has been automatically flagged for inactivity because it has not been acted upon in the last 60 days. It will be closed if no new activity occurs in the next 30 days. Please feel free to re-open to resurrect the change if you feel this has happened by mistake. Thank you for your contributions.
Closing due to inactivity. If you feel this was a mistake or you wish to re-open at any time in the future, please leave a comment and it will be re-surfaced for the maintainers to review.