Verify Service Map for connectivity check
I'm trying hubble-ui 0.7.5 with connectivity-check, and here is how it looks on the service map:
I'm looking for an expert opinion on whether this looks correct, and if not, what's missing.
The intended destination for most of these boxes is encoded in the name of the source pod: `pod-to-a` communicates with `echo-a`, `pod-to-external-fqdn` talks to an FQDN outside the cluster, etc.
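For context, the connectivity-check clients typically express their destination as a probe in the pod spec. A minimal sketch of such a client (the image, probe wiring, and labels here are illustrative assumptions, not the exact upstream manifest):

```yaml
# Hypothetical sketch of a connectivity-check client pod; the real
# upstream manifest differs in details (image, probes, labels).
apiVersion: v1
kind: Pod
metadata:
  name: pod-to-a            # the destination (echo-a) is encoded in the name
spec:
  containers:
  - name: pod-to-a
    image: curlimages/curl  # assumption: any image with curl works
    command: ["sleep", "infinite"]
    livenessProbe:
      exec:
        # probe the echo-a service by name, exercising DNS + pod-to-pod connectivity
        command: ["curl", "-sS", "--fail", "http://echo-a:8080/public"]
```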
Unexpected observations:
- `echo-b` is rendered outside the namespace. `echo-b` is a regular pod in the namespace.
- Some pods like `pod-to-a` appear to be reaching out to the world, but they should only reach out to the `echo-a` pod via name. Maybe this is a race condition related to deployment? Does this persist over time?
- I was at first confused by the lower line coming out of `pod-to-external-1111`, but after closer inspection I realise that is actually the line coming from `pod-to-b-multi-node-cluster`... so it's fine, just odd grid positioning.
> `echo-b` is rendered outside the namespace. `echo-b` is a regular pod in the namespace.
The reason for that is likely this:
```yaml
ports:
- containerPort: 8080
  hostPort: 40000
```
This means there is connectivity to a host IP that ends up in `echo-b`. The rendering is still incorrect, though. It's definitely a corner case, as almost nobody uses `hostPort`; the connectivity check uses it to test pod-to-host connectivity.
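To illustrate the effect (a sketch, not the exact manifest): with `hostPort` set, a client can reach `echo-b` through the node's IP, so the observed flow's destination is the host rather than the pod. The `HOST_IP` wiring below is an illustrative assumption:

```yaml
# Sketch of a pod-to-host-style check hitting echo-b via the node's hostPort.
# Traffic goes to <node IP>:40000, which is DNATed to echo-b:8080, so the
# flow's destination identity is the host, not the echo-b pod.
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
livenessProbe:
  exec:
    command: ["sh", "-c", "curl -sS --fail http://$HOST_IP:40000/public"]
```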
> Some pods like `pod-to-a` appear to be reaching out to the world, but they should only reach out to the `echo-a` pod via name. Maybe this is a race condition related to deployment? Does this persist over time?
I'm assuming this is traffic to the k8s worker nodes. Depending on whether remote-node identities are enabled, it may show up as world.
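If that's the cause, enabling remote-node identities should relabel those flows as `remote-node` instead of `world`. A sketch of the relevant Cilium setting (assuming the standard `cilium-config` ConfigMap in `kube-system`):

```yaml
# cilium-config ConfigMap fragment; with this enabled, traffic to other
# worker nodes is classified as reserved:remote-node rather than world
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  enable-remote-node-identity: "true"
```

Checking whether the "world" edges disappear after toggling this (and restarting the agents) would confirm the theory.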