
Linkerd is not proxying the postgres pod

Open yogenderPalChandra opened this issue 8 months ago • 8 comments

What is the issue?

Hello, I have a postgres pod managed by a StatefulSet:

kubectl apply -f <filename below> -n dblin

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: new-postgres
  namespace: dblin
spec:
  selector:
    matchLabels:
      app: new-postgres
  serviceName: new-mydb-service
  replicas: 1
  template:
    metadata:
      labels:
        app: new-postgres
      annotations:
        linkerd.io/inject: enabled
        config.linkerd.io/opaque-ports: "5432"
    spec:
      containers:
      - name: postgres
        image: postgres:15
        ports:
        - containerPort: 5432
          name: postgres
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: mydb
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        emptyDir: {}
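As a quick sanity check that the injection actually happened (a sketch, using the resource names above):

# list the containers in the pod; linkerd-proxy should appear alongside postgres
kubectl get pod new-postgres-0 -n dblin -o jsonpath='{.spec.containers[*].name}{"\n"}'
# dump the pod annotations to confirm the opaque-ports setting was kept
kubectl get pod new-postgres-0 -n dblin -o jsonpath='{.metadata.annotations}{"\n"}'
# verify the data plane proxies in the namespace are healthy
linkerd check --proxy --namespace dblin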

Then I have a dummy debug pod:

kubectl apply -f <filename below> -n dblin

apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: dblin
  labels:
    app: debug
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/opaque-ports: "5432"
spec:
  containers:
  - name: psql
    image: postgres:15
    command: ["sleep", "infinity"]
    env:
    - name: PGUSER
      value: postgres
    - name: PGPASSWORD
      value: postgres
    - name: PGDATABASE
      value: mydb


Then I query the new-postgres pod from this debug pod:

# Create a table in the pod with IP 10.244.0.64, which is the new-postgres-0 pod

kubectl exec -it debug -n dblin -c psql -- psql -h 10.244.0.64 -p 5432 -U postgres -d mydb -c "CREATE TABLE sensors (sensor_id SERIAL PRIMARY KEY, timestamp TIMESTAMP, value FLOAT);"
# Insert into the table
kubectl exec -it debug -n dblin -c psql -- psql -h 10.244.0.64 -p 5432 -U postgres -d mydb -c "INSERT INTO sensors (timestamp, value) VALUES (NOW(), 42.0);"

# More inserts, this time via the service name instead of the pod IP
kubectl exec -it debug -n dblin -c psql -- psql -h new-mydb-service -p 5432 -U postgres -d mydb -c "INSERT INTO sensors (timestamp, value) VALUES (NOW(), 43.5);"

# Query what has been inserted
kubectl exec -it debug -n dblin -c psql -- psql -h 10.244.0.64 -p 5432 -U postgres -d mydb -c "SELECT * FROM sensors;"

#output:
 sensor_id |         timestamp          | value 
-----------+----------------------------+-------
         1 | 2025-04-12 16:21:07.036654 |    42
         2 | 2025-04-12 16:29:35.457788 |  43.5
         3 | 2025-04-12 16:29:41.444684 |  43.4
         4 | 2025-04-12 16:29:44.123223 |  43.5
         5 | 2025-04-12 16:29:47.682535 |  43.7
         6 | 2025-04-12 16:29:50.53263  |    43
         7 | 2025-04-12 16:29:53.70951  |    45
         8 | 2025-04-12 16:29:56.069865 |  45.6
(8 rows)
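For reference, the path that goes through new-mydb-service can be sanity-checked from the debug pod before running the inserts (a sketch assuming the manifests above):

# resolve the ClusterIP service name from inside the debug pod
# (getent ships in the Debian-based postgres:15 image)
kubectl exec -it debug -n dblin -c psql -- getent hosts new-mydb-service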

Now this traffic must have been proxied via the linkerd-proxy sidecar container in the new-postgres-0 pod, but there are no logs:

kubectl logs new-postgres-0 -n dblin -c linkerd-proxy
[     0.001305s]  INFO ThreadId(01) linkerd2_proxy: release 2.292.0 (6426c38) by linkerd on 2025-04-09T19:44:34Z
[     0.002631s]  INFO ThreadId(01) linkerd2_proxy::rt: Using single-threaded proxy runtime
[     0.003096s]  INFO ThreadId(01) linkerd2_proxy: Admin interface on 0.0.0.0:4191
[     0.003101s]  INFO ThreadId(01) linkerd2_proxy: Inbound interface on 0.0.0.0:4143
[     0.003102s]  INFO ThreadId(01) linkerd2_proxy: Outbound interface on 127.0.0.1:4140
[     0.003102s]  INFO ThreadId(01) linkerd2_proxy: Tap interface on 0.0.0.0:4190
[     0.003103s]  INFO ThreadId(01) linkerd2_proxy: SNI is default.dblin.serviceaccount.identity.linkerd.cluster.local
[     0.003104s]  INFO ThreadId(01) linkerd2_proxy: Local identity is default.dblin.serviceaccount.identity.linkerd.cluster.local
[     0.003105s]  INFO ThreadId(01) linkerd2_proxy: Destinations resolved via linkerd-dst-headless.linkerd.svc.cluster.local:8086 (linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local)
[     0.003675s]  INFO ThreadId(01) dst:controller{addr=linkerd-dst-headless.linkerd.svc.cluster.local:8086}: linkerd_pool_p2c: Adding endpoint addr=10.244.0.18:8086
[     0.003736s]  INFO ThreadId(02) identity:identity{server.addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}:controller{addr=linkerd-identity-headless.linkerd.svc.cluster.local:8080}: linkerd_pool_p2c: Adding endpoint addr=10.244.0.17:8080
[     0.003762s]  INFO ThreadId(01) policy:controller{addr=linkerd-policy.linkerd.svc.cluster.local:8090}: linkerd_pool_p2c: Adding endpoint addr=10.244.0.18:8090
[     0.010733s]  INFO ThreadId(02) daemon:identity: linkerd_app: Certified identity id=default.dblin.serviceaccount.identity.linkerd.cluster.local

Could you please take a look?

How can it be reproduced?

I have provided the code to reproduce it above.

Logs, error output, etc

Logs are also given above.

output of linkerd check -o short

linkerd check -o short
Status check results are √

Environment

kind v0.27.0 go1.23.6 linux/amd64 (local kind installation)
linkerd version: Client version: edge-25.4.2, Server version: edge-25.4.2

Possible solution

No response

Additional context

No response

Would you like to work on fixing this bug?

None

yogenderPalChandra avatar Apr 12 '25 16:04 yogenderPalChandra

hi @yogenderPalChandra, thanks for filing an issue, and thank you for including detailed information about how to reproduce what you are seeing.

to clarify, i believe that you are asking about why you don't see any logs. you might want to review the documentation regarding the proxy log level:

Emitting logs is an expensive operation for a network proxy, and by default, the Linkerd data plane proxies are configured to only log exceptional events. However, sometimes it is useful to increase the verbosity of proxy logs to assist with diagnosing proxy behavior.

it sounds like you are interested in seeing more detailed logs that include events for each connection. i might, following the syntax described here, set the config.linkerd.io/proxy-log-level annotation to a more verbose log level such as debug.
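for example (a sketch based on the manifests above; the annotation is read at injection time, so the pod has to be re-created after adding it):

  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/opaque-ports: "5432"
    config.linkerd.io/proxy-log-level: warn,linkerd=debug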

cratelyn avatar Apr 14 '25 14:04 cratelyn

i've removed the bug label because i do not believe that this is a bug with the proxy.

cratelyn avatar Apr 14 '25 14:04 cratelyn

Hello @cratelyn, not really. I am not interested in the logs. The code I provided is to reproduce the problem. The real issue:

# No postgreSQL in the edges:
linkerd viz edges deploy -n dblin
SRC          DST         SRC_NS        DST_NS   SECURED          
prometheus   flask-app   linkerd-viz   dblin    √  

I have a front end managed by a Deployment querying the postgreSQL managed by the StatefulSet. I want to track the traffic from the front end, e.g. RPS, number of failed requests, etc. But since the traffic is somehow bypassing the proxy, it is not tracked. Meaning, traffic from the front end hits the postgres pod but bypasses the proxy container. To debug and simulate this, I removed the front end and started querying postgres via the debug pod, which you see in the YAML definition above. Please see the images as well. Image 1: all pods are meshed. Image 2: the postgres-0 pod is meshed but shows no connectivity from the front end.

[Image 1: all pods in the dblin namespace are meshed]

[Image 2: postgres-0 is meshed but shows no connectivity from the front end]
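For completeness, these are the stat views I would also expect to reflect that traffic (a sketch; resource names as above):

linkerd viz stat sts -n dblin
linkerd viz stat deploy -n dblin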

yogenderPalChandra avatar Apr 14 '25 15:04 yogenderPalChandra

i used slow-cooker and terminus to establish a server and client, using these yaml definitions:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slow-cooker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: slow-cooker
  template:
    metadata:
      labels:
        app: slow-cooker
    spec:
      containers:
      - name: slow-cooker
        image: buoyantio/slow_cooker:1.3.0
        command:
        - "/bin/sh"
        args:
        - "-c"
        - |
          sleep 15 # wait for pods to start
          /slow_cooker/slow_cooker -metric-addr 0.0.0.0:9999 http://___TERMINUS_POD_IP___:8080
        ports:
        - containerPort: 9999
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: terminus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: terminus
  template:
    metadata:
      labels:
        app: terminus
    spec:
      containers:
      - name: terminus
        image: buoyantio/bb:v0.0.6
        args:
        - terminus
        - "--h1-server-port=8080"
        - "--response-text=pong"
        ports:
        - containerPort: 8080

i used bin/linkerd inject --opaque-ports 8080 to run this with and without opaque ports. using the instructions linked above, i set the proxy's log level to include debug logs. this shows me logs indicating that traffic is going through the proxy.
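for reference, the inject step looks roughly like this (the file name is assumed):

# pipe the injected manifests straight to kubectl
linkerd inject --opaque-ports 8080 slow-cooker-terminus.yaml | kubectl apply -f -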

i additionally confirmed this by examining the proxy metrics that track tcp bytes read in the server deployment. with and without an opaque port configured, this metric also indicates that traffic is going through the proxy.
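the byte counters i looked at can be pulled with the diagnostics command (a sketch; tcp_read_bytes_total counts bytes the proxy has read from peers):

linkerd diagnostics proxy-metrics deploy/terminus | grep tcp_read_bytes_total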

you should be able to see the same metrics just below where you cropped your dashboard images, above. i observed these metrics when i ran the dashboard myself.

it does seem like the viz dashboard isn't showing the edges properly. the good news, though, is that this is an issue with the viz extension, rather than an issue with traffic no longer being routed through the proxy.

[Image: dashboard showing the metrics described above]

cratelyn avatar Apr 14 '25 18:04 cratelyn

@yogenderPalChandra Can you include the output of linkerd diagnostics proxy-metrics ... | grep 5432 against the client and server pods? The metrics dumps will more reliably indicate usage than the CLI/UI. Alternatively, the pod annotation config.linkerd.io/proxy-log-level: linkerd=debug,info will give a more fulsome view of the proxy's behavior.

olix0r avatar Apr 15 '25 15:04 olix0r

Hello @olix0r, it seems traffic does reach the proxy on the postgres pod, but why is it not shown in the edges:

linkerd diagnostics proxy-metrics pod/postgres-0 -n dblin | grep 5432
inbound_tcp_authz_allow_total{target_addr="10.244.0.11:5432",target_ip="10.244.0.11",target_port="5432",srv_group="",srv_kind="default",srv_name="all-unauthenticated",srv_port="5432",authz_group="",authz_kind="default",authz_name="all-unauthenticated",tls="true",client_id="default.dblin.serviceaccount.identity.linkerd.cluster.local"} 3
inbound_tcp_transport_header_connections_total{session_protocol="",target_port="5432",target_name="",client_id="default.dblin.serviceaccount.identity.linkerd.cluster.local"} 3


linkerd diagnostics proxy-metrics pod/flask-app-d76d9dc58-d66cp -n dblin | grep 5432
outbound_tcp_protocol_connections_total{protocol="opaq",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name=""} 1
outbound_tcp_balancer_endpoints{endpoint_state="ready",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_endpoints{endpoint_state="pending",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_p2c_endpoints{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_p2c_updates_total{op="Remove",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_p2c_updates_total{op="Reset",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_p2c_updates_total{op="Add",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_queue_length{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_queue_requests_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_sum{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0.002018751
outbound_tcp_balancer_queue_latency_seconds_count{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_bucket{le="0.0005",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_queue_latency_seconds_bucket{le="0.005",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_bucket{le="0.05",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_bucket{le="0.5",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_bucket{le="1.0",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_bucket{le="3.0",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_latency_seconds_bucket{le="+Inf",parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_gate_open_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1
outbound_tcp_balancer_queue_gate_shut_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0
outbound_tcp_balancer_queue_gate_open_time_seconds{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 1744790849.3807855
outbound_tcp_balancer_queue_gate_shut_time_seconds{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 0.0
outbound_tcp_balancer_queue_gate_timeout_seconds{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",backend_group="core",backend_kind="Service",backend_namespace="dblin",backend_name="mydb-service",backend_port="5432",backend_section_name="",logical="mydb-service.dblin.svc.cluster.local:5432",concrete="mydb-service.dblin.svc.cluster.local:5432"} 3.0
outbound_tcp_route_open_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432"} 1
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432",error=""} 1
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432",error="unexpected"} 0
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432",error="invalid_backend"} 0
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432",error="forbidden"} 0
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432",error="invalid_policy"} 0

@cratelyn here is the lower portion of the image, still not showing postgres in the edges:

[Image: lower portion of the dashboard; no postgres edge shown]

The problem still remains: while HTTP traffic is tracked, TCP traffic is somehow not. Thank you,

yogenderPalChandra avatar Apr 16 '25 17:04 yogenderPalChandra

The edges view only includes currently-open connections. Given that the connection was open and closed, it won't be included in the viz tooling:

outbound_tcp_route_open_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432"} 1
outbound_tcp_route_close_total{parent_group="core",parent_kind="Service",parent_namespace="dblin",parent_name="mydb-service",parent_port="5432",parent_section_name="",route_group="",route_kind="default",route_namespace="",route_name="opaq",target_ip="",target_port="5432",error=""} 1

All of the raw data should be accessible in prometheus, however.
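For example (a sketch, assuming the default linkerd-viz Prometheus service and the TCP series the proxy exports):

# port-forward the viz Prometheus and query the raw TCP series for the dblin namespace
kubectl -n linkerd-viz port-forward svc/prometheus 9090:9090 &
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(tcp_open_total{namespace="dblin"}) by (pod, direction)'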

olix0r avatar Apr 16 '25 17:04 olix0r

Hello, thanks for the direction. Makes sense. One more question:

I have created a pod:

kubectl apply -f <filename.yaml>

apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: dblin
  labels:
    app: debug
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/opaque-ports: "5432"
spec:
  containers:
  - name: psql
    image: postgres:15
    command: ["sleep", "infinity"]
    env:
    - name: PGUSER
      value: postgres
    - name: PGPASSWORD
      value: postgres
    - name: PGDATABASE
      value: mydb

kubectl exec -it debug -n dblin -c psql -- psql -h mydb-service -p 5432 -U postgres -d mydb
SELECT * FROM sensors;

This way the connection remains open, and I now see the edges from the debug pod to the postgres-0 pod:

linkerd viz edges  po  -n dblin
SRC                           DST                         SRC_NS        DST_NS   SECURED          
debug                         postgres-0                  dblin         dblin    √  

But I still can't see any traffic from the debug pod to the postgres-0 pod, as the output of linkerd viz top pod/postgres-0 --namespace dblin is still empty.

could you please suggest why? Thank you

yogenderPalChandra avatar Apr 26 '25 09:04 yogenderPalChandra

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Jul 25 '25 16:07 stale[bot]