xk6-disruptor
kubectl port forwarding connection reset when injecting faults
#231 introduced the capability of intercepting traffic sent to a pod by means of kubectl port-forwarding.
However, in practice, if this traffic is disrupted by a fault injection, the port forwarding is terminated with this error:
```
E0714 11:50:18.374294 63287 portforward.go:407] an error occurred forwarding 38000 -> 80: error forwarding port 80 to pod b426643ce3e23e3452ed12a52788b8ad4e1ea9644782da4d87b5a21e13adc9d6, uid : failed to execute portforward in network namespace "/var/run/netns/cni-b257d047-5088-04df-36bd-2a17e4aac7a4": read tcp4 127.0.0.1:60908->127.0.0.1:80: read: connection reset by peer
E0714 11:50:18.376108 63287 portforward.go:233] lost connection to pod
```
This is caused by the iptables rule the agent uses to force clients to reconnect, so that the traffic redirection rule takes effect. Without this connection reset, connections established before the redirection rule is inserted wouldn't be affected by the fault injection.
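To illustrate the mechanism, the agent's behavior is roughly equivalent to rules along these lines (a hypothetical sketch, not the agent's actual rules; the port numbers and the `tcp-reset` approach are assumptions for illustration):

```sh
# Redirect inbound traffic on the target port (80 as an example) to the
# disruptor proxy (8080 is a placeholder port).
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

# Reset connections established before the redirect, forcing clients to
# reconnect through the proxy. This reset is what kills the port-forward.
iptables -A INPUT -p tcp --dport 80 -m state --state ESTABLISHED \
  -j REJECT --reject-with tcp-reset
```

The redirect only applies to new connections, which is why the reset is needed at all: already-established flows would otherwise bypass the proxy entirely.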
This is a known kubectl issue which unfortunately has not been fixed, despite this PR: https://github.com/kubernetes/kubernetes/pull/117493.
A workaround to this problem is to ensure the test doesn't make any request until the fault injection is in place.
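In a k6 test this sequencing can be expressed with scenarios: run the fault injection in one scenario and delay the load scenario with `startTime` so no request is made before the redirection is in place. A rough sketch (this is a k6 script, runnable only under k6 with xk6-disruptor built in; the selector labels, ports, and timings are made-up assumptions):

```javascript
import http from 'k6/http';
import { PodDisruptor } from 'k6/x/disruptor';

export const options = {
  scenarios: {
    // Inject the faults first...
    disrupt: {
      executor: 'shared-iterations',
      iterations: 1,
      vus: 1,
      exec: 'disrupt',
    },
    // ...and only start sending requests once the redirection is in place.
    load: {
      executor: 'constant-vus',
      vus: 1,
      duration: '20s',
      startTime: '10s', // leave time for the fault injection to take effect
      exec: 'load',
    },
  },
};

export function disrupt() {
  // Hypothetical selector: adjust namespace and labels to the target pods.
  const disruptor = new PodDisruptor({
    namespace: 'default',
    select: { labels: { app: 'httpbin' } },
  });
  disruptor.injectHTTPFaults({ errorRate: 0.1, errorCode: 500 }, '30s');
}

export function load() {
  // Goes through the kubectl port-forward (38000, as in the log above).
  http.get('http://localhost:38000/');
}
```

Because the first request happens after the agent has already reset stale connections and installed its redirect, the port-forward tunnel never carries a connection that gets reset.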
This is a pity, as personally I wouldn't feel comfortable recommending port-forward in guides if it is this brittle. I'll try to reproduce this on my end to see if a miraculous idea pops up somehow, but my current understanding is that we might need to stick with NodePort/LoadBalancer services for now :(