add examples on how to use `pumba netem` for link impairments
ADD: the decision is to document the ways https://github.com/alexei-led/pumba can be used in conjunction with clab, instead of adding a bespoke tc integration.
A quick test by one of the colleagues showed that tc works for vrnetlab-based containers, which already warrants considering a `tools tc` command.
root@node1:/# tc qdisc add dev eth1 root netem delay 100ms
# ping from node2 to node1 via node1 eth1:
A:admin@node2# ping cafe:affe::101
PING cafe:affe::101 56 data bytes
64 bytes from cafe:affe::101 icmp_seq=1 hlim=64 time=206ms.
64 bytes from cafe:affe::101 icmp_seq=2 hlim=64 time=102ms.
64 bytes from cafe:affe::101 icmp_seq=3 hlim=64 time=102ms.
64 bytes from cafe:affe::101 icmp_seq=4 hlim=64 time=102ms.
64 bytes from cafe:affe::101 icmp_seq=5 hlim=64 time=101ms.
---- cafe:affe::101 PING Statistics ----
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min = 101ms, avg = 122ms, max = 206ms, stddev = 41.7ms
# delete the delay again from inside the Ubuntu container:
root@node1:/# tc qdisc delete dev eth1 root netem delay 100ms
# same ping as before:
A:admin@node2# ping cafe:affe::101
PING cafe:affe::101 56 data bytes
64 bytes from cafe:affe::101 icmp_seq=1 hlim=64 time=1.72ms.
64 bytes from cafe:affe::101 icmp_seq=2 hlim=64 time=1.74ms.
64 bytes from cafe:affe::101 icmp_seq=3 hlim=64 time=1.64ms.
ping aborted by user
---- cafe:affe::101 PING Statistics ----
3 packets transmitted, 3 packets received, 0.00% packet loss
round-trip min = 1.64ms, avg = 1.70ms, max = 1.74ms, stddev = 0.000ms
We need to check if srlinux/ceos containers can leverage the same tc impairments.
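One way to check this without relying on tc being available inside the NOS image is to apply the qdisc from the host, inside the container's network namespace. A minimal sketch, assuming a hypothetical node whose container name is clab-mylab-srl1:

```
# find the container's PID and enter its network namespace with nsenter,
# then apply the same 100ms netem delay on eth1 from the host
pid=$(docker inspect -f '{{.State.Pid}}' clab-mylab-srl1)
sudo nsenter -t "$pid" -n tc qdisc add dev eth1 root netem delay 100ms

# inspect and remove the impairment again
sudo nsenter -t "$pid" -n tc qdisc show dev eth1
sudo nsenter -t "$pid" -n tc qdisc del dev eth1 root
```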
This should also work from outside the container, which would be a more generalised approach. I have been using Pumba (https://github.com/alexei-led/pumba), which sets tc directly; I was testing between ceos and Alpine containers.
It may be limited to certain link types, but it would be container agnostic.
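As a rough sketch of how that is invoked (hypothetical container name; check `pumba netem --help` for the exact flags on your version), delay can be added to a container's interface for a fixed duration:

```
# add 100ms +/- 10ms delay on eth1 of the target container for 5 minutes;
# --tc-image lets pumba run tc from a sidecar image if the target lacks it
pumba netem --duration 5m --interface eth1 --tc-image gaiadocker/iproute2 \
    delay --time 100 --jitter 10 clab-mylab-ceos1
```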
This feature would be great for WAN scenarios.
Oh, Pumba looks nice! Thanks @sk2! We could definitely try and see if it has an exposed API for its netem command, in case we would want/need a deeper integration or a simplified installation option.
It’s nice, but it has an extra feature of setting a time duration after which it reverts the impairment. This also means it can’t apply multiple impairments in parallel.
I have an open request on their GitHub to see if it could just set the tc command and exit (much like tc itself). It does a nice job of figuring out the UUID mapping to determine the interface to apply to.
One thing to note is that it only appears to apply in one direction on that interface, so loss/delay will depend on whether the traffic is ingress or egress there; typically it would probably need to be wrapped to apply on each end of a link.
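So for a symmetric impairment each end of the link would likely get its own pumba invocation, e.g. something like the following sketch (hypothetical container names; each process holds its impairment for the given duration):

```
# egress delay on node1's side of the link
pumba netem --duration 10m --interface eth1 delay --time 50 clab-mylab-node1 &
# egress delay on node2's side of the link
pumba netem --duration 10m --interface eth1 delay --time 50 clab-mylab-node2 &
wait
```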
It solves a slightly different problem (in networking we wouldn’t be killing devices entirely so much as introducing cli-like config changes), but it does a superb job of encapsulating all the docker virtual networking logic.
It might even be possible to call the netem module from the cli and embed that.
Typical use cases would be both to set the latency/jitter/loss at initialisation (e.g. long-distance links) and to vary it at runtime to simulate changing characteristics and see how the control plane and applications respond.
So that might motivate both setting it in the YAML and providing an extension to the CLI tool.
I think it had a reasonably permissive software licence.
Native tc integration is done in #1453, and a pumba/netem implementation can be found here: https://gist.github.com/hellt/136137fcaf8b1a971c76876c03f2bdb1
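For completeness, with the native integration the impairment from the example at the top can presumably be set through containerlab itself, roughly along these lines (hypothetical node name; see the `tools netem` documentation for the exact flags):

```
# set delay/jitter/loss on node1's eth1 using containerlab's built-in netem tool
containerlab tools netem set -n clab-mylab-node1 -i eth1 --delay 100ms --jitter 2ms --loss 10

# list the impairments currently applied on the node's interfaces
containerlab tools netem show -n clab-mylab-node1
```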
Nice work, thanks!