Matthew Cather
I have run into a similar issue before. Along the same lines as what Bruce is saying about the same subnet, the Linux networking stack will behave in unexpected ways...
You can also get a similar crash on the client side [here](https://github.com/esnet/iperf/blob/master/src/iperf_client_api.c#L803) if you queue up a bunch of client-side tests (e.g. `for i in $(seq 100); do iperf3...
It does not. You can see your changes working correctly in test #1, but it still segfaults in test #2. (I added an assert to show where...
> Thanks for testing. The second test failed because the termination happened before all threads were created. I enhanced PR #1654 to also handle this case. **Can you check if...
Having read through the patch (haven't run anything yet) and the associated Linux documentation:

- `MSG_TRUNC` seems to affect TCP and UDP differently (sketched below). You have things set up in a way...
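For context, here is a minimal sketch of that documented difference, per recv(2) and tcp(7). This is not code from the patch; the helper names are made up.

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* TCP (Linux-specific, see tcp(7)): MSG_TRUNC tells the kernel to drop
 * up to len bytes from the receive queue instead of copying them into
 * buf; the return value is the number of bytes consumed.  This is the
 * copy-avoidance that helps a receive-side throughput test. */
static ssize_t tcp_count_without_copy(int fd, void *buf, size_t len)
{
    return recv(fd, buf, len, MSG_TRUNC);   /* buf is left untouched */
}

/* UDP (see recv(2)): MSG_TRUNC only changes the return value to the
 * real datagram length, even when the datagram is longer than buf.
 * The kernel still copies up to len bytes of payload into buf, so a
 * receiver that needs the payload (e.g. embedded counters) still pays
 * for the copy. */
static ssize_t udp_real_datagram_length(int fd, void *buf, size_t len)
{
    return recv(fd, buf, len, MSG_TRUNC);
}
```

That difference is why the copy can be skipped outright on a TCP stream, while a UDP receiver that still reads the datagram payload cannot.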
> Why "things set-up in a way that I would expect only TCP throughput to be improved"? Although practically that may be the case, what is wrong in the implementation...
> Forgot to take --file into account. Without this option, the sent buffer is fixed, so there is no need to handle the kernel notifications. To simplify the initial solution,...
I agree that `MSG_ZEROCOPY` won't work with UDP without notifications. The math for `Nread`/`Nrecv` took me a second to reason through (because of the double negative). I.e., with `UDP_MAX` reads...
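For reference, here is a minimal sketch of the completion-notification flow the kernel documents for `MSG_ZEROCOPY` (Documentation/networking/msg_zerocopy.rst). This is not code from the PR; `send_zerocopy_once` is a made-up helper with most error handling trimmed.

```c
#include <errno.h>
#include <linux/errqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send one buffer with MSG_ZEROCOPY, then reap the completion
 * notification from the socket error queue. */
static ssize_t send_zerocopy_once(int fd, const void *buf, size_t len)
{
    int one = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
        return -1;

    ssize_t sent = send(fd, buf, len, MSG_ZEROCOPY);
    if (sent < 0)
        return -1;

    /* Completions arrive as control messages on the error queue. */
    char control[128];
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    while (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;
        /* a real caller would wait with poll(POLLERR) instead of spinning */
    }

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    if (cm != NULL) {
        struct sock_extended_err *serr =
            (struct sock_extended_err *)CMSG_DATA(cm);
        if (serr->ee_origin == SO_EE_ORIGIN_ZEROCOPY)
            printf("sends %u..%u completed (fell back to a copy: %s)\n",
                   serr->ee_info, serr->ee_data,
                   (serr->ee_code & SO_EE_CODE_ZEROCOPY_COPIED) ? "yes" : "no");
    }
    return sent;
}
```

The buffer handed to `send()` has to stay stable until the notification arrives, which is why skipping the error-queue handling only seems safe when the sent buffer never changes.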
@RizziMau does it happen if you don't use `--forceflush`? That would indicate there is some deadlock condition with the `print_mutex`.

https://github.com/esnet/iperf/blob/master/src/iperf_api.c#L5119

On a side note, with your workaround, you...
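To make the suspected condition concrete, here is a hypothetical sketch (not the actual iperf3 reporting code; `report_line` is invented) of how a blocking flush under a shared print mutex stalls every other reporting thread:

```c
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t print_mutex = PTHREAD_MUTEX_INITIALIZER;

static void report_line(const char *line)
{
    pthread_mutex_lock(&print_mutex);
    fputs(line, stdout);
    /* The --forceflush-style flush: if stdout is a pipe whose reader has
     * stalled, fflush() blocks here while the mutex is still held, and
     * every other thread calling report_line() queues up behind it. */
    fflush(stdout);
    pthread_mutex_unlock(&print_mutex);
}
```

If the hang disappears without `--forceflush`, that would point at the flush blocking while the lock is held rather than at the reporting logic itself.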
> > @RizziMau does it happen if you don't use `--forceflush`? That would indicate there is some deadlock condition with the `print_mutex`. https://github.com/esnet/iperf/blob/master/src/iperf_api.c#L5119
>
> @MattCatz Currently I'm using the...