Paul Holzinger
I think using the default route MTU certainly makes sense in almost all cases. In the meantime, the opposite use case was also reported (wanting a higher default MTU...
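For illustration only (this is not podman or gvproxy code; the helper name `defaultRouteMTU` and the `8.8.8.8:53` probe address are my own assumptions), a minimal stdlib-only Go sketch of reading the MTU of the interface behind the default route could look like this:

```go
// Sketch only: find the interface that carries the default route and report its MTU.
package main

import (
	"fmt"
	"net"
)

// defaultRouteMTU "connects" a UDP socket to a public address (no packet is
// actually sent for a UDP connect), reads back the source IP the kernel picked,
// and then looks up the MTU of the interface owning that IP.
func defaultRouteMTU() (int, error) {
	conn, err := net.Dial("udp", "8.8.8.8:53")
	if err != nil {
		return 0, err
	}
	defer conn.Close()
	src := conn.LocalAddr().(*net.UDPAddr).IP

	ifaces, err := net.Interfaces()
	if err != nil {
		return 0, err
	}
	for _, iface := range ifaces {
		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.Equal(src) {
				return iface.MTU, nil
			}
		}
	}
	return 0, fmt.Errorf("no interface found for source IP %s", src)
}

func main() {
	mtu, err := defaultRouteMTU()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("default route MTU:", mtu)
}
```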
@mangelajo use the RHEL project and select podman as the component
For the networking case, one thing that worked for me is to create /etc/systemd/system/user@.service.d/override.conf with
```
[Unit]
After=network-online.target
Wants=network-online.target
```
This will add the dependency on the user session service...
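(Note that systemd only picks up new or changed drop-in files after a `systemctl daemon-reload`, and the override only takes effect the next time the unit is started.)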
Now I am not an expert in how this works, but shouldn't gvproxy just retry on ENOBUFS? Also, I would have assumed the sendto call would block instead of returning...
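To make the idea concrete, here is a minimal Go sketch of the "retry on ENOBUFS" approach; this is not gvproxy's implementation, and the helper name `sendWithRetry`, the retry count, and the backoff values are all assumptions:

```go
// Sketch only, not gvproxy code: retry a datagram send a bounded number
// of times when the kernel reports ENOBUFS, instead of dropping the packet.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

// sendWithRetry retries send() when it fails with ENOBUFS, backing off
// briefly so the kernel can drain its socket buffers. Any other error
// is returned immediately.
func sendWithRetry(send func() error, maxRetries int) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		err = send()
		if err == nil {
			return nil
		}
		if !errors.Is(err, syscall.ENOBUFS) {
			return err
		}
		// Only sleep after a send already failed with ENOBUFS.
		time.Sleep(time.Duration(attempt+1) * 100 * time.Microsecond)
	}
	return err
}

func main() {
	conn, err := net.Dial("udp", "127.0.0.1:9999")
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	payload := make([]byte, 1400)
	err = sendWithRetry(func() error {
		_, werr := conn.Write(payload)
		return werr
	}, 5)
	fmt.Println("send result:", err)
}
```

The point of this shape is that the fast path stays untouched; the extra sleep only happens after a send has already failed with ENOBUFS.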
I assume you need a high-speed connection to trigger it; maybe try using iperf3 between the host and the VM.
On the macOS host, run `iperf3 -s` to start the server. Then, in another terminal, run iperf3 as a client in a container, using `--network host` to not get any slow...
@balajiv113 Using my reproducer from above, it still fails in the same way; this is the gvproxy log:
```
time="2024-07-02T14:06:37+02:00" level=info msg="gvproxy version v0.7.3-58-g9a4a0c4"
time="2024-07-02T14:06:37+02:00" level=info msg="waiting for clients..."
time="2024-07-02T14:06:37+02:00"...
```
@balajiv113 Your patch seems to work, but it affects performance a lot: I am down to ~600 Mbits from ~1.9 Gbits before. Also, looking at your code, this will always...
@balajiv113 I am thinking of this: https://github.com/Luap99/gvisor-tap-vsock/commit/5806d216c22671b0ae8c21f5653e01d72fdbeb76 It seems to work for me with transfers around 2 Gbits.
I can open a PR with that if we agree that this is the right fix; I have to check whether this still compiles for Windows before that.