backlogs at higher throughput
This could be me, but it seems like ENet gets easily overwhelmed if service is called with too many incoming packets to process. At around 2k packets per second, with service called in a tight loop, it can't keep up and just degrades as the backlog gets larger.
Or at least that's what appears to be happening. Any real-world use cases to refute this so I know it's just me?
How large are the packets that you captured with a packet sniffer?
So EnetHost.PacketsReceived shows the right number; it's the received events that aren't keeping up. Just to confirm, I'm calling service in a tight loop with a timeout of 0.
The packets are small and sent with flags None.
The thing is, ENet will try to combine packets if you are not flushing the host manually, and this may cause performance issues. The original implementation has some weak spots there.
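For what it's worth, a minimal sketch of what flushing manually could look like (the sendNow helper and the use of channel 0 are my own assumptions, not anything from this thread): enet_host_flush pushes any queued outgoing commands onto the wire immediately instead of leaving them to be batched up until the next service call.

```cpp
#include <enet/enet.h>

// Hypothetical helper: queue a packet and flush the host right away rather
// than waiting for the next enet_host_service call to send it.
void sendNow(ENetHost* host, ENetPeer* peer, ENetPacket* packet)
{
    enet_peer_send(peer, 0, packet); // channel 0 assumed for this sketch
    enet_host_flush(host);           // sends queued outgoing commands without dispatching events
}
```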
Call enet_host_check_events repeatedly after each call to enet_host_service.
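A minimal sketch of that pattern, assuming a pumpHost helper called once per frame (the name and the event handling are placeholders, not from this thread): one enet_host_service call does the socket work, then enet_host_check_events keeps dispatching events that are already queued until none remain.

```cpp
#include <enet/enet.h>

// Hypothetical per-frame pump: service once, then drain every queued event.
void pumpHost(ENetHost* host)
{
    ENetEvent event;
    int status = enet_host_service(host, &event, 0); // timeout of 0: don't block

    while (status > 0)
    {
        if (event.type == ENET_EVENT_TYPE_RECEIVE)
        {
            // handle event.packet here (e.g. echo/broadcast it), then free it
            enet_packet_destroy(event.packet);
        }
        // connect/disconnect events would be handled here as well

        status = enet_host_check_events(host, &event); // drains the queue, no socket work
    }
}
```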
I did a test and found some strange results. I set up a client and a server in tight loops; the client floods packets, and I do my own round-trip timing on the client when it receives the echo back from the server:
// server loop
while (true) {
    enet_host_service (server, &event, 0);
    // server broadcasts received packets
}

// client loop
while (true) {
    enet_host_service (client, &event, 0);
    // client sends a packet
    // log round-trip time from the received packet
}
I'm running both locally, and the round trip is around 1 ms (I'm not logging to cout every frame, but storing the results in memory). However, if I change the client to send two packets instead, the round-trip time keeps increasing linearly.
packet 1 time 1ms .. packet 12000 time 681 ms
I changed the server to call enet_host_check_events in the same frame to process multiple packets:
for (int i = 0; i < 5; ++i)
{
    int eventStatus = 0;
    if (0 == i) eventStatus = enet_host_service (server_, &event, 0);
    else        eventStatus = enet_host_check_events (server_, &event);

    // broadcast received packet
    if (0 == eventStatus) break;
}
I can see enet_host_check_events is being called, so it is processing multiple packets per frame, but the round-trip time is still increasing linearly.
Can you provide your tests with code to reproduce this?
Edit: Simpler sample to show the problem https://filedn.com/l3TGy7Y83c247u0RDYa9fkp/temp/code/TestEnetLoopPacketLoss.zip
Repro
- Start server
- Start client
- Hold 1 to flood messages for a couple of seconds
- Press L to show log on client
I can't find anything related to ENet in this project. Searching for enet_host_service() across the files doesn't return any results.
Sorry, wrong project. Updated the link.
You need to call enet_host_check_events until all events are exhausted. If you merely stop at 5 iterations, you are going to eventually hit a backlog under high load.
@imtrobin A while ago I created an application for stress-testing ENet with visualization. There you can see how it should be done properly and test ENet under really high load with batched packets or huge messages with a redundant payload.
OK, I modified the server loop to match yours, and it works fine now. However, when I try to mix unreliable and reliable packets together, I get a lot of packet drops:
client // this is fine
    send reliable
    send unreliable

client // this drops packets massively
    send unreliable
    send reliable
I guess the moral is not to mix reliable and unreliable packets on the same channel.
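Not something stated in the thread, but as an illustration of that conclusion, here is a minimal sketch with an assumed two-channel layout (channel 0 reliable, channel 1 unreliable; the helper names are made up): keeping the two kinds of traffic on separate channels keeps the reliable stream's sequencing and resends from interacting with the unreliable traffic.

```cpp
#include <enet/enet.h>
#include <cstring>

// Assumed channel layout for this sketch (not from the thread): channel 0
// carries reliable messages, channel 1 carries unreliable ones. The host and
// the connection must have been created with at least two channels.
enum { kReliableChannel = 0, kUnreliableChannel = 1 };

void sendReliable(ENetPeer* peer, const char* msg)
{
    ENetPacket* packet = enet_packet_create(msg, std::strlen(msg) + 1,
                                            ENET_PACKET_FLAG_RELIABLE);
    enet_peer_send(peer, kReliableChannel, packet);
}

void sendUnreliable(ENetPeer* peer, const char* msg)
{
    // flags = 0: unreliable, sequenced delivery
    ENetPacket* packet = enet_packet_create(msg, std::strlen(msg) + 1, 0);
    enet_peer_send(peer, kUnreliableChannel, packet);
}
```

Whether that is actually what caused the drops here would still need testing, but it at least separates the two delivery modes cleanly.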
Sorry, I spoke too soon; I forgot I had enabled the 120 fps cap. When I remove that to flood ENet, the result is the same: the delay keeps increasing.
My bad, I had modified the server side only, not the client side. When I changed both, it works well.
So I noticed there is a spike to 4 ms every couple of seconds. I updated the above project by changing the memory log from an ostringstream to a preallocated string. I've taken care not to do any memory allocation; now the round trip shows 0.2-0.5 ms, but there is still an occasional spike to 2 ms every couple of seconds.
Received:4173 hello from client 4173 timeMs:0.515400
Received:4174 hello from client 4174 timeMs:0.267300
Received:4175 hello from client 4175 timeMs:2.113100
Received:4176 hello from client 4176 timeMs:2.111500
Received:4177 hello from client 4177 timeMs:1.881300
Received:4178 hello from client 4178 timeMs:2.138200
Received:4179 hello from client 4179 timeMs:2.134900
Received:4180 hello from client 4180 timeMs:0.300400
Received:4181 hello from client 4181 timeMs:0.529300
Edit: simpler sample which has the same problem https://filedn.com/l3TGy7Y83c247u0RDYa9fkp/temp/code/TestEnetLoopPacketLoss.zip
Also keep in mind that the OS MTU typically won't be more than 1500 bytes, which is the largest standard Ethernet allows. If you send anything bigger than this, the OS will split it into multiple smaller packets; being just 1 byte over can double the time for a 1501-byte packet. The same applies to your routers, proxies, and nodes within a VPN. A higher MTU means a bigger memory footprint for queues, and that's where any point along the route can start dropping packets when it gets busy, so a higher MTU doesn't mean higher performance on busy networks.

And though IPv6 allows jumbo packets, this comes with a serious setback: every point within the route must support jumbo packets as well. IPv4 allows a do-not-fragment flag (not all nodes support it), but that didn't carry over to IPv6. Assuming IPv6 will always allow bigger packets is like assuming every Mustang is manufactured with a V8: it might have been designed from the start for greater performance, but there are still economy options available.
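Not from the thread, but to make the fragmentation point concrete, here is a rough sketch under my own assumptions (the 1200-byte budget and the sendChunked helper are made up): it keeps each unreliable send under a conservative per-datagram budget so it fits in a single UDP datagram on a 1500-byte MTU path. Note that ENet itself fragments oversized reliable packets, so manual chunking like this mainly matters for unreliable traffic, and the receiver has to tolerate individual chunks being lost.

```cpp
#include <enet/enet.h>

// Assumed conservative payload budget: comfortably below a 1500-byte Ethernet
// MTU, leaving headroom for the IP, UDP, and ENet protocol headers.
static const size_t kPayloadBudget = 1200;

// Hypothetical helper: split a buffer into budget-sized unreliable packets so
// no single send forces IP-level fragmentation.
void sendChunked(ENetPeer* peer, enet_uint8 channel,
                 const unsigned char* data, size_t length)
{
    for (size_t offset = 0; offset < length; offset += kPayloadBudget)
    {
        size_t chunkLen = length - offset;
        if (chunkLen > kPayloadBudget)
            chunkLen = kPayloadBudget;

        ENetPacket* packet = enet_packet_create(data + offset, chunkLen, 0); // flags = 0: unreliable
        enet_peer_send(peer, channel, packet);
    }
}
```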