Memory leak with TriggerClientEvent
What happened?
When sending tables through TriggerClientEvent, memory is leaking. It is probably more noticeable with larger tables.
I'm using TriggerClientEvent a lot, and my server's RAM usage gradually increases from about 500 MB up to 10-15 GB by the end of the day (FXServer is restarted daily).
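To illustrate the usage pattern, here is a minimal sketch of the kind of code involved; the event name, payload shape and interval are made up for illustration, but any large table sent repeatedly behaves the same:

-- Hypothetical example: periodically pushing a large state table to every player.
local bigState = {}
for i = 1, 100000 do
    bigState[i] = { id = i, label = ('entry_%d'):format(i) }
end

CreateThread(function()
    while true do
        -- -1 broadcasts the event to all connected clients
        TriggerClientEvent('myResource:syncState', -1, bigState)
        Wait(5000)
    end
end)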
Expected result
When sending a table through TriggerClientEvent, the allocated memory should be released afterwards.
Reproduction steps
collectgarbage('stop')
print(1, collectgarbage('count'))

local data = {}
for i = 1, 100000 do data[i] = i end
print(2, collectgarbage('count'))

for i = 1, 100 do TriggerClientEvent("Test", playerid, data) end
print(3, collectgarbage('count'))

data = nil
collectgarbage('collect')
print(4, collectgarbage('count'))
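To repeat the repro in-game more easily, it can be wrapped in a server command; a sketch (the command name 'leaktest' is arbitrary, and the GC is left running here):

-- Hypothetical wrapper to re-run the repro on demand.
RegisterCommand('leaktest', function(source)
    -- run this as a connected player so 'source' is a valid client id
    local playerid = source
    print(1, collectgarbage('count'))

    local data = {}
    for i = 1, 100000 do data[i] = i end
    print(2, collectgarbage('count'))

    for i = 1, 100 do TriggerClientEvent('Test', playerid, data) end
    print(3, collectgarbage('count'))

    data = nil
    collectgarbage('collect')
    print(4, collectgarbage('count'))
end, false)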
Importancy
Unknown
Area(s)
FXServer
Specific version(s)
Tested on FiveM artifacts 12180 and 12651. I have confirmation from @IS4Code that this does not occur on some ancient artifact from 2022.
Additional information
Garbage collection clears all the Lua garbage, but the process RAM usage does not decrease.
Example of my results:
[ script:wtls] 1 354979.76171875
[ script:wtls] 2 359075.85546875
[ script:wtls] 3 395079.3125
[ script:wtls] 4 350609.63769531
Example of actual RAM usage:
- 560 MB RAM at the beginning
- 629 MB RAM after execution of the code above
- 580 MB RAM after a few seconds and another manual garbage collect (just to be sure)
- 568 MB RAM after resource restart
You can easily repeat the test script and the RAM will gradually increase. I suspect this memory leak might somehow degrade CPU performance as well.
Adding some data here:
- server start: 800 MB at ~60 players
- server gets full (~1250 players) -> 22 GB
- 6 hours pass -> 40 GB
- morning, ~70 players -> 11.7 GB

So there was a leak of about 10 GB over 9 hours.
I'm facing the same problem. It reaches 19GB; as soon as I open the server, it starts increasing by 2 to 4 MB per second as players join.
20 minutes after the first post, it's already at 4GB.
I kicked all the players from the server and started stopping each resource one by one to see what was causing the memory consumption. I stopped all the resources, but the server was still using 5 GB.
Are you both using TriggerClientEvent frequently or transferring large data with it?
@niCe86 I use it frequently
That's normal behavior when you stop the GC like you did in the provided repro; you should call collectgarbage("restart") to restore it.
GC "stop" has absolutely nothing to do with it. It's in the example script solely for the purpose of having control over GC cycles and to demonstrate changes in garbage count. Without GC "stop" the result is the same - the memory leak does still occur.
In my case I don't stop the garbage collector; I only use count to log it, and collect.
That's how the GC works.
AGAIN: the garbage collector has nothing to do with this.
Same Issue
300 stable players and 2 hours uptime
> 300 stable players and 2 hours uptime
This is more likely a memory leak caused by your resources. It is too big to be related to this issue.
> 300 stable players and 2 hours uptime
> This is more likely a memory leak caused by your resources. It is too big to be related to this issue.
So how can I check for memory leaks in my resources? Do you recommend any tools or anything?
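Not sure about tools, but one low-effort check is to drop a periodic logger into each suspect resource's server script and watch whether its Lua heap keeps growing; a sketch (this only sees that resource's Lua allocations, not native memory):

-- Minimal per-resource memory logger (server side); the interval is arbitrary.
CreateThread(function()
    while true do
        print(('[%s] Lua memory: %.2f MB'):format(
            GetCurrentResourceName(),
            collectgarbage('count') / 1024
        ))
        Wait(60000) -- log once a minute
    end
end)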
Wow, only 10-15 GB? My server goes up to 25 GB of RAM and it restarts every 3 hours, btw. We had to upgrade from 32 GB to 64 GB of RAM to prevent crashing.
> This is more likely a memory leak caused by your resources. It is too big to be related to this issue.
If it's not related to this issue, then it's related to another FiveM issue. It's highly unlikely to be a resource, especially because he included the resmon with his resources showing no more than 72 MiB of memory usage.
Having a similar issue; I have run profilers and checked network events, and everything is fine. My highest resource in server resmon is es_extended, with around the same amount of memory, roughly 72 MB.
Based on what I have been experiencing, this started somewhere around the 10xxx server artifacts. My server had been on 9875 WELL past the 3-month EOL warning, and we kept it there because it was stable. We have since updated from QB-Core to QBX Core (which requires 103xx or newer) and we are seeing the same problems. Mind you, we tried the 10xxx artifacts on QB-Core and experienced the same issues, so I highly doubt it has anything to do with the core.
As player counts lower, memory does not get freed up, and then we eventually have voice issues and high packet loss until everything "soft crashes". Sadly the FXServer process does not crash, so we cannot get a dump from it, but stability devolves until there are so many back-to-back network/sync thread hitches that everyone disconnects. Server thread hitches are minor, but we can watch script performance devolve in real time as the console starts throwing errors and SQL query times slow down.
I added memory logging to a good chunk of resources and will report back if I can find a specific one that is not properly freeing its memory, but I suspect it's an artifact issue.
For context, we hit 160 players a night, but we even observe this behavior in the early hours when there are 20-40 people. Even as the night closes and people start logging off (going from 160 players to 68 players), the memory usage still goes up.
We never max out the RAM on the box itself, but stability still suffers. I am tempted to increase the Windows pagefile size dramatically, but I am not even sure the memory ever swaps to disk. We also have a 10G up/down link and it never gets near 1 Gbps in either direction, so I am confident the problem is not the server hardware or networking.
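The logging I'm adding is roughly this shape (a sketch; the interval is a guess, and it only catches Lua-heap growth inside a resource, not native allocations):

-- Track the Lua heap per resource and print the delta between samples
-- so sustained growth stands out in the console log.
local lastKb = collectgarbage('count')

CreateThread(function()
    while true do
        Wait(5 * 60 * 1000) -- sample every 5 minutes
        local nowKb = collectgarbage('count')
        print(('[%s] %s | Lua heap: %.1f MB (%+.1f MB since last sample)'):format(
            GetCurrentResourceName(), os.date('%H:%M:%S'),
            nowKb / 1024, (nowKb - lastKb) / 1024
        ))
        lastKb = nowKb
    end
end)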
You can check whether there is any misuse in your codebase: you might have mistakenly passed QB's Player object (which can carry function references exposed as __cfx_functionReference) through TriggerClientEvent, which could potentially cause this issue.
I encountered a similar situation in my previous environment, and resolving that part fixed the problem.
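A sketch of what that misuse versus the fix can look like, assuming a qb-core style setup (the event names here are made up):

local QBCore = exports['qb-core']:GetCoreObject() -- assumes qb-core is running

RegisterNetEvent('myResource:requestInfo', function()
    local src = source
    local Player = QBCore.Functions.GetPlayer(src)
    if not Player then return end

    -- Problematic: the full Player object carries functions, which end up
    -- serialized as __cfx_functionReference entries when sent over an event.
    -- TriggerClientEvent('myResource:playerInfo', src, Player)

    -- Safer: send only the plain data the client actually needs.
    TriggerClientEvent('myResource:playerInfo', src, {
        citizenid = Player.PlayerData.citizenid,
        job       = Player.PlayerData.job,
        money     = Player.PlayerData.money,
    })
end)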
Stumbled upon this while experiencing the same thing. I started logging memory in FiveM's Node runtime, and heapUsed stays sensible while rss keeps climbing over time.