
Memory leak with TriggerClientEvent

Open niCe86 opened this issue 11 months ago • 19 comments

What happened?

When sending tables through TriggerClientEvent, memory is leaking. It is probably more noticeable with larger tables.

I'm using TriggerClientEvent a lot, and my server's RAM usage gradually increases from 500 MB up to 10-15 GB by the end of the day (FXServer is restarted daily).

Expected result

When sending a table through TriggerClientEvent, the allocated memory should be released.

Reproduction steps

collectgarbage('stop')
print(1, collectgarbage('count'))

local data = {}
for i = 1, 100000 do data[i] = i end

print(2, collectgarbage('count'))

-- playerid is the server ID of a connected player
for i = 1, 100 do TriggerClientEvent("Test", playerid, data) end

print(3, collectgarbage('count'))

data = nil
collectgarbage('collect')

print(4, collectgarbage('count'))

Importance

Unknown

Area(s)

FXServer

Specific version(s)

Tested on FiveM artifacts 12180 and 12651. I have confirmation from @IS4Code that this does not occur on an old artifact from 2022.

Additional information

The garbage collector clears all the garbage (the Lua allocation count goes back down), but the process's RAM usage does not decrease.

Example of my results (collectgarbage('count') values, in KB):

[ script:wtls] 1 354979.76171875
[ script:wtls] 2 359075.85546875
[ script:wtls] 3 395079.3125
[ script:wtls] 4 350609.63769531

Example of actual RAM usage:

  • 560 MB RAM at the beginning
  • 629 MB RAM after execution of the code above
  • 580 MB RAM after a few seconds and another manual garbage collect (just to be sure)
  • 568 MB RAM after resource restart

You can repeat the test script as many times as you like and RAM usage will keep gradually increasing. I also suspect that this memory leak might somehow degrade CPU performance.

niCe86 avatar Jan 30 '25 22:01 niCe86
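For anyone wanting to reproduce this on demand, here is a minimal sketch (not part of the original report) that wraps the repro above in a server-side command; the command name, the "Test" event, and the payload size are placeholders, and the target defaults to the invoking player:

```lua
-- Editor's sketch of a repeatable repro: /leaktest [playerId] [runs]
-- The command name, event name, and payload size are placeholders.
RegisterCommand('leaktest', function(source, args)
    local target = tonumber(args[1]) or source   -- server ID of a connected player
    local runs = tonumber(args[2]) or 100

    print('Lua heap before:', collectgarbage('count'), 'KB')

    local data = {}
    for i = 1, 100000 do data[i] = i end

    for _ = 1, runs do
        TriggerClientEvent('Test', target, data)
    end

    data = nil
    collectgarbage('collect')
    print('Lua heap after collect:', collectgarbage('count'), 'KB')
    -- Per the report, the Lua heap returns to roughly its starting size here,
    -- while the FXServer process RSS (as seen by the OS) keeps growing.
end, true)
```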

Adding some data here (screenshots attached):

  • Server start, ~60 players: 800 MB
  • Server gets full (~1250 players): 22 GB
  • 6 hours later: 40 GB
  • Morning (~70 players): 11.7 GB

So there was a leak of about 10 GB over 9 hours.

d22tny avatar Feb 03 '25 15:02 d22tny

I'm facing the same problem. It reaches 19 GB; as soon as I open the server, it starts increasing by 2 to 4 MB per second as players join.

Twenty minutes after the first post, it was already at 4 GB.

I kicked all the players from the server and started stopping each resource one by one to see what was causing the memory consumption. I stopped all the resources, but it was still using 5 GB.

joaoconti avatar Feb 03 '25 18:02 joaoconti

Are you both using TriggerClientEvent frequently, or transferring large data with it?

niCe86 avatar Feb 03 '25 21:02 niCe86

@niCe86 I use it frequently

joaoconti avatar Feb 03 '25 23:02 joaoconti

That's normal behavior when you stop the GC like you did in the provided repro; you should call collectgarbage("restart") to restore it.

Yum1x avatar Feb 04 '25 05:02 Yum1x

GC "stop" has absolutely nothing to do with it. It's in the example script solely for the purpose of having control over GC cycles and to demonstrate changes in garbage count. Without GC "stop" the result is the same - the memory leak does still occur.

niCe86 avatar Feb 04 '25 11:02 niCe86

In my case I don't stop the garbage collector; I only use count to log it, and collect.

d22tny avatar Feb 04 '25 13:02 d22tny

That's how GC works.

topordenis avatar Feb 10 '25 09:02 topordenis

Again: the garbage collector has nothing to do with this.

niCe86 avatar Feb 10 '25 13:02 niCe86

Same Issue

dendenknows avatar Feb 13 '25 12:02 dendenknows

300 stable players and 2 hours of uptime (screenshots attached).

dendenknows avatar Feb 13 '25 12:02 dendenknows

> 300 stable players and 2 hours of uptime (screenshots)

This is more likely a memory leak caused by your resources. It is too big to be related to this issue.

d22tny avatar Feb 13 '25 16:02 d22tny

> 300 stable players and 2 hours of uptime (screenshots)
>
> This is more likely a memory leak caused by your resources. It is too big to be related to this issue.

So how can I check for memory leaks in my resources? Do you recommend any tools or anything?

dendenknows avatar Feb 14 '25 11:02 dendenknows
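Not an official tool, but one low-effort way to narrow this down is to log each resource's Lua heap over time and compare it with the process RSS. A minimal sketch (editor's illustration, interval and format are arbitrary) that could be dropped into a suspect resource's server script:

```lua
-- Periodic Lua-heap logger (editor's sketch). This only measures the Lua
-- state of the resource it runs in, so a leak inside FXServer itself (the
-- subject of this issue) will not show up here, but a leaking resource will.
CreateThread(function()
    local resource = GetCurrentResourceName()
    while true do
        Wait(60000) -- log once per minute
        print(('[%s] Lua heap: %.1f KB'):format(resource, collectgarbage('count')))
    end
end)
```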

Wow, only 10-15 GB? My server goes up to 25 GB of RAM, and it restarts every 3 hours, by the way. We had to upgrade from 32 GB to 64 GB of RAM to prevent crashing.

Sasino97 avatar Mar 05 '25 07:03 Sasino97

> This is more likely a memory leak caused by your resources. It is too big to be related to this issue.

If it's not related to this issue, then it's related to another FiveM issue. It's highly unlikely to be a resource, especially because he included the resmon output showing his resources using no more than 72 MiB of memory.

Sasino97 avatar Mar 05 '25 07:03 Sasino97

Having a similar issue. I have run profilers and checked network events, and everything is fine. My highest resource in the server resmon is es_extended, with around the same amount of memory (~72 MB).

itsAdminPlus avatar Mar 06 '25 12:03 itsAdminPlus

Based on what I have been experiencing, this started somewhere around the 10xxx server artifacts. My server had been on 9875 well past the 3-month EOL warning, and we kept it there because it was stable. We have since updated from QB-Core to QBX Core (which requires 103xx or newer), and we are seeing the same problems. Mind you, we tried the 10xxx artifacts on QB-Core and experienced the same issues, so I highly doubt it has anything to do with the core.

As player counts drop, memory does not get freed, and we eventually have voice issues and high packet loss until everything "soft crashes". Sadly, the FXServer process does not actually crash, so we cannot get a dump from it, but stability degrades until there are so many back-to-back network/sync thread hitches that everyone disconnects. Server thread hitches are minor, but we can watch script performance degrade in real time as the console starts throwing errors and SQL query times slow down.

I added memory logging to a good chunk of resources and will report back if I can find a specific one that is not properly freeing its memory, but I suspect it's an artifact issue.

For context, we hit 160 players a night, but we observe this behavior even in the early hours when there are 20-40 people. Even as the night winds down and people start logging off (going from 160 players to 68 players), memory usage still goes up.

We never max out the RAM on the box itself, but stability still suffers. I am tempted to increase the Windows pagefile size dramatically, but I am not even sure the memory ever swaps to disk. We also have a 10G up/down link and it never gets near 1 Gbps in either direction, so I am confident the problem is not the server hardware or networking.

sableeyed avatar Apr 07 '25 18:04 sableeyed

> Based on what I have been experiencing, this started somewhere around the 10xxx server artifacts. [...]

You can check whether there's any misuse in your codebase: you might have mistakenly passed QB's Player object (which may carry function references serialized as __cfx_functionReference) through TriggerClientEvent, which could potentially cause this issue. I encountered a similar situation in my previous environment, and resolving that part fixed the problem.

st860923 avatar Apr 08 '25 12:04 st860923
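To illustrate the misuse described above, here is a hypothetical sketch; the xPlayer object and its fields are assumptions based on typical QB/ESX player objects, not taken from the thread. It contrasts sending a full player object with sending plain data, plus a small helper to detect function values hiding in a payload:

```lua
-- Editor's illustration; 'xPlayer' and its fields are hypothetical.

-- Risky: the full player object carries methods/closures, which can only be
-- serialized as function references (__cfx_functionReference).
-- TriggerClientEvent('myevent', src, xPlayer)

-- Safer: send only the plain values the client actually needs.
-- TriggerClientEvent('myevent', src, { citizenid = xPlayer.PlayerData.citizenid })

-- Helper: recursively check a payload for function values before sending.
local function containsFunction(value, seen)
    if type(value) == 'function' then return true end
    if type(value) ~= 'table' then return false end
    seen = seen or {}
    if seen[value] then return false end
    seen[value] = true
    for k, v in pairs(value) do
        if containsFunction(k, seen) or containsFunction(v, seen) then
            return true
        end
    end
    return false
end
```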

Stumbled upon this while experiencing the same thing. I started logging memory in FiveM's Node runtime, and heapUsed stays sensible while rss keeps climbing over time.

sangyookm avatar Oct 06 '25 11:10 sangyookm