
Widespread Client Crash on Game Build 3095 (0x6D Error - Blue-Hot-Jupiter)

Open Neaw92 opened this issue 8 months ago • 41 comments

What happened?

We are experiencing widespread crashes on our FiveM server (build 3095). The majority of crashes (76.4% of total crashes) are occurring with the error GTA5_b3258.exe!sub_1407EFA60 (0x6d). The crash is consistently affecting multiple players, causing disruptions during gameplay. Despite using the latest server artifact (14164), the crashes continue to occur.

Expected result

The server should operate smoothly without frequent crashes, and players should not experience random disconnects or crashes associated with the GTA5_b3258.exe!sub_1407EFA60 (0x6d) error.

Reproduction steps

Players join the server

Players engage in normal activities

A significant portion of players experience crashes with the error GTA5_b3258.exe!sub_1407EFA60 (0x6d), which accounts for 76.4% of all crashes.

This issue persists even after multiple server updates.

Importance

Crash

Area(s)

FiveM

Specific version(s)

FiveM Game Build: 3095 - Artifact Version 14164 - Windows Server 2022

Additional information

The crashes are affecting a significant percentage of players (76.4% of crashes are caused by GTA5_b3258.exe!sub_1407EFA60 (0x6d)).

No modders or cheaters have been identified.

Logs, including .dmp files and citizenfx_log.log, consistently show the crash occurring at the same trace location: sub_1407EFA60 (0x6d) within GTA5_b3258.exe.
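As a triage aid, a share figure like the 76.4% above can be tallied from crash dialog text with a short script. This is a rough sketch, not official Cfx tooling; the line format is assumed from the dialog excerpts in this thread, and the sample strings are illustrative:

```python
import re
from collections import Counter

# Match crash signatures such as "GTA5_b3258.exe!sub_1407EFA60 (0x6d)".
# Format assumed from the crash dialog excerpts in this thread.
SIG_RE = re.compile(r"GTA5_b\d+\.exe!sub_[0-9A-F]+ \(0x[0-9a-f]+\)")

def signature_shares(dialog_texts):
    """Return {signature: fraction of all matched crashes}."""
    counts = Counter()
    for text in dialog_texts:
        m = SIG_RE.search(text)
        if m:
            counts[m.group(0)] += 1
    total = sum(counts.values())
    if not counts:
        return {}
    return {sig: n / total for sig, n in counts.items()}

# Illustrative samples, not real dump contents:
samples = [
    "An error at GTA5_b3258.exe!sub_1407EFA60 (0x6d) caused FiveM to stop working.",
    "An error at GTA5_b3258.exe!sub_1407EFA60 (0x6d) caused FiveM to stop working.",
    "An error at GTA5_b3258.exe!sub_14165E3C0 (0x68) caused FiveM to stop working.",
]
shares = signature_shares(samples)
```

Running this over the text of each collected dump's crash dialog gives the per-signature breakdown.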

The issue continues to occur even after server updates and has been impacting gameplay for all users.

Image

CfxCrashDump_2025_04_15_00_08_00 (1).zip

CfxCrashDump_2025_04_14_00_20_34.zip

CfxCrashDump_2025_04_11_01_17_30.zip

CfxCrashDump_2025_04_10_03_44_33.zip

CfxCrashDump_2025_04_04_02_48_35.zip

CfxCrashDump_2025_04_09_00_20_54.zip

CfxCrashDump_2025_03_17_21_42_30.zip

CfxCrashDump_2025_03_16_00_08_46.zip

CfxCrashDump_2025_03_15_22_33_37.zip

CfxCrashDump_2025_03_15_01_39_56.zip

Neaw92 avatar Apr 10 '25 02:04 Neaw92

Hello Same thing here, even if I rollback to fxServer 12882 and DLC to 3258 the problem persists.

CfxCrashDump_2025_05_02_12_30_32.zip

CfxCrashDump_2025_05_01_21_36_09.zip

CfxCrashDump_2025_05_01_19_40_38.zip

deathart avatar May 02 '25 12:05 deathart

We've also been having this issue for weeks. Playing on the latest game build did fix it for a while, but for the past week it's been much worse than before. We use build 3095, so we're not sure why the error says b3258.

Image

ThunderNLRP avatar May 10 '25 14:05 ThunderNLRP

Sorry to interrupt from the side, but with this change, is there a chance that a different build than the one specified on the server might be used or displayed? @Nobelium-cfx https://github.com/citizenfx/fivem/commit/643a12c8a88c347b2621125aa9455304a50ef2cc or https://github.com/citizenfx/fivem/commit/c1cff3d4ebd21820d2840ec672e26d3d1bc6d8bb

mori151 avatar May 11 '25 04:05 mori151

Sorry to interrupt from the side, but with this change, is there a chance that a different build than the one specified on the server might be used or displayed? @Nobelium-cfx 643a12c or c1cff3d

Yes, that's the case. Since server build 12872, sv_replaceExeToSwitchBuilds is set to false by default. You can read more about it here: https://docs.fivem.net/docs/server-manual/server-commands/#sv_replaceexetoswitchbuilds-newvalue
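For anyone wanting to try it: per the linked server-commands docs, the convar is set in server.cfg before startup. Whether it changes anything about this particular crash is unconfirmed; this only shows the syntax.

```
# server.cfg: re-enable the pre-12872 behavior of replacing the game
# executable when switching builds (defaults to false since build 12872)
sv_replaceExeToSwitchBuilds true
```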

Nobelium-cfx avatar May 11 '25 09:05 Nobelium-cfx

So I need to set sv_replaceExeToSwitchBuilds to true to fix my issue?

ThunderNLRP avatar May 12 '25 15:05 ThunderNLRP

I doubt it would fix the issue, but you can try. The issue likely stems from bad map mods: the crash is somewhere in the streaming code, when the game attempts to delete an object, and the logs are filled with errors like this

[   3300172] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_rock_1_f.ydr.
[   3300172] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3300172] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 0 edge reference is invalid. It leads to vertex 65522, when there are only 315 vertices.
[   3310047] [b3095_GTAProce] ResourcePlacementThr/ ^3Warning: Texture script_rt_dials_race (in txd rrcoquette82.ytd) was set to a compressed texture format, but 'script_rt' textures should always be uncompressed.
[   3310047] [b3095_GTAProce] ResourcePlacementThr/ This file was likely processed by a bad tool. To improve load performance and reduce the risk of it crashing, fix or update the tool used.^7
[   3313281] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_billboard_03.ydr.
[   3313281] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3313281] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 2 edge reference is invalid. It leads to vertex 65485, when there are only 201 vertices.
[   3316328] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_radiomast02.ydr.
[   3316328] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3316328] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 0 edge reference is invalid. It leads to vertex 65228, when there are only 220 vertices.
[   3318969] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_rock_1_f.ydr.
[   3318969] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3318969] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 0 edge reference is invalid. It leads to vertex 65522, when there are only 315 vertices.
[   3319703] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_runlight_r.ydr.
[   3319703] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3319703] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 2 edge reference is invalid. It leads to vertex 344, when there are only 266 vertices.
[   3323937] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_food_bs_soda_01.ydr.
[   3323937] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3323937] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 3 edge reference is invalid. It leads to vertex 10, when there are only 8 vertices.
[   3327719] [b3095_GTAProce] ResourcePlacementThr/ Physics validation failed for asset prop_food_bs_soda_01.ydr.
[   3327719] [b3095_GTAProce] ResourcePlacementThr/ This asset is **INVALID**, but we've fixed it for this load. Please fix the exporter used to export it.
[   3327719] [b3095_GTAProce] ResourcePlacementThr/ Details: Poly 3 edge reference is invalid. It leads to vertex 10, when there are only 8 vertices.
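For server owners trying to narrow this down, the failing assets can be counted directly from the log. A minimal sketch (not official tooling; the line format is taken from the excerpt above, and the sample lines are copied from it):

```python
import re
from collections import Counter

# Pull the asset name out of "Physics validation failed for asset X."
# lines so the worst offenders can be fixed or removed first.
FAIL_RE = re.compile(r"Physics validation failed for asset (\S+)\.")

def failing_assets(log_lines):
    """Return a Counter mapping asset name -> number of validation failures."""
    counts = Counter()
    for line in log_lines:
        m = FAIL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Sample lines based on the log excerpt above:
sample = [
    "ResourcePlacementThr/ Physics validation failed for asset prop_rock_1_f.ydr.",
    "ResourcePlacementThr/ Physics validation failed for asset prop_billboard_03.ydr.",
    "ResourcePlacementThr/ Physics validation failed for asset prop_rock_1_f.ydr.",
]
top = failing_assets(sample)
```

Feeding it a full citizenfx_log.log gives a ranked list of invalid assets to investigate.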

Gogsi avatar May 12 '25 15:05 Gogsi

If a bad map were causing the issue, wouldn't the crash happen in that area of the map? Since I've been getting this crash, it happens all over the map; it doesn't matter where you are.

ThunderNLRP avatar May 12 '25 15:05 ThunderNLRP

We’re encountering this error frequently across the map, although some players never experience it at all. We’ve even tried the NVIDIA Game Filter fix, but it only improves the situation for certain players.

We also occasionally see the issue when players with props attached to their backs enter vehicles. However, it tends to occur most commonly near the Vanilla Unicorn location.

iplayer1337fivem avatar May 12 '25 15:05 iplayer1337fivem

I got some information on this crash today. It is apparently caused by maxing out your texture buffer. The suggestion I got was to turn down distance scaling and extended distance scaling, and potentially turn down the extended texture budget. This happens regardless of how much video memory your PC has.

ThunderNLRP avatar May 12 '25 23:05 ThunderNLRP

Okay, I had a player who was permanently crashing like this every 5-10 minutes. The player stopped getting these errors after two big changes: rolling the NVIDIA drivers back to the December release, and setting pure mode level 1.

iplayer1337fivem avatar May 18 '25 04:05 iplayer1337fivem

I doubt it would fix the issue but you can try. The issue likely seems to stem from bad map mods - the crash is somewhere in the streaming code, when the game attempts to delete an object, and the logs are filled with errors like the ones quoted above.

If these crashes are caused by bad map assets like you said, how are we supposed to fix them when a lot of our maps are Cfx-escrowed? We can't use YBN-YMAP Mover or similar tools if the map is escrowed.

Any ideas on what to do in that case?

Neaw92 avatar Jun 04 '25 22:06 Neaw92

Really looking for insight on this too. Previous game builds did not have this crash nearly as much in my testing, but recently there have been tons of crash reports.

IMCraytex avatar Jun 13 '25 11:06 IMCraytex

same crash here too

[     196641] [ DumpServer] 61156/ Process crash captured. Crash dialog content:
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_1407EFA60 (0x6d)
[     196641] [ DumpServer] 61156/ An error at GTA5_b3258.exe!sub_1407EFA60 (0x6d) caused FiveM to stop working. A crash report is being uploaded to the FiveM developers.
[     196641] [ DumpServer] 61156/ Legacy crash hash: blue-hot-jupiter
[     196641] [ DumpServer] 61156/ Stack trace:
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_1407EFA60 (0x6d)
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_1407F18E0 (0x220)
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_1407FF0F4 (0x129)
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_1407FF0DC (0xf)
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_14165E3C0 (0x68)
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_141648E34 (0x257)
[     196641] [ DumpServer] 61156/ GTA5_b3258.exe!sub_141656984 (0x6d)
[     198735] [ DumpServer] 61156/ Crash report service returned si-62e3780b33684781a01cb41dce441fb1

juice-me avatar Jun 22 '25 02:06 juice-me

May be related, but this one is 0x6c, not 0x6d:

[    3398641] [ DumpServer] 51876/ Process crash captured. Crash dialog content:
[    3398656] [ DumpServer] 51876/ KERNELBASE.dll!RaiseException (0x6c)
[    3398656] [ DumpServer] 51876/ An error at KERNELBASE.dll!RaiseException (0x6c) caused FiveM to stop working. A crash report is being uploaded to the FiveM developers.
[    3398656] [ DumpServer] 51876/ Stack trace:
[    3398656] [ DumpServer] 51876/ KERNELBASE.dll!RaiseException (0x6c)
[    3398656] [ DumpServer] 51876/ libcef.dll!base::internal::OnNoMemoryInternal (0x2a) (memory.cc:42)
[    3398656] [ DumpServer] 51876/ libcef.dll!base::TerminateBecauseOutOfMemory (0x8) (memory.cc:69)
[    3398656] [ DumpServer] 51876/ libcef.dll!partition_alloc::internal::OnNoMemory (0x14) (oom.cc:19)
[    3398656] [ DumpServer] 51876/ libcef.dll!blink::ReportV8OOMError (0x11b) (v8_initializer.cc:690)
[    3398656] [ DumpServer] 51876/ libcef.dll!v8::Utils::ReportOOMFailure (0x22) (api.cc:355)
[    3398656] [ DumpServer] 51876/ libcef.dll!v8::internal::V8::FatalProcessOutOfMemory (0x355) (api.cc:311)

juice-me avatar Jun 22 '25 05:06 juice-me

Have started receiving this crash on an AMD (CPU/GPU) system, so it can't be caused by NVIDIA drivers.

Kc2693 avatar Jul 06 '25 03:07 Kc2693

So is there any update on this?

qalle-git avatar Jul 20 '25 17:07 qalle-git

Still a pretty big problem, any news? It seemingly comes and goes: we will see a sharp drop in blue-hot-jupiter crashes, then a sharp increase a week later, without any asset changes on the server.

IMCraytex avatar Jul 27 '25 20:07 IMCraytex

Still a major crash awaiting a solution...

juice-me avatar Jul 27 '25 22:07 juice-me

https://github.com/citizenfx/fivem/pull/3548 this should in theory solve this issue

ook3D avatar Jul 27 '25 22:07 ook3D

Aaah, that's exciting! Anything we can do on our end in the meantime, like patching the river entities on our server?

IMCraytex avatar Jul 28 '25 04:07 IMCraytex

#3548 this should in theory solve this issue

Has this issue been fully fixed yet? We’re still experiencing the crash on b18270 / Windows with the latest artifact.

Neaw92 avatar Aug 14 '25 00:08 Neaw92

#3548 this should in theory solve this issue

still crashing here too

juice-me avatar Aug 15 '25 22:08 juice-me

can confirm this still happens!

talonlzr avatar Aug 25 '25 20:08 talonlzr

It’s been over 4 months since this was reported, and the 0x6D / blue-hot-jupiter crash remains one of the most frequent causes of client instability. The stack traces are consistent (sub_1407EFA60), the crash hash is consistent (blue-hot-jupiter), and it has been reproduced across multiple independent servers.

Despite multiple logs, dumps, and confirmations provided, there has been no documented root cause, no confirmed resolution, and no communicated timeline. The issue continues to appear regularly in current builds.

The significance of this crash lies not only in its frequency but in its impact: months of consistent reproduction across servers erode stability and directly affect player retention. Even strong communities struggle when crashes dominate the client experience, making this one of the most consequential issues facing the platform.

Given the duration and scale, it would be valuable for someone on the Cfx team — @Nobelium-cfx @Gogsi — to take a deeper look into this crash and provide an update on where it currently stands.

juice-me avatar Aug 28 '25 00:08 juice-me

It should be noted that Gogsi is not part of the Cfx team.

ook3D avatar Aug 28 '25 00:08 ook3D

Appreciate the clarification @ook3D. Since you’re closer to the project — can you confirm whether anyone on the actual Cfx team is actively looking into the 0x6D / blue-hot-jupiter crash?

It’s been 4+ months with consistent reproduction, but no confirmed root cause or resolution.

juice-me avatar Aug 28 '25 00:08 juice-me

Appreciate the clarification @ook3D. Since you’re closer to the project — can you confirm whether anyone on the actual Cfx team is actively looking into the 0x6D / blue-hot-jupiter crash?

It’s been 4+ months with consistent reproduction, but no confirmed root cause or resolution.

🤷

ook3D avatar Aug 28 '25 00:08 ook3D

Appreciate the clarification @ook3D. Since you’re closer to the project — can you confirm whether anyone on the actual Cfx team is actively looking into the 0x6D / blue-hot-jupiter crash? It’s been 4+ months with consistent reproduction, but no confirmed root cause or resolution.

🤷

🤦

juice-me avatar Aug 28 '25 00:08 juice-me

I've deleted my previous comment. I misread the issue and thought it was about #3079, where it was in fact mentioned that TerminateThread causes issues

Gogsi avatar Aug 28 '25 17:08 Gogsi

This one is seemingly getting worse and worse. We've tried most ways of remedying it, but it seems related to texture loading failing, so possibly too much load on texture loading/mapping. On clients that have not yet set their extended texture budget to around 70% (which seems to be the sweet spot for some reason), this happens immediately on 2025 client builds. With the setting turned up to around 70%, it only happens after some time.

talonlzr avatar Sep 05 '25 06:09 talonlzr