Headless has very high CPU usage on startup due to yt-dlp not being executable
Describe the bug?
The headless randomly starts using up a lot of CPU time, with one or several threads pegged at 100% CPU usage. This continues even after everyone has left all hosted worlds, and even if those worlds are restarted. Normally the only thing that stops it is a full restart of the headless software.
I've been dealing with this particular issue for a very long time now but have never been able to get a 100% reproducible case because it's been so random. Today I woke up to both of my headlesses using up 100% of my server's CPU time. Restarting them usually resolves the issue, but not this time: it now triggers on every single restart, with any of my hosted worlds.
I've had this happen to my headlesses for as long as I can remember, probably up to a year, but with no clear trigger. My best guess was that someone's avatar starts some loop in the headless server that keeps going even after that user leaves the world. Now it doesn't even seem to need that, and the issue triggers immediately upon starting the world.
To Reproduce
- Start up a headless using this world: resrec:///G-1TnDablLGdc/R-02ba7821-b618-4b15-9814-620b53cc7ba9
- Observe CPU usage on one or multiple threads being pegged at 100% (one way to watch per-thread usage is sketched below).
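For reference, here is a minimal sketch of watching per-thread CPU usage of the headless process on Linux. Filtering by the process name "Resonite" is an assumption; adjust it to however your instance is launched, or pass the PID directly.

```bash
# Show per-thread CPU usage for the headless process (press 'q' to quit).
# "Resonite" as the process-name filter is an assumption; use the real PID if needed.
top -H -p "$(pgrep -f Resonite | head -n 1)"
```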
Reproduction Item/World
resrec:///G-1TnDablLGdc/R-02ba7821-b618-4b15-9814-620b53cc7ba9
The above world is a slightly modified version of GearBell's world of the same name. Trying with the original world from GearBell, however, does NOT trigger the issue.
Expected behavior
Low or no CPU usage when idling with an empty world.
Screenshots
Resonite Version Number
2025.9.12.1173
What Platforms does this occur on?
Linux
What headset if any do you use?
Headless
Log Files
server2 - 2025.9.12.1173 - 2025-09-13 16_29_14.log
The above log is from one of my headless instances loading my most minimal world, which triggers the issue.
Additional Context
No response
Reporters
uruloke
Hello! Here are the results of the automated log parsing:
| Version | OS | CPU | GPU | VRAM | RAM | Headset | Plug-ins/Mods | Renderer | Clean Exit |
|---|---|---|---|---|---|---|---|---|---|
| Beta 2025.9.12.1173 | Ubuntu | AMD Ryzen 9 7940HS w/ Radeon 780M Graphics | Phoenix1 (rev c1) | 14.58 GB | 29.16 GB | no | None | | ❌ |
This message has been auto-generated using logscanner.
These are my two headless instances running normally. This one is hosting 1 world:
And my other instance is hosting 4 worlds:
Which results in my poor server having almost all CPU time used by two idling headlesses:
I'm a bit confused by the description in this issue. You say this happens at "startup", but you also say that this continues even after everyone leaves - how are there users present during startup? You mention that it happens randomly too.
Is this happening specifically when the world starts? Or am I misunderstanding something?
- Does it also happen only in that particular world?
- Are there any other observable negative effects? E.g. does the headless become unusable?
- What's the FPS of the headless in that world when this happens?
Sorry about the confusion. I've had this issue for several months. It USED to be that it only happened randomly when there were people in the worlds, and it would then stick around until I restarted the world. Now, however, it also happens immediately when starting up the headless.
I usually run two headless instances. One instance hosts one world, the other hosts four worlds. It has happened to both instances. On the instance that hosts four worlds I just picked the least complicated world, and it happened to trigger on that one too. I haven't checked the other three worlds hosted on that instance, but seeing as it gets even worse when I host all four, I imagine it happens on those worlds too.
The only observable negative effect has been that the world can stutter and lag occasionally. I only tend to notice it because it makes my server ramp up its fans to 100% to try and cool itself.
I'd have to check again, but I think the headless instances hold pretty steady at the configured tick rate of 45 unless I have a lot of people in the worlds. I had to reduce the tick rate from 60 to 45 a few months ago when I started running two instances on the same server.
After letting the headless instances run for a while they use even more CPU. Currently two users total in this instance.
And total server usage is at 100%.
I've noticed that since the headless is pretty much pinning my CPU at 100% most of the time now, there is less headroom for how many people I can host and how much can happen in the hosted worlds. I now frequently get moments where the headless freezes for a second or so, making everyone stop moving/talking for the same amount of time.
If there are any ways for me to diagnose this issue myself, that'd be very helpful. At this point I might just try to remote debug my headless using the PDB files that ship with it to see if I can find any clue as to what is hogging so much CPU time.
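For anyone wanting to try the same, here is a rough sketch of collecting a CPU trace from the headless, assuming the .NET dotnet-trace tool is installed and the process can be found by name (both are assumptions; adjust for your setup):

```bash
# Install the tracing tool (assumes the .NET SDK is available on the server).
dotnet tool install --global dotnet-trace

# Sample the running headless and write a speedscope-compatible trace.
# Finding the process by the name "Resonite" is an assumption; use the real PID if needed.
dotnet-trace collect --process-id "$(pgrep -f Resonite | head -n 1)" --format Speedscope
```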
After doing some traces of the headless I've now found the cause of it using up 100% CPU time from startup.
A few updates ago, when steamcmd started segfaulting on Linux while trying to update the headless software, I switched over to depotdownloader instead. Comparing the two now, it appears that steamcmd applies file permissions 775 (rwxrwxr-x) to all files it downloads, while depotdownloader does not: some of its downloaded files only have 664 (rw-rw-r--). I can only assume it uses whatever is actually stored on the Steam CDN.
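One way to spot files that are missing the execute bit after a depotdownloader update (the `Headless/` path is just an assumed install prefix, point it at your own install directory):

```bash
# List regular files under the headless install that are missing the owner execute bit.
# "Headless/" is an assumed install path; adjust to wherever depotdownloader placed the files.
find Headless/ -type f ! -perm -u+x -exec ls -l {} +
```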
Anyway, because of the above, I noticed that the headless was throwing tens or hundreds of exceptions per millisecond in the yt-dlp interaction code, an InvalidOperationException with the message "No process is associated with this object.", and the Resonite logs showed errors starting the yt-dlp process.
TLDR: depotdownloader didn't set execute permissions on the yt-dlp binary, and Resonite kept generating hundreds of exceptions trying to start it and ensure it was running. The fix was simply adding execute permission to yt-dlp.
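Concretely, the fix boils down to something like this (the `Headless/` prefix is an assumption about the install directory; the `RuntimeData/yt-dlp` location matches what other commenters report below):

```bash
# Restore the execute bit on the bundled yt-dlp and verify the permission change.
# "Headless/" is an assumed install prefix; adjust to your own install directory.
chmod +x Headless/RuntimeData/yt-dlp
ls -l Headless/RuntimeData/yt-dlp
```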
I have been noticing something similar. Randomly the headless will start pegging 12-15 threads at 100%; the only fix is to restart the headless, not just the world.
However, I have also not been able to find what triggers this or any errors in the log, so I haven't reported it.
I have encountered this issue as well on a headless server. Every time someone creates a YouTube video player, the CPU usage goes up by 10-20% and never goes back down.
However, after I manually ran `chmod +x Headless/RuntimeData/yt-dlp`, the issue was fixed for all subsequent videos spawned, as suggested by others here.
As it turns out, steamcmd sets the +x flag on every file when it downloads a game, like the Resonite headless server. This explains why only some people encounter the issue: most people use steamcmd in their headless setup process, so for them yt-dlp is executable out-of-the-box.
Only people using other means of downloading the headless server may encounter this bug. For example, DepotDownloader does not do the same thing! Every downloaded file has -x permissions. And maybe it’s the same for people just using their actual Steam client to download the headless?
So headless owners who use DepotDownloader or the Steam client instead of steamcmd should run `chmod -R +x Headless/` to be sure not to encounter this kind of issue.
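For completeness, these are the two variants mentioned in this thread, assuming the same `Headless/` install prefix as above (an assumption, adjust to your install path):

```bash
# Blanket approach from this comment: mark everything under the install as executable.
chmod -R +x Headless/

# Narrower fix from earlier in the thread: only restore the execute bit on yt-dlp itself.
chmod +x Headless/RuntimeData/yt-dlp
```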
Interestingly, this issue can easily be reproduced on the graphical client as well!
- `chmod -x RuntimeData/yt-dlp`
- Spawn YouTube video players
- CPU usage goes brrrrr
Mostly unfounded hypothesis time: I believe this may be an edge case of the behavior of Process on Linux, coupled with a hot-wired waiting loop in the NYoutubeDL library.