
Excessive memory usage from ytdl-sub

Open LECOQQ opened this issue 1 year ago • 6 comments

Hey,

I've been playing with ytdl-sub for a few days now on my homelab: a Proxmox hypervisor running an Ubuntu 24.04 LTS VM, which runs ytdl-sub in a container via Docker Compose.

I've noticed that when downloading videos, the RAM usage of my Docker container (and thus my node, and thus my Proxmox host) goes up with each video downloaded. The thing is, it won't stop going up until it exceeds what is physically present in my homelab (32 GB). At that point it crashes, killing my node and requiring me to restart it manually, then restart the Docker container and the download. I've found a way to work around the issue by setting a memory limit on my container (16 GB, for instance). While downloading, it shows the same behavior, but at 16 GB the usage stops rising and the downloads keep going. After the downloads, even after killing the Docker container, my Proxmox node still shows that I'm using the full amount of RAM I limited it to earlier (16 GB+).

This doesn't seem right to me, and it might be a RAM management issue inside the Docker container. I think this should be handled natively, or at least documented somewhere, though I don't know whether it's a Proxmox issue or not.

Quentin

LECOQQ avatar Jul 19 '24 13:07 LECOQQ

Can you post your config/subscription, @LECOQQ? I know yt-dlp can sometimes eat quite a bit of memory when scraping large channels for the first time.

jmbannon avatar Jul 21 '24 04:07 jmbannon

I don't know why, but it seems like the Linux version of yt-dlp has this nasty memory leak. It certainly doesn't act this way when I'm using the Windows version of yt-dlp. Might be an issue the developers need to sort out on their end...

Yankees4life avatar Aug 18 '24 21:08 Yankees4life

Sorry for the late reply. I've attached the Docker Compose file. For the subscription configuration, nothing fancy, as I've had trouble dealing with it: I took the generic config provided and changed the URL to a channel I wanted to download. I've tried it on an Ubuntu computer and on an Ubuntu VM running under a Proxmox hypervisor on my homelab. Both show this memory leak, with usage increasing after each video downloaded.

ytdl-sub-docker-compose.txt

LECOQQ avatar Aug 19 '24 10:08 LECOQQ

Having the same issue (I think). I haven't looked into it all that much, but all I know is that since adding the container to my Docker VM on Proxmox, it keeps crashing my node. I had to remove it for now, unfortunately, but I'd love to get this resolved!

mikemilligram avatar Sep 17 '24 11:09 mikemilligram

I think what I'll try to do is chunk downloads in the background, then garbage collect to avoid any yt-dlp mem leaks.

In the meantime, isn't there a way to limit a container's memory? I suggest giving ytdl-sub only 4 GB of RAM max. This person runs ytdl-sub in a container with only 1 GB of memory: https://github.com/jmbannon/ytdl-sub/issues/1051
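
For anyone not using Compose, a rough sketch of the same idea with plain docker run (the image name, container name, and mount path here are assumptions; adjust them to your own setup):

# assumed image and paths for illustration; --memory/--memory-swap cap the container at 4 GB
docker run -d --name ytdl-sub \
  --memory 4g --memory-swap 4g \
  -v "$(pwd)/config:/config" \
  ghcr.io/jmbannon/ytdl-sub:latest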

jmbannon avatar Sep 20 '24 15:09 jmbannon

Yes, there is indeed a way; it's in the Docker Compose file I shared earlier. Add the lines below under the service definition in the compose file:

deploy:
  resources:
    limits:
      cpus: '4'
      memory: 16g

This is working fine for me after those modifications.

LECOQQ avatar Sep 21 '24 10:09 LECOQQ

For OP's specific use case: I run ytdl-sub in a Proxmox LXC (Debian 12). I have no memory leak problems; it works great, other than occasionally having to update the binary.

korpo53 avatar Mar 25 '25 16:03 korpo53

Just a quick message to add some thoughts I gathered while debugging other containers, learning about Proxmox, and understanding the caveats of Linux. I had exactly the same issue with qBittorrent: my RAM usage was slowly but surely rising, even though the application was just doing its job and was configured to consume only 4 GB of the RAM of the VM it ran on. When the RAM usage got too high (the 32 GB mentioned in my first post), the VM would also crash and I had to restart it, cancelling any tasks I had started in qBittorrent (like downloading totally legal Linux ISOs).

After investigating, I realized that the qBittorrent app, whether installed as a package or run in any container, with any available version, showed the same behavior. So I switched my approach and started monitoring the RAM consumption myself. Using:

watch -n 5 free -m

I saw that the used RAM was... always below 4 GB! When I added tasks to qBittorrent, RAM usage would climb to 4 GB and go no further. Yet watching my Proxmox dashboard, the reported RAM consumption kept soaring until it hit 32 GB and shut down the VM.

I then ran tasks with qBittorrent while watching the other memory metrics closely: available RAM, shared, and buff/cache. I came to realize that Linux (and this is totally normal behavior on Linux) reports the real RAM consumption as "used", while separately filling spare memory with buff/cache (from what I understand, pages kept around preemptively for future use), which has no real impact on memory pressure. This means buff/cache keeps rising continually and only stops once the RAM limit is hit, or once your tasks finish and the memory is freed. Used + buff/cache matched exactly what Proxmox was showing. When I stopped qBittorrent, the VM, configured in Proxmox with RAM ballooning, would simply free everything and carry on. If I let the app keep running, it would follow the typical behavior: hit 32 GB, kill the node, etc.
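
If you want to confirm this on your own VM, here is a quick sanity check (a rough sketch; run inside the VM as root): drop the kernel caches and compare the free output before and after. buff/cache should fall sharply while used barely moves, showing the memory was reclaimable rather than leaked.

free -m
# flush filesystem buffers, then drop the page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
free -m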

To fix it, I put a limit on the VM's RAM size: since qBittorrent itself only ever used 4 GB and everything else was just cache, I could reduce the VM's RAM to something like 8 GB. That amount can never trigger the 32 GB limit, since reported usage can only rise to 8 GB, and everything is now fine. Alternatively, I could cap it by adding a limit on the Docker container, as done earlier, or by setting a limit through Proxmox.

That's when everything clicked with this issue: it shows EXACTLY the same behavior. So I would say there is no memory leak in the application, nor excessive memory usage; it's typical Linux RAM behavior, combined with a Proxmox hypervisor that reports not just used RAM, but used + buff/cache RAM.

Sorry for the bad English, I hope this helps some of you. Take care ;)

LECOQQ avatar Mar 26 '25 16:03 LECOQQ