Results: 738 comments of meeb

It's likely in the (now generally unsupported and legacy) background worker library used in current tubesync builds, so it's not directly tubesync or yt-dlp. yt-dlp does allocate a lot of memory...

@locke4 this isn't going to be anything to do with gunicorn, it's the background workers which are far more complex to replace.

@locke4 Yes, that's the plan. However, I should warn you that this is a _significant_ amount of work, general-rewrite-and-refactoring big. Porting the tasks to celery tasks is...
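For anyone curious what that porting step looks like, here's a minimal sketch of a single job moved to a Celery task. The task name, model and method are made up for illustration and assume a Celery app is already wired into the Django project; the real tasks in tubesync will differ.

```python
from celery import shared_task


@shared_task(bind=True, max_retries=3)
def index_source_task(self, source_id):
    """Hypothetical port of a background_tasks job to a Celery task."""
    # Illustrative model and method names only; not tubesync's real API.
    from sync.models import Source

    source = Source.objects.get(pk=source_id)
    try:
        source.index_media()
    except Exception as exc:
        # Retry later instead of crashing the worker process.
        raise self.retry(exc=exc, countdown=60)
```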

Just to mention here that even with rewriting the worker and tasks system this won't really reduce the peak memory requirement much; it'll just mean it gets freed again after...

Thanks for the effort, @locke4! Specifically for your questions: 1. Ideally, `background_tasks` needs to be completely removed. This is blocking upgrading Django beyond 3.2 as well as nesting tasks is...
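On the nesting point specifically, a rough sketch of how nested work is usually expressed in Celery, queuing follow-up tasks rather than calling one task inside another. All names here, including the `find_new_media_ids` helper, are hypothetical:

```python
from celery import shared_task


@shared_task
def download_media(media_id):
    # Fetch a single media item, e.g. by invoking yt-dlp.
    ...


@shared_task
def index_source(source_id):
    # Instead of running downloads inline (which background_tasks cannot
    # schedule as nested tasks), queue each one as its own Celery task.
    for media_id in find_new_media_ids(source_id):  # hypothetical helper
        download_media.delay(media_id)
```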

I have not, and quite honestly it might be how I've implemented it rather than an upstream issue. Worth checking both ends though! I hacked up the original TubeSync in...

OK, also remember that if you're not tackling the concurrency and race condition issues initially, either lock the celery workers to 1 (and disable the ENV var for worker count)...
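As a sketch of that locking, Celery's standard `worker_concurrency` setting can be pinned in the app configuration. The module layout and app name here are assumptions, not tubesync's actual setup:

```python
from celery import Celery

app = Celery("tubesync")
app.config_from_object("django.conf:settings", namespace="CELERY")

# Pin to a single worker process so tasks run one at a time and cannot
# race each other, regardless of any container-level worker count ENV var.
app.conf.worker_concurrency = 1
```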

You should definitely keep your workers at 1. There's no technical reason you can't run more workers locally; however, YouTube quickly gets angry at you and blocks your IP or...

And thanks for the log; that just shows the worker using a load of memory for some reason and getting killed, but every bit helps.

You can ignore the message to run `manage.py migrate` - that shows up just because your download path is always different to the one in the default schema and it...
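As a rough illustration of the general mechanism described above (made-up field and setting names, not tubesync's real schema): when a model field default comes from an environment-specific setting such as a download path, the value Django sees at runtime differs from the default recorded in the shipped migrations, so Django reports pending changes even though nothing in the database actually needs altering.

```python
from django.conf import settings
from django.db import models


class Media(models.Model):
    # Hypothetical field: the default is computed from a per-environment
    # setting, so it never matches the value baked into the shipped
    # migrations, which is enough to trigger the migration notice.
    download_dir = models.CharField(
        max_length=255,
        default=settings.DOWNLOAD_ROOT,
    )
```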