uwsgi causing excessive memory consumption
x64 Linux (Arch Linux). After spinning up the container and doing nothing else, memory consumption shoots up to 8GB.
BEFORE
arch@euler:/opt/linkding $ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
aeb4b9ee7cfa rutorrent-rtorrent-logs-1 0.00% 364KiB / 31.32GiB 0.00% 2.87kB / 0B 0B / 0B 1
d838745fa3f2 rutorrent-rutorrent-1 1.44% 485.8MiB / 31.32GiB 1.51% 13.2MB / 48.6MB 469MB / 422kB 280
9b717c1f26dd rutorrent-geoip-updater-1 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
a31914e3d87b paperless-webserver-1 0.19% 359MiB / 31.32GiB 1.12% 40.8MB / 44.5MB 127MB / 2.36MB 71
2e5c0b2744ac paperless-db-1 0.00% 56.18MiB / 31.32GiB 0.18% 911kB / 657kB 63.9MB / 113MB 9
c01ad3175b9b paperless-broker-1 0.12% 14.52MiB / 31.32GiB 0.05% 43MB / 40.1MB 13.2MB / 2.17MB 5
1cbb34e7201f trilium-trilium-1 0.00% 138.5MiB / 31.32GiB 0.43% 165kB / 1.15MB 48.6MB / 7.22MB 11
AFTER (a few seconds after starting)
arch@euler:/opt/linkding $ docker compose up -d
[+] Running 1/1
✔ Container linkding Started 0.6s
arch@euler:/opt/linkding $ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
dfb81049f424 linkding 0.02% 8.107GiB / 31.32GiB 25.89% 3.98kB / 0B 0B / 0B 8
aeb4b9ee7cfa rutorrent-rtorrent-logs-1 0.00% 364KiB / 31.32GiB 0.00% 2.87kB / 0B 0B / 0B 1
d838745fa3f2 rutorrent-rutorrent-1 0.08% 488.1MiB / 31.32GiB 1.52% 13.3MB / 49.3MB 471MB / 422kB 280
9b717c1f26dd rutorrent-geoip-updater-1 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
a31914e3d87b paperless-webserver-1 0.20% 359MiB / 31.32GiB 1.12% 40.8MB / 44.6MB 127MB / 2.36MB 71
2e5c0b2744ac paperless-db-1 0.01% 56.18MiB / 31.32GiB 0.18% 911kB / 657kB 63.9MB / 113MB 9
c01ad3175b9b paperless-broker-1 0.13% 14.52MiB / 31.32GiB 0.05% 43MB / 40.1MB 13.2MB / 2.17MB 5
1cbb34e7201f trilium-trilium-1 0.00% 138.5MiB / 31.32GiB 0.43% 165kB / 1.15MB 48.6MB / 7.22MB 11
htop reveals the culprit is uwsgi: https://imgur.com/mL1Byhv
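If you want to confirm that the file descriptor limit is the trigger (uwsgi pre-allocates internal structures per potential file descriptor, so a very high nofile limit inherited from the Docker daemon can translate into gigabytes of allocations), you can inspect the limit the container actually received. A sketch, assuming the container is named linkding and sh is available in the image:
# soft limit for open files as seen inside the running container
docker exec linkding sh -c 'ulimit -n'
# limits of the container's main process (look at the "Max open files" row)
docker exec linkding cat /proc/1/limits
# the docker daemon's default limit on this host, for comparison
docker run --rm --entrypoint sh sissbruecker/linkding:latest -c 'ulimit -n'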
here's the docker-compose.yml:
arch@euler:/opt/linkding $ cat docker-compose.yml
version: "3"
services:
  linkding:
    container_name: "linkding"
    image: sissbruecker/linkding:latest
    ports:
      - "9090:9090"
    volumes:
      - "/opt/linkding/data:/etc/linkding/data"
    env_file:
      - ".env"
    restart: unless-stopped
The env file is the default one, except for a LD_CSRF_TRUSTED... option being set.
The container has been erased, pruned, etc. and recreated from scratch many times; the same behavior is observed every time.
The same setup on a different machine (ARM) shows no problem.
See https://github.com/sissbruecker/linkding/issues/422
How do I fix it?
I was having this exact same issue on Fedora Server 38 -- I set up a linkding container and, immediately, it jumped to using ~8.8GB RAM. The work-around outlined below got my memory usage down from ~8.8GB to ~166MB.
So, as MarioNoll commented in #422:
As a workaround I'm setting a lower number for the maximum number of open file descriptors in my compose file (https://docs.docker.com/compose/compose-file/compose-file-v3/#ulimits), this brings memory consumption down to a normal level.
My "solution" was to use the following in my docker-compose.yml:
version: '3.9'
services:
  linkding:
    image: sissbruecker/linkding:latest
    volumes:
      - linkding:/etc/linkding/data
    ports:
      - 8084:9090
    container_name: linkding
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 1000 # Soft limit for open file descriptors
        hard: 2000 # Hard limit for open file descriptors
volumes:
  linkding:
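After recreating the container you can check that the lower limits actually took effect; a quick sketch (adjust the container name if yours differs):
docker compose up -d --force-recreate
docker exec linkding sh -c 'ulimit -n; ulimit -Hn'
docker stats --no-stream linkding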
I have no idea if the soft and hard limits that I've set on open file descriptors are unnecessarily high, just about right, or too low, but memory usage on a completely fresh linkding deployment is down from 8.8GB to 166.2MB.
I'll tweak the soft and hard limits if/when I start using the app and have some experience gauging performance with the numbers I've initially somewhat arbitrarily chosen.
@sissbruecker: I'd love to hear your thoughts if you have a perspective on this solution and/or recommended ranges for open file descriptor limits -- for single-user, family, work-team, or organization-wide deployments?
@986244073 & @morfismo: Hope this helps.
I had already come up with a similar solution; I used:
ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000
but I really don't know either whether those are good limits or not. (I was having the exact same problem with an rtorrent container, so I'm baffled as to why this is happening and whether those numbers help; the ulimits helped with linkding but did not help with rtorrent, so I ended up setting up a cronjob to restart rtorrent instead -- a sketch of the cron entry is below.)
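In case it's useful, here's roughly what such a cron entry could look like; the container name is taken from the docker stats output above and the schedule is arbitrary, so treat it as a sketch rather than a recommendation:
# restart the rtorrent container every day at 05:00 (name and time are assumptions)
0 5 * * * /usr/bin/docker restart rutorrent-rutorrent-1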
anyway, thank you @jakejackson
If there isn't a proper code fix for this, perhaps this workaround should be documented in a "Troubleshooting" section of the documentation? Then we could close this issue once that has been added.
If the docs get updated, maybe the example docker run commands can be updated to include ulimit and/or memory/cpu limits too?
I'm not using compose and had to add the following to my docker run command to prevent uwsgi from getting out of control on a 4x vCPU, 8GB RAM VPS this morning.
--memory=1G \
--cpus=1.5 \
--ulimit nofile=1000:2000 \
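For context, a rough sketch of how those flags might slot into a full docker run command, reusing the image, port, and data path mentioned earlier in the thread (your paths, port mapping, and limits may differ):
docker run -d \
  --name linkding \
  --restart unless-stopped \
  --memory=1G \
  --cpus=1.5 \
  --ulimit nofile=1000:2000 \
  -p 9090:9090 \
  -v /opt/linkding/data:/etc/linkding/data \
  --env-file .env \
  sissbruecker/linkding:latest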
Another workaround is to set a limit on file descriptors for uwsgi:
UWSGI_MAX_FD=4096
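If you are on compose, a minimal sketch of passing that variable to the container (this assumes the image's uwsgi picks up UWSGI_-prefixed options from the environment, which is what the workaround above relies on):
services:
  linkding:
    image: sissbruecker/linkding:latest
    environment:
      - UWSGI_MAX_FD=4096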