Exclude ``inode``, ``dentry`` and other slabs from ``MEM USAGE``
I have an empty Ubuntu image on a cgroups v1 host:
FROM ubuntu:20.04
I run /bin/bash in it and, according to docker stats, it occupies 1.312MiB.
I then install Python (with apt) and run du --inodes -d1 /.
docker stats now shows 16.32MiB.
I do not have any processes running (except bash), so what happened?
I check memory.stat and see
total_cache 0
total_rss 491520
total_rss_huge 0
total_shmem 0
total_mapped_file 0
total_dirty 0
So my RSS is only 480 KiB; what occupies the other 16 megabytes?
I check memory.usage_in_bytes: 17113088.
And memory.kmem.usage_in_bytes is 16547840.
It seems that almost all of the container's memory is used by the kernel.
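To make the split concrete, here is a small sketch (using the two numbers from this report; the variable names are mine) that computes how much of the cgroup's usage is kernel memory:

```shell
usage=17113088   # memory.usage_in_bytes from this report
kmem=16547840    # memory.kmem.usage_in_bytes from this report
awk -v u="$usage" -v k="$kmem" 'BEGIN {
  printf "total: %.2f MiB, kernel: %.2f MiB (%.0f%% of usage)\n",
         u/1048576, k/1048576, 100*k/u
}'
```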
I check the slabs:
# sed 1,2d memory.kmem.slabinfo | awk '{RES=$2*$4; print RES"\t"RES/1024/1024" MB\t"$1}' | sort -n -r | head
8251776 7.86951 MB inode_cache
3600576 3.43378 MB dentry
2262832 2.158 MB ovl_inode
1096160 1.04538 MB proc_inode_cache
65536 0.0625 MB kmalloc-4096
65536 0.0625 MB kmalloc-2048
63360 0.0604248 MB sighand_cache
61440 0.0585938 MB task_struct
58624 0.0559082 MB filp
49128 0.0468521 MB shmem_inode_cache
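For reference, the pipeline above multiplies column 2 (active objects) by column 4 (object size in bytes) for each slab. A minimal sketch of the same pipeline over an inline two-slab sample (sizes derived from the listing above; the extra columns are placeholders), so it runs without a cgroup v1 mount:

```shell
printf '%s\n' \
  'slabinfo - version: 2.1' \
  '# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>' \
  'inode_cache 13572 13600 608 13 2' \
  'dentry 18753 18753 192 21 1' \
| sed 1,2d \
| awk '{RES=$2*$4; print RES"\t"RES/1024/1024" MB\t"$1}' \
| sort -n -r
```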
You see: kernel memory is occupied by inode_cache, dentry, and overlay inodes (ovl_inode).
Let's purge them (writing 2 to /proc/sys/vm/drop_caches asks the kernel to free reclaimable slab objects such as dentries and inodes):
echo 2 > /proc/sys/vm/drop_caches
Much better:
# sed 1,2d memory.kmem.slabinfo | awk '{RES=$2*$4; print RES"\t"RES/1024/1024" MB\t"$1}' | sort -n -r | head
161840 0.154343 MB proc_inode_cache
114624 0.109314 MB dentry
65536 0.0625 MB kmalloc-4096
65536 0.0625 MB kmalloc-2048
64448 0.0614624 MB inode_cache
63360 0.0604248 MB sighand_cache
61440 0.0585938 MB task_struct
58624 0.0559082 MB filp
57104 0.0544586 MB ovl_inode
49128 0.0468521 MB shmem_inode_cache
And docker stats now shows 1.863MiB.
Conclusion: if you list directories and files inside a container, its reported memory usage grows.
Docker charges the container for kernel slabs (the inode and dentry caches).
But why? Shouldn't this memory be excluded from the docker stats MEM USAGE, the same way the page cache is?
I know the caches will be reclaimed as needed, but this behavior makes docker stats almost useless!
Thanks for reporting; @AkihiroSuda @fredericdalleau @djs55 any ideas on this one?
I know we made some changes to the calculation over time (https://github.com/docker/cli/pull/2415, https://github.com/docker/cli/pull/80 / https://github.com/moby/moby/pull/32777), but I'm not sure if this particular case came up.
I'm running into this as well; it led me on a wild goose chase trying to hunt down a memory leak in my application. docker stats currently shows my container using 1008MiB, but the memory usage for the entire host, as reported by free -m, is only 441MiB.
Specifically, the problem is with dentry. My application spools request bodies to /tmp, so it creates a large number of short-lived files. The dentry cache keeps entries for these files even after they are deleted, so it continues growing until the kernel decides to reclaim it.
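A minimal sketch of the spooling pattern described above (the directory and file names are hypothetical): many short-lived files created and immediately deleted under a temp directory. Each create-and-delete cycle leaves a cached dentry behind, which cgroup v1 charges to the container until the kernel reclaims it.

```shell
spool_dir=$(mktemp -d)
for i in $(seq 1 100); do
  f="$spool_dir/req-$i.body"
  printf 'request body %s\n' "$i" > "$f"  # spool the request body
  rm "$f"                                 # short-lived: deleted right away
done
rmdir "$spool_dir"
echo "spooled and removed 100 files"
```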
I've found that setting up /tmp as a bind mount prevents dentry from inflating, so that's my current workaround for keeping the container's memory usage at a useful value.
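For illustration, the workaround above might look like the following invocation (myapp:latest and the host spool path are placeholders; --mount type=bind is standard docker run syntax). This is a configuration sketch, not a tested command:

```shell
docker run --rm \
  --mount type=bind,source=/var/tmp/myapp-spool,target=/tmp \
  myapp:latest
```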
We have a simple Postfix server in a container that suffered from exactly the same problem @luhn encountered. First we noticed ever-increasing memory usage in AWS ECS Fargate and suspected a memory leak in our app; we quickly determined there was none, then looked into slab and dentry, which turned out to grow and grow without ever being cleaned up.