
Really high memory usage

Open hnioche opened this issue 6 years ago • 33 comments

Hi, I'm using Docker for a development environment which includes a mysql image. My current computer runs up-to-date Arch Linux with the default Docker setup (community/docker and community/docker-compose). All the containers I use locally work fine (a Ruby container, a Node.js one, a few .NET Core ones, memcached).

My only issue is with mysql: each time I start the container, it immediately uses all the memory available on my computer. Even the most basic use of the Docker image, with no database, uses 16GB.

I've tried the Docker library mysql image in versions 8 and 5.7, the Oracle version, and the Percona version; they all have the same issue. I've tried mariadb and it works as it's supposed to, using 100-something MB. I've also tried the same mysql image with podman and had no issue; there it uses around 200 MB.

My version of docker is:

Docker version 18.09.8-ce, build 0dd43dd87f

Here's the Dockerfile

FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD=rootpassword
ENV MYSQL_ALLOW_EMPTY_PASSWORD=yes
ENV MYSQL_DATABASE=database

Here's docker stats

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
d66ccdbd03aa        boring_haibt        0.20%               14.27GiB / 15.51GiB   92.00%              1.34kB / 0B         439MB / 299MB       27

And top inside the container

top - 16:11:06 up  7:32,  0 users,  load average: 0.84, 2.05, 1.42
Tasks:   2 total,   1 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  5.4 us,  3.0 sy,  0.2 ni, 87.1 id,  3.3 wa,  0.9 hi,  0.2 si,  0.0 st
KiB Mem:  16262332 total, 16100992 used,   161340 free,    10252 buffers
KiB Swap:  8388604 total,  6197880 used,  2190724 free.   443820 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
    1 mysql     20   0 17.005g 0.013t      0 S   0.0 85.0   0:05.59 mysqld
  172 root      20   0   24152   2336   2000 R   0.0  0.0   0:00.15 top

I tried to limit the memory usage of the container using -m, and mysqld refuses to start when the limit is below 10g, with the error:

ERROR: mysqld failed while attempting to check config
command was: "mysqld --verbose --help"

I found something even weirder. On the same machine, I run a Windows virtual machine using qemu-kvm. When this machine is started, the mysql container behaves normally. When it is not started, mysql uses all the memory of my computer.

I'm not entirely sure the issue is due to this docker image, but I'm a bit lost and don't know how to troubleshoot this problem further. It seems to be specific to mysql running in a container on docker.

hnioche avatar Jul 19 '19 12:07 hnioche

When this machine [windows vm] is started, the mysql container behaves normally. When this machine is not started, it [mysql] uses all the memory of my computer.

That's an interesting issue, especially since the process inside the container is what's actually using the memory.

Here's a quick look at the normal metrics from a mysql start:

$ docker run -d --rm --name mysql -e MYSQL_ROOT_PASSWORD=root mysql:5.7                    
3f6d13a3418954dfde81727b908e084f5ccd29b57cd1f063bd53a9aac39699e0

After about 30 seconds it settles

CONTAINER ID       NAME     CPU %      MEM USAGE / LIMIT     MEM %    NET I/O             BLOCK I/O           PIDS
3f6d13a34189       mysql    0.15%      183.9MiB / 6.578GiB   2.73%    3.24kB / 0B         98.3kB / 760MB        27

Another unusual difference is in your BLOCK I/O metrics: your input stays high while the output is low.

With it being affected by QEMU, I'm thinking it might be something with the Docker engine, plus something in your environment that's triggering an edge case. So I'd file an issue over at https://github.com/moby/moby/issues

Also what if you try using a host-mounted volume for /var/lib/mysql?

wglambert avatar Jul 19 '19 17:07 wglambert

Indeed, you're totally right about the IO; I had totally missed that. I'm going to look in that direction and try to find out what this IO is about.

I think you're right; it's more likely due to my environment, as I was not able to recreate the issue on a really similar environment (same OS, Docker version, and amount of memory).

I'll follow your advice and open an issue on moby as soon as I've finished troubleshooting the IO part. Thanks for your input!

hnioche avatar Jul 22 '19 13:07 hnioche

I tried to look at the IO side; it seems really random and I couldn't find anything relevant. I wonder if it could be caused by swapping, given the really high memory usage. Also, starting the Windows VM has no impact anymore, and I can't think of anything I've changed on that side. I tried mounting /var/lib/data without seeing any impact, either as a volume or as a folder from the host. Finally, I tried to limit memory at the Docker daemon level, also without any impact; the setting is completely ignored.

I'll open an issue on Moby project later today and will link it here before closing this issue

hnioche avatar Jul 24 '19 08:07 hnioche

Just chiming in because I'm experiencing this issue as well. Also on Arch, also exactly the same symptoms that @hnioche described.

Until recently I could run the container without issues by waiting some time after booting up, but for some reason that no longer works :man_shrugging:

hschne avatar Jul 29 '19 14:07 hschne

I had to jump on some other things, so I didn't pursue investigating this issue further. Just out of curiosity, what's your CPU, @hschne? On my laptop I have an Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz. On a desktop setup, also with Arch but with an AMD Zen CPU, I don't have the issue at all.

hnioche avatar Aug 02 '19 17:08 hnioche

Mine is an Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz. I can confirm that I haven't experienced the issue on any of my other devices. A real mystery right here :ghost:

hschne avatar Aug 05 '19 14:08 hschne

Are your other devices on Arch too? Do you have any specific Docker daemon settings on your i7 computer? Any specific kernel parameters at boot time or via sysctl?

hnioche avatar Aug 05 '19 15:08 hnioche

Yes, the other devices run Arch as well, specifically Antergos. I'm not aware of any specific daemon or kernel settings; everything should be vanilla. If you tell me what you are looking for specifically, e.g. which commands to run, I can post the output here.

Right now I'm using Podman to run the MySQL container, and that works without any issues :)

hschne avatar Aug 05 '19 15:08 hschne

Hello, I'm experiencing the same issue described here, running on Arch, my CPU is the same Intel Core i7-8565U. Some possibly relevant additional info:

  • I have an NVME SSD
  • root and home are on separate ext4 partitions
  • kernel flags: rootfstype=ext4 add_efi_memmap pci=noaer nouveau.modeset=0 pci=biosirq

evolbug avatar Aug 05 '19 16:08 evolbug

Relevant report on Redhat Bugzilla that was apparently fixed: https://bugzilla.redhat.com/show_bug.cgi?id=1708115

evolbug avatar Aug 06 '19 13:08 evolbug

Relevant report on Redhat Bugzilla that was apparently fixed: https://bugzilla.redhat.com/show_bug.cgi?id=1708115

Oh, good find! The problem seems really close, but the version mentioned in the ticket that supposedly fixes the issue is older than the one I've tried. I wonder if the bug is in moby itself or in the distribution's integration of it.

hnioche avatar Aug 07 '19 13:08 hnioche

Thank you so much @evolbug! The bug you found put me on the right track. The issue is caused by the nofile ulimit; by default on Arch, the value is too low, I guess. Running the container with --ulimit nofile=262144:262144 solves the issue and mysql behaves normally. I guess the best way to fix this would be to set this option by default.

Should this be documented on docker images prone to this issue?

Edit: Actually, it's weird; I think the default limit is way too high. This is what I get by default:

> docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn'
1073741816
1073741816

So lowering the value fixes the issue
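As a side note, the `--ulimit` workaround can be reasoned about without Docker at all, since it relies on the ordinary rlimit mechanism: a process may lower its own soft limit, and the change affects only that process and its children. A minimal shell sketch (the 1024 value is arbitrary, chosen only for illustration):

```shell
# Lower the soft nofile limit in a subshell; the parent shell is untouched.
# This is the same setrlimit mechanism that docker's --ulimit flag applies
# to the container's init process before mysqld starts.
( ulimit -Sn 1024; echo "child soft limit: $(ulimit -Sn)" )
echo "parent soft limit: $(ulimit -Sn)"
```

With `--ulimit nofile=262144:262144`, Docker performs the equivalent adjustment on the container's PID 1, so every process in the container inherits the capped limit.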

hnioche avatar Aug 08 '19 12:08 hnioche

Running the container with --ulimit nofile=262144:262144 solves the issue and mysql behaves normally.

You're a god. Thanks to @evolbug as well!

I can confirm that this fixes the issue :ok_hand:

hschne avatar Aug 08 '19 12:08 hschne

I agree as well that this should be set by default on containers exhibiting this behaviour, as it's quite difficult to track down and could happen across docker versions, as seen in the redhat report

evolbug avatar Aug 09 '19 13:08 evolbug

The Docker package from pacman uses LimitNOFILE=1048576 in docker.service, which is also the default on my Ubuntu install and matches the host's ulimit. So is something changing your hosts' or Docker's ulimit to 1073741816?

Curiously, the ulimit value of 1073741816 is 1023.999×1048576, not exactly 1024×1048576 but close. It seems that when the limit is set to infinity, this is the value that ends up applied: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=920913
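That "not exactly 1024 but close" observation can be pinned down precisely with plain shell arithmetic (nothing Docker-specific here): the suspicious value is exactly 8 below 1024·1048576, i.e. 2^30 − 8.

```shell
# 1024 * 1048576 is 2^30 = 1073741824; the observed limit is 8 less.
echo $(( 1024 * 1048576 - 1073741816 ))   # prints 8
echo $(( 1 << 30 ))                       # prints 1073741824
```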

Looks like Fedora has something related to that exact ulimit size https://bugzilla.redhat.com/show_bug.cgi?id=1715254

the default ulimit value (1073741816 on my Fedora 30)

And an Ubuntu issue was filed that might be relevant? https://www.mail-archive.com/[email protected]/msg5628533.html

due to PermissionsStartOnly=true, systemd runs ExecStartPre commands with insane limits

wglambert avatar Aug 09 '19 17:08 wglambert

Running ulimit -Hn && ulimit -Sn on my host (pure Arch) shows

evol~ > ulimit -Hn && ulimit -Sn
524288
1024

Which appears to be half of the service's limit. Could it be this mismatch?

Edit: another bit of odd behaviour:

evol~ > docker run ubuntu /bin/bash -c ulimit -Hn
unlimited
evol~ > docker run ubuntu /bin/bash -c ulimit -Sn
unlimited
evol~ > docker run ubuntu /bin/bash -c 'ulimit -Hn && ulimit -Sn'
1073741816
1073741816

evolbug avatar Aug 09 '19 17:08 evolbug

What's your cat /proc/$(pgrep dockerd)/limits?

$ cat /proc/$(pgrep dockerd)/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             unlimited            unlimited            processes 
Max open files            1048576              1048576              files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       46429                46429                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us

It could be your systemd/init file that's changing the ulimit

Also, your docker run line is being interpreted by the host's shell when it's not in single quotes. Example:

$ /bin/bash -c ulimit -Hn
unlimited
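To spell out why the unquoted form misleads: `bash -c` takes only the word immediately after it as its script, so `-Hn` never reaches `ulimit`, and a bare `ulimit` reports the file-size limit (`-f`), which is typically unlimited. A sketch of both forms:

```shell
# Unquoted: bash runs the one-word script "ulimit" (with "-Hn" assigned to $0),
# so this reports the default -f file-size limit, usually "unlimited".
/bin/bash -c ulimit -Hn

# Quoted: the whole string is the script, so -Hn actually reaches ulimit
# and you get the real hard nofile limit.
/bin/bash -c 'ulimit -Hn'
```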

wglambert avatar Aug 09 '19 17:08 wglambert

I see, here it is

evol~ > cat /proc/$(pgrep dockerd)/limits
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             unlimited            unlimited            processes 
Max open files            1048576              1048576              files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       63448                63448                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        

evolbug avatar Aug 09 '19 17:08 evolbug

On this note, I had tried to reduce the number of open files by adding this to my systemd docker service override:

LimitNOFILE=49152

And it seems to be applied properly

➜  ~ cat /proc/$(pgrep dockerd)/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            49152                49152                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       63209                63209                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

But it had no impact on the mysql container ulimits
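One plausible explanation for that (not verified on this setup) is that containers are spawned by containerd/runc rather than directly by dockerd, so a LimitNOFILE override on docker.service never reaches them. The knob that explicitly targets containers is the daemon's `default-ulimits` setting; a sketch of `/etc/docker/daemon.json`, reusing the 262144 value from the `--ulimit` workaround earlier in this thread:

```shell
# Sketch: give every container a default nofile limit via the daemon config.
# "default-ulimits" is the documented daemon.json key; the values mirror the
# --ulimit nofile=262144:262144 workaround.
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Soft": 262144, "Hard": 262144 }
  }
}
EOF
sudo systemctl restart docker
```

A per-container `--ulimit` flag still overrides this daemon-wide default.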

hnioche avatar Aug 09 '19 17:08 hnioche

I'm encountering memory issues too when running MySQL within a Docker container. I found the following info suggesting that Docker is not releasing MySQL's connection threads: https://stackoverflow.com/questions/52439641/mysql-in-docker-container-never-releases-memory-and-regularly-crashes-cause-out

Quoting Wilson's comment:

"You have already identified your problem. Docker never RELEASES memory. SELECT @@threads_connected; will confirm your suspicion. When you know everything has completed, threads_connected should reduce when the Docker software asks for CLOSE() and release the resources. I suspect someone missed the CLOSE() or equivalent request in the docker software. Suggestions for MySQL configuration are on the way today. Wilson"

baj1210 avatar Dec 01 '19 05:12 baj1210

@baj1210, that quote does not seem correct. Docker does not intercept memory allocations. That is provided by whatever libc is in the image (and any container limits are enforced by the kernel, not dockerd).

If you are talking about a bug in docker-userland-proxy that it doesn't close the connection, then that should be reported to https://github.com/docker/libnetwork

yosifkit avatar Dec 03 '19 01:12 yosifkit

Is it normal that MySQL (8.0.23) in Docker consumes so much memory? I have about 20 other containers and MySQL is the one that consumes the most. How is that possible, and what is the solution?

I am on WSL2 with Ubuntu 18.

chiqui3d avatar Apr 18 '21 12:04 chiqui3d

The same project with MariaDB's latest image consumes 80 MB. Compared to 340 MB that's much better, but I still think it's a huge amount.

chiqui3d avatar Apr 18 '21 18:04 chiqui3d

Same problem here. MySQL 8.0.25 doesn't release memory on Docker and constantly consumes 500 MB of my virtual machine's memory.

Piemontez avatar Jun 22 '21 20:06 Piemontez

Same problem here. MySQL 8.0.25 doesn't release memory on Docker and constantly consumes 500 MB of my virtual machine's memory.

Faced the same problem. Kernel 5.13.5-arch1-1. MySQL 8 consumes a minimal amount of memory, but MySQL 5 consumes almost all available memory.

fenKss avatar Jul 27 '21 13:07 fenKss

Same problem on a t4g EC2 instance with a MySQL 8.0.23 container.

giammin avatar Dec 07 '21 15:12 giammin

mysqld 8.0.13 on Docker version 20.10.8 on Fedora 34 causes the same problem. After updating the image to mysqld 8.0.28, I don't see any problem so far. It still consumes 372.6MiB, but that is reasonable according to the official documentation.

karakani avatar Jan 21 '22 06:01 karakani

I am having the same issues with MySQL 5.7.12, 5.7.13, and 5.7.37: it takes memory until it crashes, after around 10 seconds. I later tried version 8.0.28 and it works perfectly (it uses around 600 MiB).

I am using Fedora 35. The same project with the same MySQL version works fine on MacBooks.

SirMartin avatar Mar 21 '22 13:03 SirMartin

Same problem with MySQL 5.7.37 and Docker 20.10.13 using Fedora 35

vodkhard avatar Mar 21 '22 15:03 vodkhard

Same problem with MySQL 5.7.37 and Docker 20.10.13 using Fedora 35

Adding ulimits to your docker-compose service can fix it:

ulimits:
  nproc: 65535
  nofile:
    soft: 20000
    hard: 40000

SirMartin avatar Mar 21 '22 15:03 SirMartin