``docker stats CONTAINER`` reports zero memory usage
This is the line I have after launching a sample docker stats:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
apache 0.54% 0 B / 4.158 GB 0.00% 14.36 MB / 14.89 MB 2.998 MB / 0 B
extranet 0.38% 0 B / 4.158 GB 0.00% 321.2 MB / 759.6 MB 10.26 MB / 0 B
postgres 0.00% 0 B / 4.158 GB 0.00% 406.3 MB / 197.1 MB 4.612 MB / 963.5 MB
docker version and docker info outputs are at the end of this report if needed. But first, let me add that:
- on the same docker version (1.9.1) but a different host, it works fine
- I get a clear warning about missing memory limit support at the end of the docker info output
- I have run check-config from https://github.com/docker/docker/raw/master/contrib/check-config.sh and these options are missing compared to the working host:
- CONFIG_MEMCG_KMEM: missing
- CONFIG_MEMCG_SWAP_ENABLED: missing
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_CFS_BANDWIDTH: missing
- CONFIG_RT_GROUP_SCHED: missing
I've removed the other missing 'Optional Features' that were explicitly unrelated (the EXT3 feature, for example).
Could one of these missing config values be the culprit? If so, why are they dubbed optional? Can we have a clear warning when launching docker stats then?
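The comparison against the working host can be scripted. A minimal sketch (the function name is mine, not from check-config.sh; the option list mirrors the "missing" entries above, and the config path varies by distro):

```shell
#!/bin/sh
# Check a kernel config file for the options check-config.sh reported
# missing above. The path is a parameter because it varies:
# /boot/config-$(uname -r) on Debian, /proc/config.gz elsewhere.
check_opts() {
    config="$1"
    for opt in CONFIG_MEMCG_KMEM CONFIG_MEMCG_SWAP_ENABLED \
               CONFIG_CGROUP_HUGETLB CONFIG_CFS_BANDWIDTH CONFIG_RT_GROUP_SCHED; do
        if grep -q "^${opt}=y" "$config"; then
            echo "$opt: enabled"
        else
            echo "$opt: missing"
        fi
    done
}

# Usage on a Debian host:
# check_opts "/boot/config-$(uname -r)"
```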
Here is some general information about the failing system:
$ uname -a
Linux CAR-PRD-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1 (2015-11-19) x86_64 GNU/Linux
$ cat /etc/debian_version
8.2
$ docker version
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 12:59:02 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5
Built: Fri Nov 20 12:59:02 UTC 2015
OS/Arch: linux/amd64
$ docker info
Containers: 8
Images: 134
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 150
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 2
Total Memory: 3.873 GiB
Name: CAR-PRD-21
ID: C337:SIOE:AUZZ:AOPV:7BZ2:XS6O:UYJH:23TY:7M7R:LUZV:M5RO:6MRY
WARNING: No memory limit support
WARNING: No swap limit support
The system is running in VMware (both client and docker host).
Many thanks,
Thanks for an excellent bug report, and doing the research!
Good question, I know memory-limit is affected by the availability of these options, but not sure if they (should) affect reading memory use here.
ping @crosbymichael perhaps you know if this is expected behavior if these options are missing, or is this a bug?
I also get this issue on my local Debian box, which currently runs Docker 1.9.0. No problem on my Amazon Linux AMI instance (Docker 1.8.2), which does not show the following docker info lines, while the Debian one does:
WARNING: No memory limit support
WARNING: No swap limit support
Same issue. I looked on the host and noticed that the memory hierarchy is missing entirely from /sys/fs/cgroup:
ls /sys/fs/cgroup/
blkio
cpu
cpuacct
cpu,cpuacct
cpuset
devices
freezer
net_cls
net_cls,net_prio
net_prio
perf_event
Docker Info:
Server Version: 1.9.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 391
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
CPUs: 1
Total Memory: 494.5 MiB
Name: swarm-node-1
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
Labels: provider=digitalocean
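The absence shown in the listing above can be checked with a tiny helper. A sketch (the function name is mine; the mount root is a parameter, normally /sys/fs/cgroup):

```shell
#!/bin/sh
# Report whether a memory hierarchy exists under the given cgroup mount
# root. On the affected hosts, /sys/fs/cgroup/memory is simply absent.
memcg_mounted() {
    if [ -d "$1/memory" ]; then
        echo "memory controller present"
    else
        echo "memory controller missing"
    fi
}

# memcg_mounted /sys/fs/cgroup
```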
Yes, it should affect reading memory usage, since cgroup memory accounting is not going to work when these options are missing.
Not sure what to do here, as printing a warning isn't really going to work since the stats take up the screen... maybe instead of 0/0 we could show something like unsupported.
In my case, there is a separate issue documenting the enabling of the memory controller for viewing memory stats: https://github.com/docker/docker/issues/251.
@cpuguy83 Documenting that zero stats likely indicate the relevant cgroup controller is disabled, and how to verify that manually by checking for the directory, would be helpful. Probably here: https://docs.docker.com/engine/reference/commandline/stats/ or https://docs.docker.com/engine/articles/runmetrics/. While the article explains the connection to cgroups, it does not specify that docker will print zero if the statistic is not present.
Excuse my ignorance, but is the lack of support due to software or hardware?
Perhaps the memory usage and memory % columns should simply not be present when unsupported, considering that without support the values are inaccurate, and the admin may have deliberately chosen not to support that feature for whatever reason.
In my situation, docker stats sometimes reports 0 memory usage for a container and sometimes not, even when the container's state is "not changing" (it is not being restarted or servicing any requests). I see this on Docker 1.8.2 and 1.9.1 on RHEL 7.2.
If I do while true; do docker stats --no-stream $(docker ps -q) | tail -n +2 | awk '{print $3}'; sleep 1; done, I should see no zeroes, but sometimes stats will return 0 for all the containers, sometimes for just a few, more often for none of the containers. There happen to be twenty-five containers running -- a couple PostgreSQL containers and the rest some custom web services.
I have the same issue using Docker 1.10.3 on Debian (kernel 3.16.0-4-amd64 x86_64 GNU/Linux). I also receive a warning that my kernel does not have the correct modules when I try setting a memory limit for my containers.
@marc0der I have exactly the same problem with the same kernel and docker version.
In my case I'm running Docker in a Xen VM, and I really thought that was the cause until I saw that you all have the same problem!
It seems like everybody reporting the problem here has the default Debian Jessie install.
Possible fix: http://awhitehatter.me/debian-jessie-wdocker/
I tried it on my instance on Digital Ocean and it worked like a charm! Thanks!
I'm on Ubuntu 14.04 with a 4.2.0-18-generic kernel and have mem/swap accounting enabled (docker info doesn't complain). Similar to @murphyke's case, some containers show statistics and some do not (zeroes only). Those that do show values report implausibly low ones, such as 1.5 or 2 MB, some even a few KB, which I know for sure can't be true. What can I do to help troubleshoot this issue?
I tried it on my instance on Digital Ocean and it worked like a charm! Thanks!
yes, it works for me too (Debian Jessie)
@klokan Thanks, this also fixes it on my VMware guest with Debian Jessie.
TL;DR:
Your kernel is probably not configured to enable the memory cgroup by default. To turn it on, make a simple modification to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory swapaccount=1"
Then run sudo update-grub and reboot.
This should bring back memory stats. More info: http://awhitehatter.me/debian-jessie-wdocker/
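After rebooting, it is worth confirming the flags actually reached the kernel. A sketch (the function name is mine; pass /proc/cmdline on a real host):

```shell
#!/bin/sh
# Verify that the booted kernel command line contains the two flags set
# in /etc/default/grub above.
verify_cmdline() {
    line=$(cat "$1")
    for flag in cgroup_enable=memory swapaccount=1; do
        case " $line " in
            *" $flag "*) echo "$flag: present" ;;
            *)           echo "$flag: MISSING" ;;
        esac
    done
}

# verify_cmdline /proc/cmdline
```

If either flag shows MISSING after a reboot, update-grub was likely not run, or a different grub config file is in effect.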
This should definitely be part of the check-config script.
@vaab are you interested in opening a PR to add that?
In my situation, docker stats sometimes reports 0 memory usage for a container and sometimes not, even when the container's state is "not changing" (it is not being restarted or servicing any requests). I see this on Docker 1.8.2 and 1.9.1 on RHEL 7.2.
I have the same issue with Docker 1.7.1 on CentOS 6.7. In most of the cases docker stats --no-stream returns the correct values but randomly I end up with seeing only 0 values. I guess that problem is not related to the one happening on Debian, but maybe someone knows what might be going on as well.
I'm seeing something like this too: some of the containers in docker stats have only zero values. I'm able to reproduce this with containers that have been placed in the network namespace of another existing container, i.e. started like this: docker run -itd --net=container:<other_container_id> ubuntu. This happens in both 1.11 and 1.10, but it works fine in 1.9.
Docker 1.11.2 with AWS AMI Debian 8 HVM.
#docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ea3cac3840b4 0.52% 0 B / 0 B 0.00% 46.62 MB / 50.41 MB 610.3 kB / 0 B 0
bc35945da4d5 0.00% 0 B / 0 B 0.00% 397.4 MB / 391.4 MB 0 B / 0 B 0
1fd5dd5a0002 0.52% 0 B / 0 B 0.00% 139.4 MB / 390 MB 925.7 kB / 4.096 kB 0
2efcca0bcd7a 0.49% 0 B / 0 B 0.00% 191.1 MB / 179 MB 126.1 MB / 647.2 kB 0
b8b82b2870af 0.38% 0 B / 0 B 0.00% 635.6 MB / 303.6 MB 170.4 MB / 181.6 MB 0
3d364013ca95 0.06% 0 B / 0 B 0.00% 1.65 GB / 1.428 GB 127.5 MB / 38.38 MB 0
b8877bf11a52 0.29% 0 B / 0 B 0.00% 127.8 MB / 80.48 MB 11.06 MB / 0 B 0
c596f23b154d 0.23% 0 B / 0 B 0.00% 788.4 MB / 117.2 MB 12.98 MB / 0 B 0
1ee457e673c0 0.88% 0 B / 0 B 0.00% 0 B / 0 B 30.72 MB / 0 B 0
#uname -r
3.16.0-4-amd64
#uname -a
Linux ip-172-31-7-9 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/Linux
It works now with the suggestion from this comment: https://github.com/docker/docker/issues/396#issuecomment-179470044
before:
GRUB_CMDLINE_LINUX_DEFAULT="init=/bin/systemd console=hvc0 console=ttyS0"
after adding cgroup_enable=memory swapaccount=1 in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="init=/bin/systemd console=hvc0 console=ttyS0 cgroup_enable=memory swapaccount=1"
#update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.16.0-4-amd64
Found initrd image: /boot/initrd.img-3.16.0-4-amd64
done
Then restart the OS.
Now I get:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ea3cac3840b4 0.38% 883.6 MB / 7.861 GB 11.24% 775.1 kB / 2.321 MB 135.6 MB / 0 B 0
bc35945da4d5 0.00% 53.17 MB / 7.861 GB 0.68% 8.649 kB / 2.704 kB 6.787 MB / 0 B 0
1fd5dd5a0002 1.60% 1.057 GB / 7.861 GB 13.45% 733.9 kB / 2.348 MB 238.8 MB / 0 B 0
2efcca0bcd7a 4.89% 1.343 GB / 7.861 GB 17.09% 7.544 MB / 4.918 MB 156.8 MB / 647.2 kB 0
b8b82b2870af 0.66% 929.2 MB / 7.861 GB 11.82% 120.6 kB / 1.199 MB 140.7 MB / 15.12 MB 0
3d364013ca95 0.09% 96.4 MB / 7.861 GB 1.23% 512.6 kB / 667.3 kB 7.389 MB / 0 B 0
b8877bf11a52 0.05% 17.34 MB / 7.861 GB 0.22% 1.055 MB / 648.9 kB 10.53 MB / 0 B 0
c596f23b154d 0.14% 119.7 MB / 7.861 GB 1.52% 18 MB / 1.187 MB 31.24 MB / 0 B 0
1ee457e673c0 1.05% 202.5 MB / 7.861 GB 2.58% 0 B / 0 B 105.2 MB / 0 B 0
:smile: :confetti_ball: :tada: I leave here this tip to help those in need.
We experience the same issue: some of the containers are reported with zero MEM usage, which cannot be true, as I can see network traffic changing and processes running.
docker version:
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 21:49:11 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.4
Git commit: 1f8f545
Built:
OS/Arch: linux/amd64
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O
0726506bbb33 11.77% 0 B / 1.074 GB 0.00% 33.56 MB / 31.87 MB 430.8 MB / 27.62 MB
0d3a0f0e70a0 0.65% 828.1 MB / 1.074 GB 77.12% 919 MB / 448.5 MB 529.9 MB / 73.88 MB
20b050601673 0.00% 17.28 MB / 1.074 GB 1.61% 8.057 GB / 14.47 GB 197.8 MB / 0 B
25771aa452e8 0.12% 285.7 MB / 1.074 GB 26.61% 146.1 MB / 82.12 MB 159 MB / 0 B
273a1be43f1b 3.83% 299.1 MB / 1.074 GB 27.85% 30.1 GB / 8.491 GB 135.8 MB / 0 B
437bdd817453 36.50% 874.9 MB / 2.147 GB 40.74% 133.3 GB / 205.1 GB 968.4 MB / 98.3 kB
4d5e3a05276e 0.00% 358.8 MB / 1.074 GB 33.41% 37.35 GB / 19.76 GB 130.6 MB / 0 B
4fbd3bbbef0e 466.45% 0 B / 3.146 GB 0.00% 555.4 MB / 147.8 MB 351.3 MB / 0 B
525080b9612e 0.00% 486.5 MB / 1.074 GB 45.31% 104.2 GB / 34.41 GB 997.6 MB / 729.9 MB
7a4780b476dd 0.00% 82.06 MB / 1.074 GB 7.64% 515.4 MB / 179 MB 103.2 MB / 0 B
7e6ae2f9024c 0.00% 8.192 kB / 1.074 GB 0.00% 55.94 GB / 36.59 GB 0 B / 0 B
80c784784e34 0.05% 388.5 MB / 1.074 GB 36.19% 242.3 GB / 79.02 GB 12.01 GB / 5.765 GB
a3311fafbd0a 0.00% 99.3 MB / 1.074 GB 9.25% 56.24 GB / 41.78 GB 109.4 MB / 0 B
a6c55c5f90ff 0.10% 1.043 GB / 1.074 GB 97.09% 3.597 GB / 6.327 GB 76.44 GB / 3.243 GB
a88faae86ead 0.02% 504.6 MB / 2.097 GB 24.06% 9.07 GB / 11.36 GB 616.4 MB / 0 B
aa6cd7e139db 8.23% 697.2 MB / 1.074 GB 64.93% 2.864 GB / 1.339 GB 420.6 MB / 36.86 kB
b9a8c33133ed 0.00% 50.35 MB / 1.074 GB 4.69% 117.1 GB / 48.74 GB 273.4 MB / 16.38 kB
c4833e9726ba 0.00% 61.37 MB / 536.9 MB 11.43% 0 B / 0 B 83.32 MB / 0 B
cbf909fb73f0 0.00% 0 B / 1.074 GB 0.00% 56.25 GB / 89.83 GB 0 B / 0 B
cea1ba32ea33 0.00% 68.92 MB / 268.4 MB 25.67% 35.97 GB / 22.73 GB 53.61 MB / 839.7 kB
d603e03e154a 12.10% 672.2 MB / 1.074 GB 62.60% 312.4 GB / 272 GB 362.2 MB / 3.375 MB
e171b8d6c6d3 0.06% 196.5 MB / 1.074 GB 18.30% 2.062 GB / 1.575 GB 163.7 MB / 0 B
e470b4bee0dc 0.00% 470 MB / 1.074 GB 43.77% 66.14 GB / 35.06 GB 321.4 MB / 0 B
e9c567f5e589 0.00% 4.338 MB / 1.074 GB 0.40% 606.2 MB / 946.9 MB 1.368 MB / 0 B
ecf1a2bd7626 0.06% 173.6 MB / 1.074 GB 16.17% 1.615 GB / 2.721 GB 197.3 MB / 0 B
f550a7798fe7 11.71% 1.039 GB / 1.074 GB 96.72% 4.978 GB / 18.62 GB 1.472 TB / 365.1 GB
fcbb64476eb9 0.00% 0 B / 1.074 GB 0.00% 71.09 GB / 192.9 GB 0 B / 0 B
but for container 0726506bbb33, which is reported with 0 MEM usage above, I can see real memory stats in its cgroup memory.stat file:
cat /sys/fs/cgroup/memory/system.slice/docker-0726506bbb33fb231a57e35eb6975025ff4fb17016fff8340fb731530289ca52.scope/memory.stat
cache 122699776
rss 570142720
rss_huge 4194304
mapped_file 31211520
dirty 4096
writeback 0
swap 4820992
pgpgin 1040830
pgpgout 872701
pgfault 998630
pgmajfault 46
inactive_anon 273182720
active_anon 296960000
inactive_file 55881728
active_file 66818048
unevictable 0
hierarchical_memory_limit 1073741824
hierarchical_memsw_limit 9223372036854771712
total_cache 122699776
total_rss 570142720
total_rss_huge 4194304
total_mapped_file 31211520
total_dirty 4096
total_writeback 0
total_swap 4820992
total_pgpgin 1040830
total_pgpgout 872701
total_pgfault 998630
total_pgmajfault 46
total_inactive_anon 273182720
total_active_anon 296960000
total_inactive_file 55881728
total_active_file 66818048
total_unevictable 0
Any ideas why this happens?
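One way to cross-check docker stats against the kernel directly is to sum the rss and cache counters from memory.stat yourself. A sketch (the helper name is mine, and this is only an approximation, not exactly the figure docker computes):

```shell
#!/bin/sh
# Sum the rss and cache counters from a cgroup memory.stat file, a rough
# proxy for the container's resident memory footprint as the kernel
# accounts it.
memstat_usage() {
    awk '$1 == "rss" || $1 == "cache" { sum += $2 } END { print sum + 0 }' "$1"
}

# Using the systemd scope path from the listing above:
# memstat_usage /sys/fs/cgroup/memory/system.slice/docker-<id>.scope/memory.stat
```

If this prints a nonzero value while docker stats shows 0 B, the kernel accounting is fine and the problem is in the stats collection path.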
Hi,
I have the same issue once in a while:
CONTAINER ID CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
054706df1bdb 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
f240bc487574 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
0cf60bf2a988 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
1ee772dac540 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
a4d715c8b123 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
015d2f21ca7a 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
2d18be71c885 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
0199b0bfaf0b 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
407f3fcc98ea 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
dfd46a333851 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
e87a83f60b3a 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
Env:
Ubuntu 16.04.3 LTS
Linux zzzz 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Docker version 17.11.0-ce, build 1caf76c
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash cgroup_enable=memory swapaccount=1"
Hi, I followed the steps mentioned above to resolve the issue but I still can't get the desired result. I installed Docker on Ubuntu 20.04 following https://docs.docker.com/engine/install/ubuntu/. I didn't have a grub file in /etc/default, so I created one manually and added the line:
GRUB_CMDLINE_LINUX_DEFAULT="init=/bin/systemd console=hvc0 console=ttyS0 cgroup_enable=memory swapaccount=1"
Any idea on how to diagnose/fix this issue?
If anyone is looking for this issue on RaspberryPi OS, you will have to edit /boot/cmdline.txt and add cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory to get the memory stats.
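That edit can be scripted. A sketch (the function name is mine; note that cmdline.txt must stay a single line, so the flags are appended in place with sed rather than with echo >>):

```shell
#!/bin/sh
# Append the cgroup flags to a RaspberryPi OS cmdline.txt, keeping it a
# single line, and skip the edit if the flags are already present.
append_cgroup_flags() {
    file="$1"
    grep -q 'cgroup_enable=memory' "$file" && return 0   # already set
    sed -i 's/$/ cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory/' "$file"
}

# append_cgroup_flags /boot/cmdline.txt   # then reboot
```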
If anyone is looking for this issue on RaspberryPi OS, you will have to edit /boot/cmdline.txt and add cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory to get the memory stats.
I am using Ubuntu 20.04 (LTS) on a Raspberry Pi 4. In my case, the cmdline.txt file was in the /boot/firmware/ folder. I followed the solution mentioned above and rebooted the Pi. After that, docker stats started showing memory usage for the running containers.
If anyone is looking for this issue on RaspberryPi OS, you will have to edit /boot/cmdline.txt and add cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory to get the memory stats.
Thank you for sharing this. It works on Raspberry Pi 4
Hello, I'm having the same issue... But unfortunately, the GRUB_CMDLINE_LINUX_DEFAULT setting did not fix it. I am running on Ubuntu 20.04 server, not on raspberry pi. The flag is active, as shown by cat /proc/cmdline:
BOOT_IMAGE=/vmlinuz-5.4.0-81-generic root=UUID=******* ro net.ifnames=0 biosdevname=0 nomodeset cgroup_enable=memory swapaccount=1
Does anyone have a suggestion? Maybe things have changed since 2016. Thanks!
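One thing that has changed since 2016 is cgroup v2: on a unified hierarchy there is no /sys/fs/cgroup/memory directory at all, and controller availability is listed in cgroup.controllers instead. A sketch that checks both layouts (the function name is mine; the mount root is a parameter, normally /sys/fs/cgroup):

```shell
#!/bin/sh
# Detect which cgroup layout the host uses and whether the memory
# controller is available in it.
cgroup_memory_check() {
    root="$1"
    if [ -f "$root/cgroup.controllers" ]; then
        if grep -qw memory "$root/cgroup.controllers"; then
            echo "v2: memory controller available"
        else
            echo "v2: memory controller not enabled"
        fi
    elif [ -d "$root/memory" ]; then
        echo "v1: memory hierarchy mounted"
    else
        echo "no memory cgroup support detected"
    fi
}

# cgroup_memory_check /sys/fs/cgroup
```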
If anyone is looking for this issue on RaspberryPi OS, you will have to edit /boot/cmdline.txt and add cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory to get the memory stats.
Thank you for sharing this. It works on Raspberry Pi 4
I'm also on a Raspberry Pi 4 and this did not solve the problem for me. 😢 I still show 0 memory usage and 0 memory limit for every container.
Thank you for sharing this. It works on Raspberry Pi 4
I'm also on a Raspberry Pi 4 and this did not solve the problem for me. 😢 I still show 0 memory usage and 0 memory limit for every container.
What OS are you running? My comment was specific to RaspberryPi OS.
Thank you for sharing this. It works on Raspberry Pi 4
I'm also on a Raspberry Pi 4 and this did not solve the problem for me. 😢 I still show 0 memory usage and 0 memory limit for every container.
What OS are you running? My comment was specific to RaspberryPi OS.
Found the problem: an unintended line break in the cmdline.txt file that looked like word wrap. Got rid of the line break and everything works correctly.
I have the same error.
docker stats:
docker version:
Version: 20.10.14
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 24 01:48:02 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 01:45:53 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Docker Buildx (Docker Inc., v0.8.1-docker)
scan: Docker Scan (Docker Inc., v0.17.0)
Server:
Containers: 9
Running: 1
Paused: 0
Stopped: 8
Images: 31
Server Version: 20.10.14
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc version: v1.0.3-0-gf46b6ba
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.17.1-051701-generic
Operating System: Ubuntu 20.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 36
Total Memory: 31.25GiB
Name: xeon
ID: SKJW:OYWJ:QIKA:YH6S:T5DC:UNAC:LD5A:WEME:V7VX:B5ET:UPGQ:SBTT
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
uname -r
5.17.1-051701-generic
GRUB File /etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet cpuset cgroup_enable=memory cgroup_memory=1 splash swapaccount=1"
GRUB_CMDLINE_LINUX="cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1"
I think this happened after I upgraded the kernel version.