The output of `nerdctl stats` disappears after a few minutes
Description
The output of nerdctl stats disappears after a few minutes
Steps to reproduce the issue
- `nerdctl run -d alpine sleep infinity`
- `nerdctl stats`
- Wait for a few minutes
Describe the results you received and expected
Received:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ce15db2ec247 alpine-ce15d -- -- / -- -- -- -- --
Expected: some stats
What version of nerdctl are you using?
v0.22.0-42-g4c37225
Are you using a variant of nerdctl? (e.g., Rancher Desktop)
No response
Host information
Client:
Namespace: default
Debug Mode: false
Server:
Server Version: v1.6.6
Storage Driver: overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Log: fluentd journald json-file
Storage: native overlayfs
Security Options:
apparmor
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.0-41-generic
Operating System: Ubuntu 22.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.59GiB
...
@fahedouch PTAL
@AkihiroSuda I tested stats for 15 minutes and didn't get this behavior. I only got empty stats when I killed the container after those 15 minutes of fetching stats.
An extract of my logs:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 99.81% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 100.21% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 99.40% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 100.41% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 100.15% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 99.85% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 98.59% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 101.43% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 100.11% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 99.95% 0B / 0B 0.00% 920B / 962B 0B / 0B 3
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 40.71% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.09% 0B / 0B 0.00% 920B / 962B 0B / 0B 2
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 1.16% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 0.00% 0B / 0B 0.00% 920B / 962B 0B / 0B 1
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 -- -- / -- -- -- -- --
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 -- -- / -- -- -- -- --
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 -- -- / -- -- -- -- --
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 -- -- / -- -- -- -- --
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 -- -- / -- -- -- -- --
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
24c458aa71b2 alpine-24c45 -- -- / -- -- -- -- --
Please check whether your container is still alive.
I can't reproduce it either.
@fahedouch @junnplus
I still hit this issue. Could you test it on a cgroup v2 host such as Ubuntu 22.04?
Sorry, I retried a few times to make sure it's reproducible.
$ sudo nerdctl info
Client:
Namespace: default
Debug Mode: false
Server:
Server Version: v1.6.4
Storage Driver: overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Log: fluentd journald json-file
Storage: native overlayfs stargz
Security Options:
apparmor
seccomp
Profile: default
cgroupns
Kernel Version: 5.15.0-41-generic
Operating System: Ubuntu 22.04 LTS
OSType: linux
Architecture: aarch64
CPUs: 8
Total Memory: 7.74GiB
Name: lima-default
ID: 2eef127c-1a0f-466b-b5eb-364a761abbdf
Looks like I can hit the issue in cgroup v1 mode too.
Server:
Server Version: v1.6.6
Storage Driver: overlayfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Log: fluentd journald json-file
Storage: native overlayfs
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.15.0-43-generic
Operating System: Ubuntu 22.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.59GiB
On cgroup v1 I got:
route ip+net: netlinkrib: too many open files
It may be due to a (netns?) FD leak: https://github.com/containerd/nerdctl/blob/4c37225fa31a63e32162f34bdbc26554430e6dc6/cmd/nerdctl/stats.go#L438
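As an illustration of that failure mode, here is a minimal sketch (not nerdctl code; it only uses github.com/vishvananda/netns, and /proc/self/ns/net stands in for the container's netns path): opening a netns handle on every tick without closing it eventually exhausts the process's file descriptors, which is consistent with the "too many open files" error above.

```go
// Minimal sketch of the suspected leak: each netns handle is an open fd.
// If it is never closed, the process eventually hits EMFILE
// ("too many open files").
package main

import (
	"fmt"

	"github.com/vishvananda/netns"
)

func main() {
	for i := 0; ; i++ {
		ns, err := netns.GetFromPath("/proc/self/ns/net") // placeholder netns path
		if err != nil {
			fmt.Printf("failed after %d leaked handles: %v\n", i, err)
			return
		}
		_ = ns // never ns.Close()'d: each iteration leaks one fd
	}
}
```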
And here, when an error occurs, shouldn't it be printed to the screen? https://github.com/containerd/nerdctl/blob/4c37225fa31a63e32162f34bdbc26554430e6dc6/cmd/nerdctl/stats.go#L474-L478
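A hedged sketch of what surfacing that error could look like (the collector and names here are hypothetical stand-ins, not nerdctl's actual code): log the failure instead of letting the row silently fall back to "--" columns.

```go
package main

import (
	"errors"
	"log"
	"time"
)

// collectOnce stands in for one stats-collection tick; here it always fails
// so the reporting path is exercised.
func collectOnce(containerID string) error {
	return errors.New("route ip+net: netlinkrib: too many open files")
}

func main() {
	containerID := "24c458aa71b2"
	for i := 0; i < 3; i++ {
		if err := collectOnce(containerID); err != nil {
			// Surface the failure instead of discarding it.
			log.Printf("failed to collect stats for %s: %v", containerID, err)
		}
		time.Sleep(time.Second)
	}
}
```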
And this can fix the FD leak:
diff --git a/cmd/nerdctl/stats_linux.go b/cmd/nerdctl/stats_linux.go
index 7279c87..aae77c8 100644
--- a/cmd/nerdctl/stats_linux.go
+++ b/cmd/nerdctl/stats_linux.go
@@ -62,11 +62,13 @@ func setContainerStatsAndRenderStatsEntry(previousStats *statsutil.ContainerStat
 	if err != nil {
 		return statsutil.StatsEntry{}, fmt.Errorf("failed to retrieve the statistics in netns %s: %v", ns, err)
 	}
+	defer ns.Close()
 	nlHandle, err = netlink.NewHandleAt(ns)
 	if err != nil {
 		return statsutil.StatsEntry{}, fmt.Errorf("failed to retrieve the statistics in netns %s: %v", ns, err)
 	}
+	defer nlHandle.Close()
 	for _, v := range interfaces {
 		nlink, err = nlHandle.LinkByIndex(v.Index)
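For context, a self-contained sketch of the acquire/use/release sequence those two defers aim for (hypothetical function name, a placeholder netns path, and the netlink library's Delete() used to release the handle's sockets; this is not the actual nerdctl implementation):

```go
// Sketch of reading per-interface counters from a container's netns with
// explicit cleanup, so repeated calls do not leak file descriptors.
// Assumes github.com/vishvananda/netns and github.com/vishvananda/netlink.
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

func readLinkStats(nsPath string) error {
	ns, err := netns.GetFromPath(nsPath)
	if err != nil {
		return fmt.Errorf("failed to open netns %s: %w", nsPath, err)
	}
	defer ns.Close() // close the netns fd even if the netlink handle fails

	nlHandle, err := netlink.NewHandleAt(ns)
	if err != nil {
		return fmt.Errorf("failed to create netlink handle in %s: %w", nsPath, err)
	}
	defer nlHandle.Delete() // release the handle's netlink sockets

	links, err := nlHandle.LinkList()
	if err != nil {
		return err
	}
	for _, l := range links {
		attrs := l.Attrs()
		if attrs.Statistics != nil {
			fmt.Printf("%s: rx=%d tx=%d\n", attrs.Name, attrs.Statistics.RxBytes, attrs.Statistics.TxBytes)
		}
	}
	return nil
}

func main() {
	// /proc/self/ns/net stands in for the container's netns path.
	if err := readLinkStats("/proc/self/ns/net"); err != nil {
		fmt.Println(err)
	}
}
```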
Thank you @liubin, would you mind submitting a PR?
@AkihiroSuda @liubin I am working on a PR to fix this.
@fahedouch Please go on with your work. I only did some verification locally.
Closed by https://github.com/containerd/nerdctl/pull/1298