aws-efs-csi-driver
Potential Memory Leak?
Hi all, we're running version 1.2 of the EFS driver, but over time the pods consistently chew up memory.
The graph above covers the past 7 days, and as you can see, each node pod is now using more than 1 GB of memory. Needless to say, this is a sizeable chunk of the node's total memory, so we're now looking for solutions. Any ideas? Thanks!
Any chance you can exec into the pod and print the running processes? Memory usage is expected to be O(# of mounts) because each mount needs its own stunnel process, but those processes are supposed to go away upon unmount.
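A quick way to sanity-check that expectation on a node is to compare the number of EFS TLS mounts with the number of running stunnel processes. This is a minimal sketch, run directly on the node, and it assumes the TLS mounts go through the local stunnel proxy on 127.0.0.1; adjust for your environment:

# Count EFS TLS mounts on the node (efs-utils points them at the local stunnel proxy)
mount -t nfs4 | grep -c 127.0.0.1

# Count stunnel processes; the two numbers should roughly track each other
pgrep -c stunnel

If stunnel processes keep accumulating after their mounts are gone, that points at a different problem than per-process memory growth.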
Hi,
We are seeing similar memory usage from the EFS csi pods too (version 1.2.0)
I tried to list processes from inside the container, but there is no ps or top commands available.
Here is the view from the node's perspective, filtering to see all processes under the efs-csi process ID:
Seems that the longer the stunnel process runs, the more memory it uses.
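For anyone wanting to reproduce that view, here is a rough sketch of doing it from the node with plain ps; the options shown are just one possibility, and <efs-csi-pid> is a placeholder for the real plugin process ID:

# Show RSS and age of every stunnel process, largest first
ps -C stunnel -o pid,etime,rss,args --sort=-rss

# Or scope to the children of the efs-plugin process (<efs-csi-pid> is a placeholder)
ps --ppid <efs-csi-pid> -o pid,etime,rss,args

Comparing rss against etime across a few runs is enough to see whether the long-lived stunnel processes are the ones growing.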
There is an issue raised on the efs-utils GitHub that seems relevant: https://github.com/aws/efs-utils/issues/99. In both cases the stunnel version is 4.56.
It sounds from the efs-utils issue like we can mitigate the leak by installing a newer stunnel; we could build it as part of the EFS image build for the next release. @kbasv
Looks like the same problem to me. In aws/efs-utils#99 the RSS growth was about 1.6 MB/day; here it is 1.3 MB/day (youngest vs. oldest process). Not exactly the same, but very close.
The memory leak issue could be related to a livenessprobe container version < v2.2.0-eks-1-18-2.
In my case (version 1.2.0 of the EFS CSI driver), the liveness probe image tag is v2.2.0-eks-1-18-2.
Here is a chart from Grafana showing the memory usage of one of the pods, split by container:
@wongma7 will this fix be a part of v1.3.3? That'll probably take some time, right?
We are seeing this too - we are using version v1.3.4, and can confirm that it is the efs-plugin container causing the memory usage increase.
Here's a Grafana graph showing memory usage of the efs-csi-node pods over the previous 7 days:
And here is a graph showing that it is the efs-plugin container causing this increase:
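If you don't have Grafana handy, a quick way to get the same per-container split is metrics-server via kubectl. This is a minimal sketch; it assumes the node pods carry the app=efs-csi-node label used by the Helm chart, so adjust the selector and namespace to your deployment:

# Per-container memory for the EFS CSI node pods (requires metrics-server)
kubectl top pod -n kube-system -l app=efs-csi-node --containers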
We noticed this issue too earlier this week; I can confirm that upgrading stunnel solves it.
If someone wants to roll their own image like we did until there is an official fix:
FROM 602401143452.dkr.ecr.eu-north-1.amazonaws.com/eks/aws-efs-csi-driver:v1.3.4
RUN yum install -y gcc openssl-devel tcp_wrappers-devel
RUN curl -o stunnel-5.61.tar.gz https://www.stunnel.org/downloads/stunnel-5.61.tar.gz && \
    tar -zxvf stunnel-5.61.tar.gz && \
    cd stunnel-5.61 && \
    ./configure --prefix=/usr && \
    make && \
    rm /usr/bin/stunnel && \
    make install && \
    rm -f /stunnel-5.61.tar.gz && \
    rm -rf /stunnel-5.61
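If you go this route, the patched image still has to be pushed somewhere your nodes can pull from and wired into the node DaemonSet. A rough sketch, assuming the driver was installed with the Helm chart; the registry and tag are placeholders for your own values:

# Build and push the patched image (registry and tag are placeholders)
docker build -t <your-registry>/aws-efs-csi-driver:v1.3.4-stunnel561 .
docker push <your-registry>/aws-efs-csi-driver:v1.3.4-stunnel561

# Point the driver at it via the chart's image values
helm upgrade aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --set image.repository=<your-registry>/aws-efs-csi-driver \
  --set image.tag=v1.3.4-stunnel561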
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Any chance of an update on this? We are still seeing this even in v1.4.0.
Bump. We are also seeing this in production running v1.3.8. Some efs-plugin containers are consuming >800 MB each.
Just in case someone needs updated info for @blodan's tip (thanks a lot!), here is the Dockerfile that I am running in prod:
# Extend original image
FROM amazon/aws-efs-csi-driver:v1.4.0
# References
# https://docs.aws.amazon.com/efs/latest/ug/upgrading-stunnel.html
# https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/474
# https://github.com/aws/efs-utils/issues/99
# NOTE: Do not run yum update or similar; we need to stay as close as possible to the original release image
# Install build deps and try to reduce image size
RUN yum install -y gcc-7.3.1-15.amzn2 openssl-devel-1.0.2k-24.amzn2.0.4 tcp_wrappers-devel-7.6-77.amzn2.0.2 && yum clean all && rm -rf /var/cache/yum
# Get and build the latest stable (5.65 as of 17/08/2022)
RUN curl -o stunnel-5.65.tar.gz https://www.stunnel.org/downloads/stunnel-5.65.tar.gz && \
    tar -zxvf stunnel-5.65.tar.gz && \
    cd stunnel-5.65 && \
    ./configure --prefix=/usr && \
    make && \
    rm /usr/bin/stunnel && \
    make install && \
    rm -f /stunnel-5.65.tar.gz && \
    rm -rf /stunnel-5.65 && \
    stunnel -version
# Lib version as of 17/08/2022
# gcc x86_64 7.3.1-15.amzn2 amzn2-core 22 M
# openssl-devel x86_64 1:1.0.2k-24.amzn2.0.4 amzn2-core 1.5 M
# tcp_wrappers-devel x86_64 7.6-77.amzn2.0.2 amzn2-core 17 k
In big clusters I am seeing ~200 MB usage per node, which is a lot better than the few GB seen with the default image.
EDIT: For the current status, check https://github.com/aws/efs-utils/issues/99#issuecomment-1217370114.
This issue may now be resolved with the recently-released aws-efs-csi-driver v1.4.6, which uses stunnel5 via efs-utils v1.34.2.
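For anyone upgrading and wanting to verify this on their own cluster, here is a minimal sketch; the DaemonSet and container names follow the defaults used elsewhere in this thread, <efs-csi-node-pod> is a placeholder, and the stunnel5 binary name inside the image is an assumption:

# Check which efs-plugin image the node DaemonSet is running
kubectl -n kube-system get daemonset efs-csi-node \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="efs-plugin")].image}'

# Spot-check the stunnel version inside one of the node pods (binary name may vary)
kubectl -n kube-system exec <efs-csi-node-pod> -c efs-plugin -- stunnel5 -version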
Closing the issue, as the latest efs-csi-driver, which uses stunnel5 via efs-utils, resolves the memory leak.
@mskanth972: You can't close an active issue/PR unless you authored it or you are a collaborator.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.