emptyDir mounts to tmpfs by default
Image I'm using:
- AMI ID: ami-06f6f988d150ee376
- Region: ap-southeast-2
- containerd version: 1.6.20+bottlerocket
- kubelet: 1.25.11-eks-984f31e
What I expected to happen: When emptyDir is specified without the `Memory` medium, a new disk-backed volume should be mounted into the pod.
What actually happened: A tmpfs-based volume gets mounted instead.
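For context, the behaviour difference hinges on the `emptyDir.medium` field in the pod spec; a minimal sketch of the two variants (the volume names here are illustrative):

```yaml
volumes:
# Default: no medium set, so the emptyDir should be backed by the node's disk storage
- name: disk-backed-volume
  emptyDir: {}
# Explicitly memory-backed: the emptyDir is mounted as tmpfs
- name: memory-backed-volume
  emptyDir:
    medium: Memory
```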
How to reproduce the problem:
- Deploy a worker node with the above AMI.
- Create a pod with an `emptyDir` volume that does not use the `Memory` medium.
- Shell into the Bottlerocket OS.
- From the `sheltie` console in the admin container, `df -h` will show a tmpfs-mounted `emptyDir` volume for the pod under /var/lib/kubelet/pods/ (see the command sketch below).
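A sketch of the node-access steps above, assuming SSM access to the node and that the admin container is enabled (the instance ID is a placeholder):

```sh
# Open an SSM session to the worker node (instance ID is a placeholder)
aws ssm start-session --target i-0123456789abcdef0

# From the control container, enter the admin container,
# then drop into a root shell in the host's namespaces
enter-admin-container
sudo sheltie

# Look for tmpfs mounts under the kubelet pods directory
df -h | grep /var/lib/kubelet/pods
```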
Any workarounds would be appreciated too.
@arpanadhikari Thanks for raising this issue. We will look into this and get back to you.
I believe that the storage medium of an emptyDir volume is the same as where /var/lib/kubelet resides. Here are the steps I followed to confirm that:
- Deploy a worker node with AMI ID ami-06f6f988d150ee376 in region ap-southeast-2.
- Create a pod with an emptyDir volume without the Memory medium. The pod YAML file is as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi
```
- Then shell into the Bottlerocket OS using SSM.
- From the sheltie console in the admin container, the output of `df -h` has the following Kubernetes-related rows:
```
tmpfs 6.9G 12K 6.9G 1% /var/lib/kubelet/pods/0ca84690-8c97-475a-98a7-01c258fa85d3/volumes/kubernetes.io~projected/kube-api-access-rj4f4
tmpfs 6.9G 12K 6.9G 1% /var/lib/kubelet/pods/2c54920d-7374-45b3-807e-106002752101/volumes/kubernetes.io~projected/kube-api-access-4vbb4
tmpfs 170M 12K 170M 1% /var/lib/kubelet/pods/ea6b9f5c-4633-42c5-a9f3-0ca429a04b74/volumes/kubernetes.io~projected/kube-api-access-lrwhr
tmpfs 170M 12K 170M 1% /var/lib/kubelet/pods/d59e2a71-23f3-48af-bcd3-abf44608d29d/volumes/kubernetes.io~projected/kube-api-access-xbf7s
tmpfs 6.9G 12K 6.9G 1% /var/lib/kubelet/pods/ab707de0-57e1-4503-b8ac-1af3dcc63b82/volumes/kubernetes.io~projected/kube-api-access-rzlcf
```
In my opinion, these volumes listed by `df -h` are not the emptyDir volume that was created using the pod YAML file.
The emptyDir volume exists at:
/var/lib/kubelet/pods/<PODUID>/volumes/kubernetes.io~empty-dir/<EMPTYDIRVOLUME>
Where:
- PODUID can be retrieved using `kubectl get pods -n <namespace> <pod-name> -o jsonpath='{.metadata.uid}'`
- EMPTYDIRVOLUME will be the same as the name given in the volumes section of the pod YAML file:
```yaml
volumes:
- name: cache-volume
  emptyDir:
    sizeLimit: 500Mi
```
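Putting the two together, a small sketch to resolve the on-host path of the emptyDir volume (the pod name test-pd and volume name cache-volume come from the example above; the namespace is assumed to be default):

```sh
# Look up the pod UID, then construct the emptyDir path on the node
POD_UID=$(kubectl get pods -n default test-pd -o jsonpath='{.metadata.uid}')
echo "/var/lib/kubelet/pods/${POD_UID}/volumes/kubernetes.io~empty-dir/cache-volume"
```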
We can use the `df /file` command to identify the physical device where demo-volume is located. The output of `df /var/lib/kubelet/pods/39afc4a4-2223-4687-843d-97fb09173d5f/volumes/kubernetes.io~empty-dir/demo-volume` is as follows:
```
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme1n1p1 82547144 980428 81550332 2% /var
```
Whereas the output of `df /var/lib/kubelet/pods/39afc4a4-2223-4687-843d-97fb09173d5f/volumes/kubernetes.io~projected/kube-api-access-s2nmf/` is:
```
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 7212384 12 7212372 1% /var/lib/kubelet/pods/39afc4a4-2223-4687-843d-97fb09173d5f/volumes/kubernetes.io~projected/kube-api-access-s2nmf
```
This shows that the emptyDir volume resides on /dev/nvme1n1p1. Refer to this for more information on how the storage medium of an emptyDir volume is determined.
The device /dev/nvme1n1p1 is mounted on /var, which can be checked using the `lsblk` command:
```
lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme1n1     259:1    0  80G  0 disk
`-nvme1n1p1 259:16   0  80G  0 part /var
                                    /opt
                                    /mnt
                                    /local
```
To confirm that this persists, I created a file in the emptyDir, rebooted the node, and the file was still present after the reboot. The contents of the emptyDir before and after the reboot:
```
ls /var/lib/kubelet/pods/<PODUID>/volumes/kubernetes.io~empty-dir/demo-volume/
example-test.txt
```
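A sketch of that persistence check, run from the sheltie shell on the node (`<PODUID>` is a placeholder and demo-volume matches the example above; `apiclient reboot` is one way to reboot a Bottlerocket host):

```sh
# Create a marker file in the emptyDir, reboot the node, then verify it survived
EMPTYDIR=/var/lib/kubelet/pods/<PODUID>/volumes/kubernetes.io~empty-dir/demo-volume
touch "${EMPTYDIR}/example-test.txt"
apiclient reboot

# After the node comes back, from a new sheltie shell:
ls "${EMPTYDIR}"   # example-test.txt should still be listed
```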
Can you check and confirm whether this aligns with your configuration as well? If not, can you share what is different?
@arpanadhikari Are you still facing this problem? If it's resolved, can this issue be closed, or is there still something we can help with?
Closing this issue as a false positive. It looks like our Terraform (Terragrunt) code was blowing up the disk usage.
Thank you for your time and suggestions.