nydus
how to remove image cache?
I tested the container startup acceleration feature of nydus, but my local system seems to have a cache: even if I run `docker rmi -f $imagename` to delete the image every time, I only pull two or three layers from the remote Harbor (this image should have dozens of layers). How do I clear the local cache? I want to pull all layers from the remote Harbor.
- Make sure all nydus images and the containers using nydus images have been removed, and that no nydusd process is left running: `ps aux | grep nydusd`
- If the nydus snapshotter root directory is `/var/lib/containerd-nydus`, then `rm -rf /var/lib/containerd-nydus/cache/*`
- Clean the page cache: `sync; echo 3 > /proc/sys/vm/drop_caches`
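The steps above can be sketched as a single script. The snapshotter root is a parameter (defaulting to `/var/lib/containerd-nydus`) purely so the function can be tried against a scratch directory first; the function name `clean_nydus_cache` is my own, not a nydus CLI command.

```shell
#!/bin/sh
# Hedged sketch of the cleanup steps above; assumes the default
# snapshotter root unless one is passed in.
clean_nydus_cache() {
    root="${1:-/var/lib/containerd-nydus}"

    # Step 1: refuse to run while any nydusd process is still alive.
    if pgrep -x nydusd >/dev/null 2>&1; then
        echo "nydusd still running; remove nydus containers/images first" >&2
        return 1
    fi

    # Step 2: wipe the snapshotter's blob cache.
    rm -rf "${root:?}/cache"/*

    # Step 3: drop the kernel page cache so previously read blobs are not
    # served from memory (needs root; ignore failure when unprivileged).
    sync
    { echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
}
```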
I tried the above steps, but it didn't work. Additionally, I found that there is a lot of data related to nydus images in the `/var/lib/containerd-nydus/snapshots` directory. Is this directory a cache, and can it be deleted?
root@node1:/var/lib/containerd-nydus# docker rmi xx.xx.xx.xx:5000/pytorch/pytorch:nydus
Error response from daemon: No such image: xx.xx.xx.xx:5000/pytorch/pytorch:nydus
root@node1:/var/lib/containerd-nydus# ps aux | grep nydusd
root 1596582 0.0 0.0 6476 2172 pts/1 S+ 09:29 0:00 grep --color=auto nydusd
root@node1:/var/lib/containerd-nydus# sync; echo 3 > /proc/sys/vm/drop_caches
root@node1:/var/lib/containerd-nydus# docker pull xx.xx.xx.xx:5000/pytorch/pytorch:nydus
6142f15fb337: Download complete
763fe9b5fd66: Download complete
10e55b78c022: Download complete
xx.xx.xx.xx:5000/pytorch/pytorch:nydus
root@node1:/var/lib/containerd-nydus# du -sh cache/
24K cache/
root@node1:/var/lib/containerd-nydus# du -sh snapshots/
13G snapshots/
root@node1:/var/lib/containerd-nydus# ls snapshots/
1 189 191 193 195 197 199 200 202 204 206 208 210 212 214 216 218 220 222 224 226 228 230 232 234 236 238 240 242 3 5
188 190 192 194 196 198 2 201 203 205 207 209 211 213 215 217 219 221 223 225 227 229 231 233 235 237 239 241 243 4
It seems that some snapshots have not been reclaimed; maybe containerd hasn't run GC yet? Can you find any files in these snapshot directories?
Yes, they look like this:
root@node1:/var/lib/containerd-nydus/snapshots# tree -L 3
.
├── 1
│ ├── fs
│ │ ├── bin
│ │ ├── dev
│ │ ├── etc
│ │ ├── home
│ │ ├── lib
│ │ ├── media
│ │ ├── mnt
│ │ ├── opt
│ │ ├── proc
│ │ ├── root
│ │ ├── run
│ │ ├── sbin
│ │ ├── srv
│ │ ├── sys
│ │ ├── tmp
│ │ ├── usr
│ │ └── var
│ └── work
├── 188
│ ├── fs
│ │ ├── bin -> usr/bin
│ │ ├── boot
│ │ ├── dev
│ │ ├── etc
│ │ ├── home
│ │ ├── lib -> usr/lib
│ │ ├── lib32 -> usr/lib32
│ │ ├── lib64 -> usr/lib64
│ │ ├── libx32 -> usr/libx32
│ │ ├── media
│ │ ├── mnt
│ │ ├── opt
│ │ ├── proc
│ │ ├── root
│ │ ├── run
│ │ ├── sbin -> usr/sbin
│ │ ├── srv
│ │ ├── sys
│ │ ├── tmp
│ │ ├── usr
│ │ └── var
│ └── work
......
If no nydusd processes are alive and the snapshot directories can still be accessed, these snapshots should belong to the OCI v1 images; maybe try removing the OCI v1 images.
I pulled the OCI v1 image xx.xx.xx.xx:5000/pytorch/pytorch:21.10-py3, but when I remove it with `docker rmi xx.xx.xx.xx:5000/pytorch/pytorch:21.10-py3`, the snapshot directories are still there. Did I miss some important step?
root@node1:/var/lib/containerd-nydus# docker rmi xx.xx.xx.xx:5000/pytorch/pytorch:21.10-py3
Error response from daemon: No such image: xx.xx.xx.xx:5000/pytorch/pytorch:21.10-py3
root@node1:/var/lib/containerd-nydus# du -sh snapshots/
13G snapshots/
root@node1:/var/lib/containerd-nydus# tree -L 3 snapshots/
snapshots/
├── 1
│ ├── fs
│ │ ├── bin
│ │ ├── dev
│ │ ├── etc
│ │ ├── home
│ │ ├── lib
│ │ ├── media
│ │ ├── mnt
│ │ ├── opt
│ │ ├── proc
│ │ ├── root
│ │ ├── run
│ │ ├── sbin
│ │ ├── srv
│ │ ├── sys
│ │ ├── tmp
│ │ ├── usr
│ │ └── var
│ └── work
├── 188
│ ├── fs
│ │ ├── bin -> usr/bin
│ │ ├── boot
│ │ ├── dev
│ │ ├── etc
│ │ ├── home
│ │ ├── lib -> usr/lib
│ │ ├── lib32 -> usr/lib32
│ │ ├── lib64 -> usr/lib64
│ │ ├── libx32 -> usr/libx32
│ │ ├── media
│ │ ├── mnt
│ │ ├── opt
│ │ ├── proc
│ │ ├── root
│ │ ├── run
│ │ ├── sbin -> usr/sbin
│ │ ├── srv
│ │ ├── sys
│ │ ├── tmp
│ │ ├── usr
│ │ └── var
│ └── work
@adamqqqplay Any time to take a look at this? It seems containerd did not reclaim these OCI image snapshots when using docker.
It looks like there is something missing between docker and containerd. I'll try to reproduce this.
I found that when I execute `docker rmi`, the image layers in the directory `/var/lib/containerd/io.containerd.content.v1.content/blobs/sha256` are not deleted.
Maybe this depends on the containerd GC policy?
Additionally, it would be very helpful if the cleanup could be added as a subcommand, to automate this task and delete only cache entries that are not linked to any running image.
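A rough sketch of what such a cleanup subcommand could do: delete only the cache blobs that no running process (for example an active nydusd) still holds open. The function name `prune_idle_cache` and the fuser(1)-based liveness check are my own illustration, not part of the nydus tooling.

```shell
#!/bin/sh
# Hypothetical cache pruner: keep blobs that some process has open,
# remove the rest. The cache directory is a parameter so this can be
# exercised against a scratch directory.
prune_idle_cache() {
    cache_dir="${1:-/var/lib/containerd-nydus/cache}"
    for blob in "$cache_dir"/*; do
        [ -e "$blob" ] || continue
        # fuser -s exits 0 when some process has the file open; keep those.
        if fuser -s "$blob" 2>/dev/null; then
            continue
        fi
        rm -f "$blob"
    done
}
```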
Perhaps there was some dirty data in the `/var/lib/containerd` and `/var/lib/containerd-nydus` directories. After cleaning these two directories and restarting containerd and nydus-snapshotter, it worked.
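For reference, the full reset described above can be sketched as below. The systemd unit names (`containerd`, `nydus-snapshotter`) and the helper name `reset_nydus_state` are assumptions for my setup; the two state directories are parameters so the wipe can be tried against scratch paths first.

```shell
#!/bin/sh
# Sketch of a full state reset: stop the daemons, wipe both state
# directories, then restart. Destructive: removes all images/snapshots.
reset_nydus_state() {
    containerd_root="${1:-/var/lib/containerd}"
    nydus_root="${2:-/var/lib/containerd-nydus}"

    # Stop the daemons before touching their state (errors ignored when
    # the units do not exist, e.g. outside a real node).
    systemctl stop nydus-snapshotter containerd 2>/dev/null || true

    # ${var:?} aborts instead of expanding to "" if a path is unset/empty,
    # guarding against an accidental "rm -rf /*".
    rm -rf "${containerd_root:?}"/* "${nydus_root:?}"/*

    systemctl start containerd nydus-snapshotter 2>/dev/null || true
}
```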