
K8s pod deletes data very slowly

Open xiaoli007 opened this issue 10 months ago • 2 comments

On Ubuntu 24.04, with Ceph 19.2.0 and Ceph-CSI 3.12.3, K8s mounts PVCs via Ceph-CSI. Performing `rm -rf` on 200 GB of data inside a pod takes about an hour. Without Ceph, on a standalone server, deleting 200 GB takes around 10 minutes. The disks are SATA HDDs. How can I optimize this?

xiaoli007 avatar Feb 07 '25 11:02 xiaoli007

The Ceph pool is used in RBD mode.

xiaoli007 avatar Feb 07 '25 11:02 xiaoli007

Ceph-CSI is not in the I/O path; it only creates and mounts RBD images as volumes. There might be a configuration issue of some kind, but you would need to check with others who understand the performance characteristics of Ceph better. The easiest way is to reach out on their Slack or IRC and explain/ask there.

nixpanic avatar Feb 07 '25 13:02 nixpanic
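Since Ceph-CSI is out of the I/O path, a useful first step is to reproduce the slowdown with a small, controlled workload and compare the PVC mountpoint against local disk. The sketch below is a minimal, hypothetical benchmark (the path and file counts are assumptions, not from this thread); run it once inside the pod against the PVC mount and once on a local directory, and also check the mount options, since inline `discard` on an RBD-backed filesystem can make deletes much slower.

```shell
#!/bin/sh
# Hypothetical delete benchmark: point TESTDIR at the PVC mountpoint inside
# the pod, then at a local disk, and compare the `time rm -rf` results.
TESTDIR="${1:-/tmp/del-bench}"   # assumption: adjust to your PVC mount path

mkdir -p "$TESTDIR"

# Create many small files; deletion cost is dominated by per-file metadata
# and discard operations, not raw data size. Scale the count up as needed.
i=1
while [ "$i" -le 200 ]; do
  dd if=/dev/zero of="$TESTDIR/f$i" bs=4k count=1 status=none
  i=$((i + 1))
done

# Check whether the filesystem is mounted with inline discard (a common
# cause of slow deletes on thin-provisioned block devices like RBD):
grep " $TESTDIR " /proc/mounts || true

# Time the deletion itself.
time rm -rf "$TESTDIR"
```

If the inline-discard case applies, a common mitigation is mounting without `discard` and trimming periodically (e.g. a scheduled `fstrim`) instead, but confirm with the Ceph community whether that fits your setup.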