
librbd QoS settings for RBD based PVs

Open mmgaggle opened this issue 5 years ago • 15 comments

Describe the feature you'd like to have

The ability to set librbd QoS settings on a PV to limit how much IO can be consumed from the Ceph Cluster.

The exact limits would be informed by the storage-class configuration. Ideally we would support three different types of limits:

  1. static rbd_qos_iops_limit and rbd_qos_bps_limit per volume
  2. dynamic rbd_qos_iops_limit and rbd_qos_bps_limit per volume as a function of the PV size (e.g. 3 IOPS per GB, 100 MB/s per TB), with a configurable rbd_qos_schedule_tick_min
  3. a PVC-driven variant of the second type: the PVC specifies the number of IOPS, and the requested capacity is adjusted according to the ratio configured in the storage-class definition
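
For context, librbd already exposes these QoS settings as per-image configuration overrides, so a capacity-based (second type) policy could in principle be applied by the provisioner at image-creation time. A minimal sketch, assuming placeholder pool/image names and illustrative ratios (this is not something ceph-csi does today):

```sh
# Placeholder names and illustrative ratios: 3 IOPS per GB, 1 MB/s per GB.
POOL=replicapool
IMG=csi-vol-example
SIZE_GB=100
IOPS_PER_GB=3
BPS_PER_GB=$((1 * 1024 * 1024))

rbd create --size "${SIZE_GB}G" "${POOL}/${IMG}"

# librbd per-image QoS overrides; these only take effect for librbd clients
# (e.g. rbd-nbd), not for krbd mappings.
rbd config image set "${POOL}/${IMG}" rbd_qos_iops_limit $((SIZE_GB * IOPS_PER_GB))
rbd config image set "${POOL}/${IMG}" rbd_qos_bps_limit  $((SIZE_GB * BPS_PER_GB))

# Inspect the resulting per-image configuration.
rbd config image list "${POOL}/${IMG}" | grep rbd_qos
```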

What is the value to the end user?

Many users were frustrated by IO noisy-neighbor issues in early Ceph deployments that were catering to OpenStack environments. Folks started to implement QEMU throttling at the virtio-blk/scsi layer and this became much more manageable. Capacity-based IOPS further improved the situation by providing a familiar, public-cloud-like experience (vs. static per-volume limits).

We want Kubernetes and OpenShift users to have improved noisy neighbor isolation too!

How will we know we have a good solution?

  1. Configure ceph-csi to use the rbd-nbd approach.
  2. Provision a volume from a storage class configured as above.
  3. The CSI provisioner would set the limit on the RBD image.
  4. An fio test against the PV would confirm that the IOPS limits are being enforced (see the sketch below).

Once the resize work is finished, we'll need to ensure new limits are applied when a volume is resized.
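
A hedged sketch of step 4; the mount path and fio parameters are placeholders, and the reported read+write IOPS should plateau at the image's rbd_qos_iops_limit:

```sh
# Run inside a pod with the PV mounted (path is a placeholder).
fio --name=qos-check \
    --filename=/mnt/rbd-pv/qos-check.dat --size=1G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=16 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```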

mmgaggle avatar Aug 02 '19 21:08 mmgaggle

Is anyone working on this? What is the status of this?

fire avatar Dec 15 '19 20:12 fire

I don't believe anyone is working on it -- and it really doesn't make much sense since rbd-nbd isn't really production-worthy (right now), so we wouldn't want to encourage even more folks to use it.

The best longer-term solution would be to ensure cgroups v2 is utilized on the k8s node so that generic block rate-limiting controls can be applied (which would handle both krbd and rbd-nbd). I'm not sure of the realistic timeline for cgroups v2 integration in k8s (it just became the default under Fedora 31).
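
For context, the cgroup v2 io controller expresses those generic block limits through io.max entries keyed by the device's major:minor numbers. A rough sketch of what would have to be written for a pod's cgroup; the device numbers and cgroup path are illustrative:

```sh
# Find the major:minor of the mapped RBD device (krbd or rbd-nbd).
lsblk -no MAJ:MIN /dev/rbd0        # prints e.g. "252:0"; numbers vary per node

# Cap that device at 100 MB/s and 300 IOPS in each direction for this cgroup.
# The real pod cgroup path depends on the kubelet's cgroup driver and hierarchy.
echo "252:0 rbps=104857600 wbps=104857600 riops=300 wiops=300" \
  > /sys/fs/cgroup/kubepods.slice/kubepods-pod-example.slice/io.max
```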

dillaman avatar Dec 15 '19 23:12 dillaman

Do you know where cgroups v2 integration in k8s for limiting block rates is tracked?

fire avatar Dec 16 '19 01:12 fire

This [1] provides a really good overview and a theoretical timeline

[1] https://medium.com/nttlabs/cgroup-v2-596d035be4d7

dillaman avatar Dec 16 '19 01:12 dillaman

The bigger problem with cgroups is that they only provide independent limits for reads and writes, compared with librbd/virtio, which can express limits against the aggregate of reads and writes.

If I want to limit a given PV to 100 IOPS, I can't do that with cgroups. I can only set a write IOPS limit (say 50, 30, or 10) and another distinct limit on read IOPS (50, 70, or 90). A client can't trade a write IO for a read IO, or vice versa.
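
To make the contrast concrete, a small sketch; the device numbers, cgroup path, and image name are placeholders:

```sh
# cgroup v2: reads and writes get separate buckets that cannot be traded.
echo "252:0 riops=50 wiops=50" > /sys/fs/cgroup/example.slice/io.max
# A read-only workload tops out at 50 IOPS even though the write bucket sits idle.

# librbd: a single aggregate budget shared by reads and writes.
rbd config image set replicapool/csi-vol-example rbd_qos_iops_limit 100
# The same read-only workload can consume the full 100 IOPS.
```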

mmgaggle avatar Jan 20 '20 23:01 mmgaggle

Probably would need something like this KEP to do accounting -

https://github.com/kubernetes/enhancements/pull/1353

Basically, if you know a cluster can provide 100k IOPS and 100 TB, then you need to add up the PVs (qty * static limit, or capacity * ratio limit) to make sure you're not oversubscribed.
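
A back-of-the-envelope illustration of that accounting; all numbers are made up:

```sh
CLUSTER_IOPS=100000

STATIC_PVS=50;  STATIC_LIMIT=1000              # PVs with a fixed per-volume limit
RATIO_PVS=200;  RATIO_GB=500;  IOPS_PER_GB=3   # PVs with a capacity-derived limit

requested=$(( STATIC_PVS * STATIC_LIMIT + RATIO_PVS * RATIO_GB * IOPS_PER_GB ))
echo "requested ${requested} IOPS of ${CLUSTER_IOPS} available"
# 350000 of 100000 -> oversubscribed; a provisioner doing this accounting would
# have to reject or defer further claims.
```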

mmgaggle avatar Apr 02 '20 06:04 mmgaggle

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.

stale[bot] avatar Oct 04 '20 07:10 stale[bot]

This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.

stale[bot] avatar Oct 12 '20 07:10 stale[bot]

Why isn't this a top-priority issue? A rogue pod can destroy the Ceph cluster.

matti avatar Jan 23 '21 09:01 matti

"The noisy neighbor problem". Without this feature Rook won't be useable in production, as you can slow down the whole cluster by e.g extracting a huge gzip file.

(https://github.com/rook/rook/issues/1499#issue-297467894)

matti avatar Jan 23 '21 09:01 matti

Why isn't this a top-priority issue? A rogue pod can destroy the Ceph cluster.

@matti - do you have any code for this you'd like to share with the community? That would be most welcome and would certainly help prioritize this work.

mykaul avatar Jan 24 '21 10:01 mykaul

do you have any code for this you'd like to share with the community?

Are you requesting example code for demonstrating the problem or the solution?

The example code for demonstrating the problem is very simple. It's enough to unpack a huge gzip file or run dd. You just need to run them in parallel from different nodes so that the rogue clients overwhelm the Ceph cluster.
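
A hedged sketch of that reproduction; the mount path is a placeholder:

```sh
# Run in parallel from pods on different nodes, each against its own RBD-backed PV.
dd if=/dev/zero of=/mnt/rbd-pv/burn bs=4M count=25600 oflag=direct
# ~100 GiB of direct writes per pod; with no per-volume QoS, a handful of these
# can consume most of the cluster's write bandwidth.
```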

TL;DR: some centralized limit on IOPS (e.g. 3 IOPS per GB, 100 MB/s per TB) is needed. The limit needs to be enforced at the top (on the storage side) so that you can't work around it by running enough rogue clients in parallel.

pre avatar Jan 25 '21 08:01 pre

do you have any code for this you'd like to share with the community?

Are you requesting example code for demonstrating the problem or the solution?

The solution. I'm well aware of the issue.

mykaul avatar Jan 25 '21 13:01 mykaul

Hi, are there any plans or updates on this topic please?

michaelgeorgeattard avatar Sep 07 '21 13:09 michaelgeorgeattard

I am also very interested in this feature :+1:

knfoo avatar Sep 30 '21 13:09 knfoo

Great feature. Any news here?

HaveFun83 avatar Dec 08 '22 14:12 HaveFun83

No news; a rogue pod can still destroy the entire Ceph cluster.

matti avatar Dec 08 '22 14:12 matti

Some container runtimes (such as CRI-O) support IOPS and bandwidth limits on pods. You need to add some annotations to the pods (with a policy engine like Kyverno or custom webhooks) to ensure pods are limited. See here for more info: https://github.com/cri-o/cri-o/pull/4873

m-yosefpor avatar Dec 08 '22 16:12 m-yosefpor

But a bandwidth limit means performance is always capped, i.e. always bad.

This issue is about Quality of Service where pods would be allowed to burst to maximum while still preventing exhaustion.
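
For what it's worth, librbd's QoS options do include burst settings alongside the sustained limits in recent Ceph releases. A sketch with placeholder pool/image names:

```sh
POOL=replicapool; IMG=csi-vol-example    # placeholders

# Sustained ceiling of 300 IOPS, with short bursts allowed up to 1000 IOPS.
rbd config image set "${POOL}/${IMG}" rbd_qos_iops_limit 300
rbd config image set "${POOL}/${IMG}" rbd_qos_iops_burst 1000
```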

matti avatar Dec 08 '22 17:12 matti