Parth Arora

278 comments of Parth Arora

> I've fixed it with this, but tbh I am not sure if it's the best way:
>
> ```
> for sc in rook-ceph-block-main ceph-filesystem; do
>   kubectl patch storageclass...
> ```

@sp98 we can use KEDA for the CephCluster CR too, and for metrics we can take memory and CPU into consideration, wdyt? But that would be HPA.
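
For context, a minimal sketch of what a KEDA object with those triggers could look like. KEDA's `cpu` and `memory` scalers are HPA-backed (which is the caveat above), and KEDA can only scale workloads that expose the scale subresource, so the target here is a Deployment rather than the CephCluster CR itself; all names are illustrative, not from the original discussion:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: ceph-cpu-mem-scaler    # hypothetical name
  namespace: rook-ceph
spec:
  scaleTargetRef:
    name: rook-ceph-operator   # hypothetical target; must have a scale subresource
  minReplicaCount: 1
  maxReplicaCount: 3
  triggers:
    - type: cpu                # KEDA implements cpu/memory triggers via the HPA
      metricType: Utilization
      metadata:
        value: "80"            # target average CPU utilization in percent
    - type: memory
      metricType: Utilization
      metadata:
        value: "80"            # target average memory utilization in percent
```

Note that the `cpu` and `memory` scalers only work when the target pods declare resource requests, since the underlying HPA computes utilization against them.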

@travisn In case of multiple replicas:

```
failureDomain: zone
specificDomain: zone1
replicated:
  size: 3
  subFailureDomain: host
```

How are we planning to have data replicated on a specific zone?
├──...
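
For reference, the CRUSH rule such a spec would presumably need to generate takes the named zone first and then spreads replicas across hosts inside it; a hand-written sketch, where the rule name and the `zone1` bucket are assumptions taken from the spec above:

```
# sketch: pin the rule to one zone, then chooseleaf across hosts within it
rule zone1-replicated {
    id 7
    type replicated
    step take zone1
    step chooseleaf firstn 0 type host
    step emit
}
```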

> [@parth-gr](https://github.com/parth-gr) Would [this example from external clusters](https://rook.io/docs/rook/latest-release/CRDs/Cluster/external-cluster/topology-for-external-mode/#example-configuration) also apply to this scenario? That sample has a CRUSH rule that picks a specific zone, then a step for different osds....

@travisn I was thinking about the `crushRoot` setting: https://github.com/rook/rook/blob/master/pkg/apis/ceph.rook.io/v1/types.go#L946 Why can't we use it in place of a specific failure domain spec?
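
A hedged sketch of how that could look on a pool spec, consistent with the rule dumped in the next comment; this is a reconstruction rather than a confirmed manifest, with the pool and bucket names taken from that dump:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool3
  namespace: rook-ceph
spec:
  crushRoot: us-east-1   # assumption: a CRUSH bucket named us-east-1 exists
  deviceClass: ssd       # with crushRoot, yields a "take us-east-1~ssd" step
  failureDomain: host    # replicas still spread across hosts under that root
  replicated:
    size: 3
```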

@travisn It worked:

```
sh-5.1$ ceph osd crush rule dump replicapool3
{
    "rule_id": 7,
    "rule_name": "replicapool3",
    "type": 1,
    "steps": [
        {
            "op": "take",
            "item": -16,
            "item_name": "us-east-1~ssd"
        },
        {
            "op":...
```
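
For comparison, an equivalent rule can be created by hand with the ceph CLI; a sketch only, since the dump above does not show how this particular rule was actually created:

```
# create-replicated <name> <root> <failure-domain-type> [<device-class>]
sh-5.1$ ceph osd crush rule create-replicated replicapool3 us-east-1 host ssd
```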

> As discussed, this likely is causing the overlapping roots issue with the pg autoscaler, and the PGs won't scale (confirm in the mgr logs). We will need to find...
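
A hedged way to confirm this from the toolbox: `ceph osd pool autoscale-status` is the standard autoscaler view, while the mgr deployment name and the exact log wording below are assumptions about a typical Rook layout:

```
# Pools blocked by overlapping roots typically show no PG_NUM target change here
sh-5.1$ ceph osd pool autoscale-status

# The pg_autoscaler module reports the overlapping-roots warning in the mgr logs
$ kubectl -n rook-ceph logs deploy/rook-ceph-mgr-a | grep -i "overlapping roots"
```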