Results: 284 comments of Max Makarov

I've added `enableCrushUpdates: true` to existing replicated pools and everything was okay, but when I added it to an `erasure coded` pool I got this error:

```
I | ceph-block-pool-controller: creating...
```
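To pull the full controller error out, tailing the Rook operator logs is usually enough (a sketch, assuming a default install with the operator deployment named `rook-ceph-operator` in the `rook-ceph` namespace):

```bash
# Show recent block-pool controller messages from the Rook operator
kubectl -n rook-ceph logs deploy/rook-ceph-operator --tail=500 \
  | grep -i "ceph-block-pool-controller"
```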

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rgw-data-ec
  namespace: rook-ceph
spec:
  enableCrushUpdates: true
  application: rgw
  failureDomain: host
  deviceClass: hdd
  erasureCoded:
    dataChunks: 4
    codingChunks: 2
  parameters:
    bulk: "true"
    compression_mode: force
    target_size_ratio: ...
```

My current erasure code profile:

```
kubectl rook-ceph ceph osd erasure-code-profile get rgw-data-ec_ecprofile
Info: running 'ceph' command with args: [osd erasure-code-profile get rgw-data-ec_ecprofile]
crush-device-class=
crush-failure-domain=host
crush-num-failure-domains=0
crush-osds-per-failure-domain=0
crush-root=default
jerasure-per-chunk-alignment=false
k=4...
```

It seems `enableCrushUpdates` is completely broken for `erasure coded` pools: it updates neither the `erasure code profile` nor the `crush rule`.
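For anyone hitting the same thing, a quick way to check whether anything was actually updated (a sketch, assuming the CRUSH rule is named after the pool, which is Ceph's default for EC pools):

```bash
# Which CRUSH rule is the EC pool actually using?
ceph osd pool get rgw-data-ec crush_rule

# Inspect that rule's failure domain and device class constraints
ceph osd crush rule dump rgw-data-ec

# Compare with the erasure code profile Rook generated for the pool
ceph osd erasure-code-profile get rgw-data-ec_ecprofile
```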

@travisn I managed to do it manually:

```bash
ceph osd erasure-code-profile set rgw-data-ec_ecprofile \
  k=4 m=2 \
  plugin=jerasure technique=reed_sol_van w=8 \
  crush-failure-domain=host crush-root=default \
  crush-device-class=hdd \
  --force --yes-i-really-mean-it
ceph osd...
```
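Leaving the truncated part as-is, the general pattern for pointing a pool at an updated profile is to create a new EC CRUSH rule from it and switch the pool over. This is only a sketch of one way to do it, not the rest of the command above, and the rule name is illustrative:

```bash
# Create a CRUSH rule from the updated profile and switch the pool to it
ceph osd crush rule create-erasure rgw-data-ec-host rgw-data-ec_ecprofile
ceph osd pool set rgw-data-ec crush_rule rgw-data-ec-host
```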

> You are seeing the PGs move around as expected to respect the new failure domain, and everything is working properly with that pool in your tests?

Yes, I've changed...
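A couple of standard commands make it easy to watch the backfill after a failure-domain change like this (nothing Rook-specific, just plain Ceph tooling):

```bash
# High-level recovery/backfill progress
ceph -s

# Per-PG state summary, and how data is spread across hosts
ceph pg stat
ceph osd df tree
```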

> [@maxpain](https://github.com/maxpain) Good to hear it appears to be working, though I don't expect to automate it in rook since ceph doesn't support it.

What do you mean? Is it...

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: hubble-relay
  namespace: kube-system
spec:
  endpointSelector:
    matchLabels:
      k8s-app: hubble-relay
  egress:
    - toEntities:
        - host
        - remote-node
      toPorts:
        - ports:
            - port: "4244"
              protocol: TCP...
```
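If it helps anyone debugging the same policy, checking that it was accepted and that traffic to 4244 isn't being dropped can be done roughly like this (a sketch, assuming the Hubble CLI is installed and pointed at hubble-relay):

```bash
# Was the policy accepted? (cnp is the short name for ciliumnetworkpolicies)
kubectl -n kube-system get cnp hubble-relay

# Any drops from the relay pods towards the nodes on port 4244?
hubble observe --namespace kube-system \
  --label k8s-app=hubble-relay --port 4244 --verdict DROPPED
```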

I was getting these errors after the rollout process, even though every pod was healthy. I had to restart my client application to get rid of the error. I need to reproduce...

This could also potentially simplify "Kubernetes as a Service" implementations, in which control plane nodes are completely hidden and managed (like GKE).