Parth Arora
I am out of ideas on this:

```
2024-05-20 13:30:06.121719 D | ceph-crashcollector-controller: deleting cronjob if it exists...
2024-05-20 13:30:06.121746 E | ceph-crashcollector-controller: context canceled
```

@travisn do you have...
Looks like the same problem: in `c.clusterInfo.Context.Err()`, `clusterInfo` would be nil but we are accessing its `Context`. And there seems to be no option to disable telemetry.
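A minimal sketch of the kind of guard that could avoid the nil dereference, assuming a struct with a `Context` field as the surrounding code suggests. Type and function names here are illustrative stand-ins, not the actual Rook code:

```
package main

import (
	"context"
	"errors"
	"fmt"
)

// clusterInfo is a stand-in for the operator's cluster info type,
// which carries the controller's context.
type clusterInfo struct {
	Context context.Context
}

type reconciler struct {
	clusterInfo *clusterInfo
}

// contextErr checks for a nil clusterInfo (or nil Context) before
// dereferencing, instead of calling c.clusterInfo.Context.Err() directly.
func (c *reconciler) contextErr() error {
	if c.clusterInfo == nil || c.clusterInfo.Context == nil {
		return errors.New("cluster info is not initialized yet")
	}
	return c.clusterInfo.Context.Err()
}

func main() {
	c := &reconciler{} // clusterInfo is nil, as in the reported failure
	fmt.Println(c.contextErr())
}
```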
@travisn can we suggest updating to 1.10?
Simulation: I created a sample crash by creating a file in /var/lib/ceph/crash/. I was able to see the crash in `ceph crash ls`, but the crash collector pod was not logging it.
@SrushtiSapkale you can read about this in the Rook and Ceph docs: https://rook.io/docs/rook/latest-release/CRDs/Cluster/ceph-cluster-crd/?h=crash+collector#cluster-settings https://docs.ceph.com/en/quincy/mgr/crash/
@travisn just a clarification: doesn't this setting conflict with the device class we choose per `storageClassDeviceSets`?
@KKonak which platform are you on?
@zhangdeshuai1999 you can try this out https://rook.io/docs/rook/latest-release/CRDs/Cluster/external-cluster/#exporting-rook-to-another-cluster
@Nathanael-Mtd I think we need to delete the old storageclass, and with upgrades it should create a new one with the same name. @travisn Or can we add an automation for...
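A rough sketch of what that automation could look like with client-go: delete the old StorageClass if it exists, then recreate it under the same name (most StorageClass fields are immutable after creation, which is why an in-place update doesn't work). This is illustrative only, not code that exists in Rook:

```
package upgrade

import (
	"context"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	kerrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recreateStorageClass deletes the StorageClass by name (ignoring
// "not found") and then creates the new definition with the same name.
func recreateStorageClass(ctx context.Context, client kubernetes.Interface, sc *storagev1.StorageClass) error {
	err := client.StorageV1().StorageClasses().Delete(ctx, sc.Name, metav1.DeleteOptions{})
	if err != nil && !kerrors.IsNotFound(err) {
		return fmt.Errorf("failed to delete old storageclass %q: %w", sc.Name, err)
	}
	if _, err := client.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{}); err != nil {
		return fmt.Errorf("failed to recreate storageclass %q: %w", sc.Name, err)
	}
	return nil
}
```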
> I don't think Helm has such a mechanism to recreate CRs

We can try a helm hook:

```
metadata:
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
```

I will try this.
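If I'm reading the Helm hook docs right, marking a resource as a `pre-upgrade` hook takes it out of normal release management, and `before-hook-creation` (the default delete policy) makes Helm delete the previous hook resource before creating it again, so the resource would effectively be recreated on every upgrade.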