Travis Nielsen
Ceph tracker created: https://tracker.ceph.com/issues/63391 @guits Could you take a look?
This looks related to https://github.com/ceph/ceph/pull/52429. Looking into whether a fix or workaround is needed on the Rook side of this change...
Agreed, this is also a blocker for v18.2.1. I was just chatting with @guits; he is looking into it, and we should have a better understanding of the fix by tomorrow...
Independent of the crash, https://github.com/ceph/ceph/pull/54392 is still needed to revert the change for LVs; otherwise Rook won't find the expected devices for the OSDs. This results in...
It's the same fundamental issue affecting c-v both during OSD creation in the osd prepare job and during OSD activation for existing OSDs, so we are looking forward to...
Note that we expect v18.2.1 to have the fix. A test PR #13203 showed that the canary tests were all successful with the tag `quay.io/ceph/daemon-base:latest-reef-devel`. Now we just need to...
> @travisn any chance rook CI could test against `latest--devel` instead of just stable/released ceph tags? If we could test with this tag we wouldn't have to wait for a regression...
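For reference, here is a minimal sketch of what pointing a test cluster at that devel tag could look like. The `rook-ceph` namespace and cluster name are the Rook defaults and just assumptions here, not taken from the CI setup:

```sh
# Sketch only: switch an existing test CephCluster to the reef devel image.
# The "rook-ceph" namespace and cluster name are the Rook defaults, assumed for illustration.
# allowUnsupported lets Rook run an image tag it does not treat as a released version.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
  -p '{"spec":{"cephVersion":{"image":"quay.io/ceph/daemon-base:latest-reef-devel","allowUnsupported":true}}}'
```

In CI the image would be set in the test manifests rather than patched at runtime, but `spec.cephVersion.image` is the field that determines which Ceph build gets tested.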
Still an issue, waiting for the fix to be released in v17.2.8. Note that this issue does not affect v18.
Rook only supports bluestore, so you can be sure the OSDs are all running bluestore. Rook creates OSDs with `ceph-volume` in "raw" mode, which means the device or partition is directly...
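For anyone unfamiliar with raw mode, this is roughly the shape of the `ceph-volume` calls involved; a sketch only, and `/dev/sdb` is an example device, not one from this issue:

```sh
# Prepare a bluestore OSD directly on a raw device or partition (no LVM layer in between):
ceph-volume raw prepare --bluestore --data /dev/sdb

# List the raw-mode OSDs discovered on the node's devices:
ceph-volume raw list
```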
> Ceph deployment requires a log disk and a data disk; how do I distinguish them?

Which disks do you mean? Each OSD only requires one disk.
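If it helps, here is a hedged sketch of what a single-disk OSD looks like in the CephCluster storage spec; the namespace, cluster, node, and device names are placeholders for illustration:

```sh
# Sketch only: one OSD backed by a single data disk; no separate log/DB disk is required.
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '
{"spec":{"storage":{"useAllNodes":false,"useAllDevices":false,
  "nodes":[{"name":"node1","devices":[{"name":"sdb"}]}]}}}'
```

If you do want the DB/WAL on a separate (usually faster) device, that is the optional `metadataDevice` setting in the storage config, but it is not required.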