Julian Pelizäus
> [@roosterfish](https://github.com/roosterfish) wonder if related to [#15664](https://github.com/canonical/lxd/issues/15664)

I just found [this](https://github.com/canonical/microcloud/actions/runs/14930554727/job/41945583317?pr=774) error in the list of errors from another test suite, without any profile update happening upfront.
> [@roosterfish](https://github.com/roosterfish) are any other instances being created concurrently?

No, only this instance in all cases.
First, LXD reads the [`/sys/devices/rbd`](https://github.com/canonical/lxd/blob/main/lxd/storage/drivers/driver_ceph_utils.go#L1057-L1060) dir and loops over the files found there. Afterwards the driver tries to read [`/sys/devices/rbd//pool`](https://github.com/canonical/lxd/blob/main/lxd/storage/drivers/driver_ceph_utils.go#L1078), which no longer seems to exist. So...
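One way to tolerate that race would be to treat a vanished per-device `pool` file as "device already unmapped" rather than an error. A minimal sketch of the pattern, with hypothetical names (`findRBDDevice` is illustrative, not the actual driver code):

```go
package rbdutil

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

// findRBDDevice scans /sys/devices/rbd for a device belonging to the
// given pool. A device directory can vanish between the directory
// listing and the per-device reads (e.g. a concurrent unmap), so
// ENOENT on the pool file is skipped instead of returned as an error.
func findRBDDevice(poolName string) (string, error) {
	entries, err := os.ReadDir("/sys/devices/rbd")
	if err != nil {
		return "", err
	}

	for _, entry := range entries {
		poolFile := filepath.Join("/sys/devices/rbd", entry.Name(), "pool")
		data, err := os.ReadFile(poolFile)
		if errors.Is(err, fs.ErrNotExist) {
			// The device was unmapped after ReadDir; ignore it.
			continue
		}
		if err != nil {
			return "", err
		}

		if strings.TrimSpace(string(data)) == poolName {
			return entry.Name(), nil
		}
	}

	return "", fmt.Errorf("no mapped RBD device found for pool %q", poolName)
}
```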
> For Pure and PowerFlex we obtain a lock when mapping/unmapping a volume. Could it be that we do not do that for Ceph?

Good point. I haven't found any info...
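If that turns out to be the gap, the Ceph driver could guard map/unmap the same way. A rough sketch of a per-volume lock, assuming hypothetical names (`lockVolume`/`mapVolume` are illustrative; the real drivers may use LXD's own locking helpers instead):

```go
package rbdutil

import "sync"

// volumeLocks serializes map/unmap operations per volume so a
// concurrent unmap cannot remove a device mid-scan.
var (
	mu          sync.Mutex
	volumeLocks = map[string]*sync.Mutex{}
)

// lockVolume returns the mutex guarding the given volume, creating it
// on first use.
func lockVolume(vol string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()

	l, ok := volumeLocks[vol]
	if !ok {
		l = &sync.Mutex{}
		volumeLocks[vol] = l
	}

	return l
}

// mapVolume is a hypothetical wrapper: the actual driver call goes
// where doMap is invoked, and unmap would take the same lock.
func mapVolume(vol string, doMap func() error) error {
	l := lockVolume(vol)
	l.Lock()
	defer l.Unlock()

	return doMap()
}
```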
Maybe the cache update can be moved a few levels up, into the API handler `storagePoolsPost`.
> In what situations are you getting these warnings?

I have updated the PR's description to clarify this. The IPs/ports are from the target systems (PowerFlex SDTs) the host...
> So is rsync directly communicating with powerflex or is Linux interpreting a write to a locally mapped nvme over TCP block device as a message being sent to the...
> I would try to put something like `echo 3 > /proc/sys/vm/drop_caches` just before each `qemu-img`/`rsync` calls and retest

I will test this, thanks for the suggestion. Which value do...
@mihalicyn I have put this right in front of the `qemu-img` and `rsync` operations. Neither do the errors in the kernel log look different, nor does any of the errors...
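For reference, the Go equivalent of that shell one-liner looks roughly like this (a sketch with a hypothetical `convertImage` call site; the actual migration code and `qemu-img` flags may differ):

```go
package rbdutil

import (
	"os"
	"os/exec"
)

// dropCaches asks the kernel to drop the page cache, dentries and
// inodes, mirroring `echo 3 > /proc/sys/vm/drop_caches`. Requires
// root; running `sync` beforehand is usually recommended.
func dropCaches() error {
	return os.WriteFile("/proc/sys/vm/drop_caches", []byte("3"), 0o200)
}

// convertImage is a hypothetical call site: caches are dropped right
// before handing the block device to qemu-img.
func convertImage(src, dst string) error {
	if err := dropCaches(); err != nil {
		return err
	}

	cmd := exec.Command("qemu-img", "convert", "-O", "raw", src, dst)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	return cmd.Run()
}
```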
> @roosterfish shall we close this?

As it's clearly not causing any issues/errors on the LXD side, I will close it for now.