Gabriel Mougard

68 comments by Gabriel Mougard

Yes, sorry for not mentioning that, but `ceph01`, `ceph02` and `ceph03` all share the same Ceph pool; it's just that `ceph02` is not in the LXD cluster. ``` root@ceph01:~# microceph...
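For reference, a quick way to confirm that all three nodes see the same Ceph deployment (a sketch using MicroCeph's standard CLI; the pool name will vary):

```shell
# On any node: list the cluster members known to MicroCeph
microceph cluster list

# Show overall Ceph health and the OSDs contributed by each node
microceph.ceph status
microceph.ceph osd tree

# List the pools; the pool backing LXD should appear on every node
microceph.ceph osd pool ls
```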

Here is my step-by-step setup: * Create a Ceph cluster on `ceph01`, `ceph02`, `ceph03` * `microceph cluster bootstrap` on `ceph01`, then `microceph cluster add ceph02` and `microceph...
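The steps above can be sketched as follows (join tokens are left as placeholders, since the real ones are printed by `microceph cluster add`):

```shell
# Bootstrap MicroCeph on the first node
root@ceph01:~# microceph cluster bootstrap

# Generate a join token for each of the other two nodes
root@ceph01:~# microceph cluster add ceph02   # prints a join token
root@ceph01:~# microceph cluster add ceph03   # prints a join token

# On each remaining node, join the cluster with its token
root@ceph02:~# microceph cluster join <token-for-ceph02>
root@ceph03:~# microceph cluster join <token-for-ceph03>
```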

@ru-fu the scenario you provided (with container and vm images) works on my side. Could you attach the LXD server logs (`lxc monitor --pretty`) with the reproducer you provided? The...

@ru-fu Are you running LXD in a cluster configuration, or is it in standalone mode? Also, is it causing issues if you upgrade your LXD deployment (maybe the fact that refreshing...

@edlerd URLs of the form `/1.0/storage-pools/my-ceph-pool/volumes/image/93de62f5c2a8cee537636264e33173f2bbb8fe219eb1a83253119ad2bb37cef` are actually valid (see here: https://github.com/canonical/lxd/blob/4e8c581c0df76b170a5e8e36a2c165a1735a40fe/lxd/storage_volumes.go#L1663C1-L1708C74) and I'm not facing a 404 issue for ZFS-backed image volumes for an orphaned image on my...
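For anyone wanting to check such a volume by hand, the same endpoint can be hit directly with `lxc query` (the pool name and fingerprint below are placeholders):

```shell
# Query the image volume record through the LXD API; a 404 here would
# indicate the missing-volume behaviour being discussed
lxc query /1.0/storage-pools/my-ceph-pool/volumes/image/<fingerprint>
```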

Just for terminology: does 'orphaned' mean that the image is stored in a volume but not currently used by any instance? Or is there something else that I'm missing...

@edlerd Ah I see! I'd naively think about a data corruption issue (maybe an admin removed the underlying image volume manually inside ceph). Ok, I'll first try to reproduce the...
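To simulate that kind of corruption, one could delete the backing RBD image out from under LXD (a sketch; the pool name is illustrative, and the exact RBD image name should be taken from the `rbd ls` output rather than guessed):

```shell
# List the RBD images in the Ceph pool backing the LXD storage pool
rbd --pool my-ceph-pool ls

# Remove the image volume's RBD image manually, bypassing LXD entirely,
# so LXD's database still references a volume that no longer exists
rbd --pool my-ceph-pool rm <rbd-image-name-from-listing>
```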

@edlerd are you able to reproduce the above scenario?