devZer0

Results 144 comments of devZer0

> I believe some if not all of this may be improved with https://github.com/openzfs/zfs/pull/14359

no, unfortunately not. metadata / dnode information is still getting evicted too early and you have no...
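
for reference, this kind of eviction can be watched live via the arc kstats on linux - a rough sketch (field names differ a bit between zfs versions, so treat this as illustrative):

```
# poll ARC metadata / dnode counters every 2s while the workload runs;
# if these shrink while streaming data keeps coming in, metadata is being evicted
watch -n 2 "grep -E '^(arc_meta_used|arc_meta_limit|dnode_size|arc_dnode_limit)' /proc/spl/kstat/zfs/arcstats"
```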

could you describe your environment a little bit? kvm / vm settings/config, for example? what kvm version/management solution? no corruption/data loss besides the partition-table loss? what are the zvols located on?...
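
on a proxmox box, that kind of info could be collected roughly like this (vmid 100 and pool "tank" are just placeholders for illustration):

```
pveversion -v                                   # proxmox / kernel / zfs versions
qm config 100                                   # vm config incl. disk + cache settings
zpool status                                    # pool layout and health
zfs list -t volume                              # which zvols exist
zfs get volblocksize,compression,sync tank/vm-100-disk-0   # zvol properties
```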

does this happen with zvols only? how does it behave with an ordinary file, e.g. if you convert the vmdk to a zfs-backed file? also have a look at this one: https://github.com/openzfs/zfs/issues/7631...
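
as a sketch, the file-backed test case could look like this (dataset/zvol paths are made up, adjust to your pool):

```
# vmdk -> raw file on a plain zfs dataset (file-backed)
qemu-img convert -p -f vmdk -O raw disk.vmdk /tank/images/disk.raw

# vmdk -> written directly onto a zvol (zvol-backed, for comparison)
qemu-img convert -p -f vmdk -O raw disk.vmdk /dev/zvol/tank/vm-100-disk-0
```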

oh, and what about this one? https://bugzilla.proxmox.com/show_bug.cgi?id=2624

mhh, weird. i would try triggering that problem with a mostly incompressible and consistent "write only" load to see if it makes a difference. maybe qemu-img is doing something special......
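
something along these lines (zvol path is a placeholder, /tmp needs ~1g free):

```
# pre-generate an incompressible buffer so /dev/urandom doesn't become the bottleneck
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1024

# stream it to the zvol as a steady write-only load, bypassing the page cache
while true; do
    dd if=/tmp/random.bin of=/dev/zvol/tank/vm-100-disk-0 bs=1M oflag=direct
done
```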

i can easily reproduce your issue on my recent proxmox system with the latest zfs 0.8.3 while issuing the following command on the proxmox host (filling the zvol of an offline...
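
(the exact command is cut off above; for reference, the zvol device of a stopped guest can be found on the host like this - pool/disk names are placeholders:)

```
zfs list -t volume                      # list all zvols in the pool
ls -l /dev/zvol/tank/vm-100-disk-0      # block device node the guest disk maps to
```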

i have found with further testing that this seems completely unrelated to kvm. i also get completely stalled reads from a zvol on ssd when doing a large streaming write to a zvol...
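
a crude way to reproduce that, assuming two test zvols and the incompressible buffer from above:

```
# background: large streaming write to one zvol
dd if=/tmp/random.bin of=/dev/zvol/tank/writetest bs=1M oflag=direct &

# foreground: time a read from another zvol on the same pool;
# on an affected system this read stalls badly while the write is running
time dd if=/dev/zvol/tank/readtest of=/dev/null bs=1M count=1024 iflag=direct
wait
```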

> while with qemu-img something happens to the zvol that makes things go downhill, while with dd it does not lead to crashes

i think this is just a matter...

here we go. unfortunately, i don't see a significant improvement. compared to a file or lvm, writes to the zvol are slow, while reads severely starve:

```
Test System: platform: kvm/proxmox virtual machine...
```
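
the comparison can be repeated roughly like this with fio (paths are placeholders for the zvol, the file on a dataset and an lvm volume; sizes and options are arbitrary):

```
# one sequential write job and one sequential read job against the same target,
# run once per backend so the numbers are comparable
for target in /dev/zvol/tank/bench /tank/files/bench.img /dev/vg0/bench; do
    echo "== $target =="
    fio --name=write --filename="$target" --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio \
        --name=read  --filename="$target" --rw=read  --bs=1M --size=4G --direct=1 --ioengine=libaio
done
```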

@layer7gmbh, what type of kvm host is this? it seems it's NOT proxmox-based, but most (if not all, besides this one) of the reports i see on lost...