Alex Zheng
```
$ zfs list -o name,refer,used,avail,volsize,type -t volume,snapshot
NAME                            REFER  USED  AVAIL  VOLSIZE  TYPE
linstor_thinpool/testvol_00000  231M   231M  96.2G  10.0G    volume
```
Version:
```
$ modinfo zfs
filename:       /lib/modules/4.15.0-188-generic/updates/dkms/zfs.ko
version:        2.1.4-0york0~18.04...
```
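For reference, the same numbers can also be read per property; a quick sketch against the volume shown above (`-p` makes zfs print exact byte values):
```
# exact allocation vs. logical size of the zvol
$ zfs get -p used,referenced,logicalused,volsize linstor_thinpool/testvol_00000
```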
I understand the philosophy here, but I would recommend changing the "Unknown" state to something more sensible like "Raw" (since it has no DRBD layer). The "InUse" column provides a bottom view...
It is the `linstor volume list` output that shows the 100% allocation. Also, the file_thin backend uses a loop device instead of an LV.
```
# linstor volume list
+---------------------------------------------------------------------------------------------------------------------------------------------+
| Node | Resource...
```
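For context, this is roughly what a file-backed thin volume looks like at the OS level, assuming plain losetup tooling (the file name and loop device are illustrative, not what LINSTOR actually creates):
```
# create a sparse backing file and attach it to a loop device
truncate -s 10G /var/lib/piraeus/storagepools/DfltStorPool/example_00000.img
losetup --find --show /var/lib/piraeus/storagepools/DfltStorPool/example_00000.img

# apparent size is 10G, but on-disk usage stays small until blocks are written (thin behaviour)
du -h --apparent-size /var/lib/piraeus/storagepools/DfltStorPool/example_00000.img
du -h /var/lib/piraeus/storagepools/DfltStorPool/example_00000.img
```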
Hi @ghernadi, v1.4.3 fixed the allocation display for the file backend, but it seems to introduce a new issue. Here, k8s-worker-3, 4, and 6 are CentOS; 7 and 9 are Ubuntu.
```
ERROR:
Description:
    Node: 'k8s-worker-3',...
```
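The truncated trace above comes from a LINSTOR error report; for completeness, this is how such a report can be pulled from the controller (the report ID below is just a placeholder):
```
# list recent error reports, then show one in full
linstor error-reports list
linstor error-reports show 5E5D0F05-00000-000001
```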
It seems to be a directory issue. The pool is at /var/lib/piraeus/storagepool/DfltStorPool.
```
# linstor --no-utf8 sp lp k8s-worker-3 DfltStorPool
+------------------------------------------------------------------+
| Key                | Value                                       |
|==================================================================|
| StorDriver/FileDir | /var/lib/piraeus/storagepools/DfltStorPool |...
```
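To rule the directory itself in or out, a plain check on the node (nothing LINSTOR-specific; path taken from the property above):
```
# confirm the configured FileDir exists and is a directory
ls -ld /var/lib/piraeus/storagepools/DfltStorPool

# filesystem type backing the pool directory
stat -f -c %T /var/lib/piraeus/storagepools/DfltStorPool
```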
Also, I ran the command manually on the host:
```
# stat -c %B %b /var/lib/piraeus/storagepools/DfltStorPool/pvc-74577f5f-c621-43d6-ad03-b81b14a2f321_00000.img
stat: cannot stat '%b': No such file or directory
512
```
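For what it's worth, the "cannot stat '%b'" part above is just shell word splitting in the manual run: unquoted, `%B` becomes the format string and `%b` becomes a file operand. Quoting the format gives the intended single line (same image path as above):
```
# quote the format so stat sees "%B %b" as one format argument
stat -c '%B %b' /var/lib/piraeus/storagepools/DfltStorPool/pvc-74577f5f-c621-43d6-ad03-b81b14a2f321_00000.img
# prints the block size in bytes followed by the number of blocks allocated
```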
Also, on Ubuntu, why does it stat a .snap file?
```
ERROR:
Description:
    Node: 'k8s-worker-9', resource: 'pvc-53dcdb03-04a6-439a-bf71-a630b988e54d', volume: 0 - Device provider threw a storage exception
Details:
    Command 'stat -c %B %b /var/lib/snapd/snaps/core_8592.snap'...
```
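Possibly relevant: on Ubuntu, snapd mounts every installed .snap package through a loop device, so those files show up whenever loop devices are enumerated. A quick way to see them (plain losetup, nothing LINSTOR-specific):
```
# list all loop devices with their backing files; snapd entries point at /var/lib/snapd/snaps/*.snap
losetup -l
losetup -l | grep '/var/lib/snapd/snaps'
```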
Hi @ghernadi, as of 1.5.2 this issue still persists.
Hi @ghernadi, 1.5.2 seems unable to create the loop device at all.
```
$ linstor v l
+----------------------------------------------------------------------------------------------------------------------------------------+
| Node | Resource | StoragePool | VolumeNr | MinorNr | DeviceName |...
```
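In case it helps narrow this down, this is what I would check on the node to see whether the backing image and loop device were created at all (paths assume the DfltStorPool directory from earlier):
```
# backing image files the file-thin pool should have created
ls -lh /var/lib/piraeus/storagepools/DfltStorPool/

# loop devices currently attached and their backing files
losetup -l
```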
@ghernadi Please try to recreate the issue using a file-thin pool instead of an LVM pool.
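A rough sketch of the setup used here, in case it helps with the reproduction; the `filethin` provider keyword is an assumption about the client syntax, and the resource name and size are just examples:
```
# storage pool backed by a directory of image files (provider keyword assumed to be "filethin")
linstor storage-pool create filethin k8s-worker-3 DfltStorPool /var/lib/piraeus/storagepools/DfltStorPool

# example resource placed on that pool ("testres" and 1G are arbitrary)
linstor resource-definition create testres
linstor volume-definition create testres 1G
linstor resource create k8s-worker-3 testres --storage-pool DfltStorPool
```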