
virtio-fs looks a bit bad in performance.

Open daiaji opened this issue 3 years ago • 18 comments

[CrystalDiskMark screenshot: CrystalDiskMark_20210726222405]

I use an Intel 900P 280GB Optane SSD with the btrfs file system. The mount parameters are as follows:

/dev/nvme0n1p2 on /home type btrfs (rw,noatime,compress=zstd:3,ssd,discard=async,space_cache=v2,subvolid=1527,subvol=/@home)

The libvirt configuration is as follows:

<filesystem type='mount' accessmode='passthrough'>    
  <driver type='virtiofs' queue='1024'/>    
  <source dir='/home/test/Vfs'/>    
  <target dir='Vfs'/>    
</filesystem>
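For reference, the manual (non-libvirt) equivalent of such a share looks roughly like the sketch below. The paths, cache mode, and service name are assumptions, and with the libvirt `<filesystem>` element above the virtiofsd daemon is normally started automatically:

```shell
# Host: start virtiofsd for the directory to be shared (libvirt does this
# itself for the XML above; shown here only for reference).
/usr/libexec/virtiofsd --socket-path=/tmp/vhostqemu \
    -o source=/home/test/Vfs -o cache=auto

# Windows guest: with WinFsp and the virtio-fs driver installed, the share
# tagged 'Vfs' is mounted once the virtio-fs service is running:
#   sc start VirtioFsSvc
```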

I don't know why the performance is so bad; sequential read/write throughput is only about one-fifteenth of bare metal.

daiaji avatar Jul 26 '21 14:07 daiaji

Hello All,

Please help us understand your use cases for virtio-fs, so that we can make virtio-fs support better. Please participate in the discussion and add your use cases: https://github.com/virtio-win/kvm-guest-drivers-windows/discussions/726

Thanks a lot, Yan.

YanVugenfirer avatar Jan 31 '22 12:01 YanVugenfirer

@YanVugenfirer I noticed that this issue was tagged as a feature request. Does this mean that performance optimization of the Windows virtio-fs driver only started some time ago?

daiaji avatar Feb 01 '22 08:02 daiaji

Hi @daiaji, the virtio-fs driver is a relatively new driver in the virtio-win collection, and we currently view it as a tech preview. That means we expect some issues, certainly including performance-related ones.

YanVugenfirer avatar Feb 01 '22 13:02 YanVugenfirer

I also hit the performance problem: virtiofs performs poorly compared with Samba in a Windows guest. Compared with a Linux guest, its sequential read/write throughput is only about one-third.

wangyan0507 avatar Mar 06 '23 06:03 wangyan0507

Hi @wangyan0507

How do you measure the performance?

viktor-prutyanov avatar Mar 06 '23 07:03 viktor-prutyanov

Hi @viktor-prutyanov

[Compared with a Linux guest]

virtiofsd:

./virtiofsd --socket-path=/tmp/vhostqemu -o source=$TESTDIR -o cache=always

qemu-6.2:

./build/qemu-system-aarch64 \
    -chardev socket,id=char0,path=/tmp/vhostqemu \
    -device vhost-user-fs-pci,queue-size=1024,chardev=char0,tag=myfs \
    -m 4G -object memory-backend-memfd,id=mem1,size=4G,share=on -numa node,memdev=mem1 \
    -machine virt,virtualization=off,its=off,gic-version=host \
    -accel kvm \
    -cpu host -smp 4 \
    -bios ./QEMU_EFI.fd \
    -object iothread,id=disk-iothread \
    -device virtio-blk-pci,drive=win11,iothread=disk-iothread \
    -drive if=none,id=win11,format=qcow2,file=./win11.qcow2,overlap-check=none,cache=unsafe

fio:

win11 guest:
fio -name=test -filename=./test -direct=1 -iodepth=8 -rw=read/write -ioengine=windowsaio -bs=1M -size=4g -group_reporting --time_based -runtime=20

ubuntu22.04 guest:
fio -name=test -filename=./test -direct=1 -iodepth=8 -rw=read/write -ioengine=libaio -bs=1M -size=4g -group_reporting --time_based -runtime=20

test        read (bs=1M)  write (bs=1M)
win11       281 MB/s      300 MB/s
ubuntu2204  1078 MB/s     1262 MB/s
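Worked out from these numbers, the Windows guest reaches only about a quarter of the Linux guest's sequential throughput:

```shell
# Sequential-throughput ratio of the win11 guest vs. the ubuntu2204 guest,
# using the figures reported above.
awk 'BEGIN { printf "read %.2f write %.2f\n", 281/1078, 300/1262 }'
# → read 0.26 write 0.24
```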

wangyan0507 avatar Mar 06 '23 08:03 wangyan0507

@wangyan0507

So, you're running VirtIO-FS on ARM64?

viktor-prutyanov avatar Mar 06 '23 08:03 viktor-prutyanov

@viktor-prutyanov Yes, the host is arm64 os.

The Samba performance with the win11/ubuntu2204 guests:

test        read (bs=1M)  write (bs=1M)
win11       996 MB/s      896 MB/s
ubuntu2204  935 MB/s      889 MB/s

It's better than virtiofs on win11, but worse than virtiofs on ubuntu2204.

wangyan0507 avatar Mar 06 '23 08:03 wangyan0507

@wangyan0507 Is the guest ARM64 as well or is it a guest running with binary translation?

YanVugenfirer avatar Mar 16 '23 16:03 YanVugenfirer

> @wangyan0507 Is the guest ARM64 as well or is it a guest running with binary translation?

ARM64. I found one reason: the Windows virtiofs driver does not support async I/O.
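One way to see what missing async I/O support costs is to rerun the same Linux-guest fio job with the queue depth forced to 1, which approximates synchronous one-request-at-a-time submission. A sketch; the mount point and file name are assumptions:

```shell
# iodepth=8: up to 8 requests in flight (benefits from async submission)
fio --name=qd8 --filename=/mnt/myfs/test --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --size=4g --iodepth=8 --time_based --runtime=20

# iodepth=1: one request at a time, roughly what a driver without
# async I/O support degrades to regardless of the requested depth
fio --name=qd1 --filename=/mnt/myfs/test --ioengine=libaio --direct=1 \
    --rw=read --bs=1M --size=4g --iodepth=1 --time_based --runtime=20
```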

wangyan0507 avatar Mar 17 '23 04:03 wangyan0507

On x86 there is indeed a performance degradation compared with a Linux guest.

Ws2022:
WRITE: bw=111MiB/s (117MB/s), 891KiB/s-901KiB/s (912kB/s-922kB/s), io=128GiB (137GB), run=1164286-1177315msec
READ: bw=122MiB/s (128MB/s), 979KiB/s-1351KiB/s (1003kB/s-1383kB/s), io=128GiB (137GB), run=776408-1070633msec

vs.

RHEL9:
WRITE: bw=1095MiB/s (1148MB/s), 8759KiB/s-9644KiB/s (8969kB/s-9876kB/s), io=128GiB (137GB), run=108725-119716msec
READ: bw=694MiB/s (728MB/s), 5555KiB/s-5632KiB/s (5688kB/s-5767kB/s), io=128GiB (137GB), run=186176-188771msec

Ws2022: "C:\Program Files (x86)\fio\fio\fio.exe" --name=stress --filename=Z:/test_file --ioengine=windowsaio --rw=write/read --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800 --thread

RHEL9: /usr/bin/fio --name=stress --filename=/mnt/myfs/test_file --ioengine=libaio --rw=write/read --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800

xiagao avatar Mar 17 '23 06:03 xiagao

Hi @wangyan0507

Could you please share how you build VirtIO-FS for ARM64 Windows?

viktor-prutyanov avatar Mar 20 '23 08:03 viktor-prutyanov

I was also doing a performance test of virtiofs compared with Samba, and the result shows that virtiofs performance is better than Samba's. I created the Samba share on the host.

fio result of Samba on ws2022:
WRITE: bw=47.0MiB/s (49.3MB/s), 376KiB/s-377KiB/s (385kB/s-386kB/s), io=82.8GiB (88.9GB), run=1801434-1802944msec
READ: bw=34.5MiB/s (36.2MB/s), 245KiB/s-285KiB/s (251kB/s-292kB/s), io=60.8GiB (65.3GB), run=1803607-1803660msec

fio cmd line: "C:\Program Files (x86)\fio\fio\fio.exe" --name=stress --filename=\\10.73.72.116\test_smb\test_file --ioengine=windowsaio --rw=write/read --direct=1 --bs=4K --size=1G --iodepth=256 --numjobs=128 --runtime=1800 --thread

BTW, virtiofs version: virtiofsd-1.5.0-1.el9.x86_64; virtio-win-prewhql version: 0.1-234.

xiagao avatar Mar 20 '23 09:03 xiagao

Tested with the same software and observed a similar performance drop. [screenshot]

fecet avatar Jul 25 '23 04:07 fecet

> Hi @wangyan0507
>
> Could you please share how do you build VirtIO-FS for ARM64 Windows?

Sorry, forgot to reply. I use the official virtio driver; I did not build it myself.

wangyan0507 avatar Jul 25 '23 06:07 wangyan0507