kvm-guest-drivers-windows

virtiofsd exits as soon as viofs.sys is loaded

Open · Marco98 opened this issue 4 years ago · 8 comments

Hello, I'm currently trying to set up the new virtio-fs in my WS2019 virtual machine, but as soon as the viofs driver is loaded, virtiofsd exits. libvirt QEMU log:

2020-07-28 19:20:07.197+0000: Starting external device: virtiofsd
/usr/lib/qemu/virtiofsd --fd=29 -o source=/viofstest
2020-07-28 19:20:07.207+0000: starting up libvirt version: 6.5.0, qemu version: 5.0.0, kernel: 5.7.10-arch1-1, hostname: mspc
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-7-win \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-7-win/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-7-win/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-7-win/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=win,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-7-win/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/ovmf/x64/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/win_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-5.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off,kernel_irqchip=on,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu host,migratable=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=whatever,kvm=off \
-m 2048 \
-overcommit mem-lock=off \
-smp 8,sockets=8,cores=1,threads=1 \
-object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/7-win,share=yes,size=2147483648 \
-numa node,nodeid=0,cpus=0-7,memdev=ram-node0 \
-uuid c8efa194-52f8-4526-a0f8-29a254839b55 \
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=29,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot menu=off,strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-pci-bridge,id=pci.2,bus=pci.1,addr=0x0 \
-device pcie-root-port,port=0x11,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x6 \
-device pcie-root-port,port=0x17,chassis=9,id=pci.9,bus=pcie.0,addr=0x2.0x7 \
-device pcie-root-port,port=0x18,chassis=10,id=pci.10,bus=pcie.0,multifunction=on,addr=0x3 \
-device pcie-root-port,port=0x19,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x1 \
-device pcie-root-port,port=0x1a,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x2 \
-device pcie-root-port,port=0x8,chassis=13,id=pci.13,bus=pcie.0,multifunction=on,addr=0x1 \
-device pcie-root-port,port=0x9,chassis=14,id=pci.14,bus=pcie.0,addr=0x1.0x1 \
-device pcie-root-port,port=0xa,chassis=15,id=pci.15,bus=pcie.0,addr=0x1.0x2 \
-device pcie-root-port,port=0xb,chassis=16,id=pci.16,bus=pcie.0,addr=0x1.0x3 \
-device nec-usb-xhci,id=usb,bus=pci.7,addr=0x0 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.14,addr=0x0 \
-blockdev '{"driver":"host_device","filename":"/dev/zvol/ssd/windows","aio":"threads","node-name":"libvirt-3-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-3-format,id=virtio-disk0,bootindex=1,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/zvol/ssd/windows-ssdgames1","aio":"threads","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device virtio-blk-pci,bus=pci.9,addr=0x0,drive=libvirt-2-format,id=virtio-disk1,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/zvol/hdd/win-games1","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device virtio-blk-pci,bus=pci.13,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,write-cache=on \
-chardev socket,id=chr-vu-fs0,path=/var/lib/libvirt/qemu/domain-7-win/fs0-fs.sock \
-device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=viofstest,iommu_platform=on,ats=on,bus=pci.15,addr=0x0 \
-netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=34 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:fb:0c:28,bus=pci.10,addr=0x0 \
-chardev spicevmc,id=charchannel0,name=vdagent \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 \
-device virtio-keyboard-pci,id=input0,bus=pci.12,addr=0x0 \
-device virtio-tablet-pci,id=input1,bus=pci.8,addr=0x0 \
-device virtio-mouse-pci,id=input2,bus=pci.11,addr=0x0 \
-device ich9-intel-hda,id=sound0,bus=pci.2,addr=0x1 \
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
-device vfio-pci,host=0000:08:00.0,id=hostdev0,bus=pci.5,addr=0x0,rombar=1 \
-device vfio-pci,host=0000:08:00.1,id=hostdev1,bus=pci.6,addr=0x0,rombar=1 \
-device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
-object input-linux,id=kbd1,evdev=/dev/input/by-path/pci-0000:0a:00.3-usb-0:3:1.0-event-kbd,grab_all=on,repeat=on \
-object input-linux,id=mouse1,evdev=/dev/input/by-path/pci-0000:0a:00.3-usb-0:4:1.0-event-mouse \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2020-07-28 19:20:07.207+0000: Domain id=7 is tainted: high-privileges
2020-07-28 19:20:07.207+0000: Domain id=7 is tainted: custom-argv
2020-07-28 19:20:07.207+0000: Domain id=7 is tainted: host-cpu
<--- VIOFS DRIVER GETS LOADED HERE --->
2020-07-28T19:20:57.568089Z qemu-system-x86_64: Failed to read msg header. Read -1 instead of 12. Original request 1566376224.
2020-07-28T19:20:57.568120Z qemu-system-x86_64: Fail to update device iotlb
2020-07-28T19:20:57.568147Z qemu-system-x86_64: Failed to read msg header. Read 0 instead of 12. Original request 1566376528.
2020-07-28T19:20:57.568151Z qemu-system-x86_64: Fail to update device iotlb
2020-07-28T19:20:57.568153Z qemu-system-x86_64: Failed to set msg fds.
2020-07-28T19:20:57.568156Z qemu-system-x86_64: vhost_set_vring_call failed: Invalid argument (22)
2020-07-28T19:20:57.568160Z qemu-system-x86_64: Failed to set msg fds.
2020-07-28T19:20:57.568162Z qemu-system-x86_64: vhost_set_vring_call failed: Invalid argument (22)
2020-07-28T19:20:57.568296Z qemu-system-x86_64: Failed to read from slave.

If I try to start virtiofsd afterwards with the "-d" parameter, it prints:

[6470894613556] [ID: 00316440] virtio_session_mount: Waiting for vhost-user socket connection...
[6470894631129] [ID: 00316440] vhost socket accept: Bad file descriptor
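
For reference, a sketch of the manual invocation used for that test, assuming the same share and the socket path libvirt created above:

# run on the host; paths are taken from the libvirt log above
/usr/lib/qemu/virtiofsd -d \
    --socket-path=/var/lib/libvirt/qemu/domain-7-win/fs0-fs.sock \
    -o source=/viofstest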

Many thanks for any advice.

Marco98 · Jul 28 '20 19:07

I narrowed down the issue. I tried it on a fresh VM without any special configuration such as PCI passthrough. The issue occurs if "iommu_platform=on" is used. With this knowledge, I could also reproduce the issue in a Linux-based VM, so it's not a Windows-driver-only issue after all.
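
For reference, a minimal way to trigger it from the Linux guest — a sketch; "viofstest" is the tag from the vhost-user-fs-pci device line above, and mounting it forces the guest driver to talk to the device:

# inside the Linux guest
mount -t virtiofs viofstest /mnt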

Marco98 · Jul 31 '20 20:07

@hammerg @ybendito Please take a look at why "iommu_platform=on" is causing a failure.

@Marco98 Can you please provide info regarding the host? The CPU info will be the most interesting part.

Thanks, Yan.

YanVugenfirer · Jul 31 '20 21:07

CPU: AMD Ryzen 7 1700
Chipset: X370
Mainboard: AX370-Gaming 5 (FW: F50a)
Kernel: Linux 5.7.10
QEMU version: 5.0.0
OS: Arch Linux amd64
Relevant kernel cmdline: amd_iommu=on iommu=pt vfio-pci.ids=10de:13c0,10de:0fbb
kvm_amd module options: options kvm_amd npt=1 nested=1 avic=1
kvm module options: options kvm ignore_msrs=1
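
For completeness, the module options above live in a modprobe config file (path illustrative):

# /etc/modprobe.d/kvm.conf
options kvm_amd npt=1 nested=1 avic=1
options kvm ignore_msrs=1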

Just let me know if you need anything else. Many thanks.

Marco98 · Jul 31 '20 21:07

Thanks!

I think it is similar to https://bugzilla.redhat.com/show_bug.cgi?id=1842832

YanVugenfirer · Aug 01 '20 19:08

@Marco98 @YanVugenfirer Where are we at with this issue?

barolo · Oct 07 '20 01:10

@hammerg is an owner of the virtio-fs driver. He is on vacation now.

I suggest removing the "iommu_platform=on" setting from the command line for now.
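
Concretely, a sketch of the change based on the device line from the log above (ats=on is only meaningful together with the IOMMU, so presumably drop it as well):

# before (fails):
-device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=viofstest,iommu_platform=on,ats=on,bus=pci.15,addr=0x0
# after (workaround):
-device vhost-user-fs-pci,chardev=chr-vu-fs0,tag=viofstest,bus=pci.15,addr=0x0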

YanVugenfirer · Oct 07 '20 08:10

Hello All,

Please help us understand your use cases for virtio-fs, and thus help us make virtio-fs support better. Please participate in the discussion and add your use cases: https://github.com/virtio-win/kvm-guest-drivers-windows/discussions/726

Thanks a lot, Yan.

YanVugenfirer · Jan 31 '22 12:01

@Marco98 Is this bug solved for you in the latest release?

YanVugenfirer · Sep 05 '22 13:09

This seems like it might still be broken, though not because of the iommu_platform option. I'm not sure what is causing it, but the symptom is the same: as soon as the guest connects, virtiofsd exits.

$ virtiofsd --version
virtiofsd backend 1.6.0

$ qemu-system-x86_64 --version
QEMU emulator version 8.0.50 (v8.0.0-192-ga14b8206c5)

virtiofsd exits with the message:

ERROR virtiofsd] Waiting for daemon failed: HandleRequest(InvalidParam)

qemu also prints:

qemu-system-x86_64: Unexpected end-of-file before all data were read

c--- · Apr 26 '23 04:04

Hi @c---

At the moment, virtiofsd doesn't use the DMA API, which is required to work with an IOMMU.

Also, the original issue seems to be outdated.

viktor-prutyanov · Jun 01 '23 08:06