kvm-guest-drivers-windows
[virtio-scsi] Install Windows 2019 fail, the max io size is too large
Describe the bug These are the Linux block-queue attributes for the same vhost-user-scsi-pci device:
# grep "" /sys/block/sda/queue/*
/sys/block/sda/queue/add_random:0
/sys/block/sda/queue/chunk_sectors:0
/sys/block/sda/queue/dax:0
/sys/block/sda/queue/discard_granularity:0
/sys/block/sda/queue/discard_max_bytes:0
/sys/block/sda/queue/discard_max_hw_bytes:0
/sys/block/sda/queue/discard_zeroes_data:0
/sys/block/sda/queue/dma_alignment:3
/sys/block/sda/queue/fua:0
/sys/block/sda/queue/hw_sector_size:512
/sys/block/sda/queue/io_poll:0
/sys/block/sda/queue/io_poll_delay:-1
/sys/block/sda/queue/io_timeout:30000
/sys/block/sda/queue/iostats:1
/sys/block/sda/queue/logical_block_size:512
/sys/block/sda/queue/max_discard_segments:1
/sys/block/sda/queue/max_hw_sectors_kb:32767
/sys/block/sda/queue/max_integrity_segments:0
/sys/block/sda/queue/max_sectors_kb:512
/sys/block/sda/queue/max_segment_size:65536
/sys/block/sda/queue/max_segments:126
/sys/block/sda/queue/minimum_io_size:4096
/sys/block/sda/queue/nomerges:0
/sys/block/sda/queue/nr_requests:128
/sys/block/sda/queue/nr_zones:0
/sys/block/sda/queue/optimal_io_size:0
/sys/block/sda/queue/physical_block_size:512
/sys/block/sda/queue/read_ahead_kb:128
/sys/block/sda/queue/rotational:0
/sys/block/sda/queue/rq_affinity:1
/sys/block/sda/queue/scheduler:[none] mq-deadline kyber bfq
/sys/block/sda/queue/stable_writes:0
/sys/block/sda/queue/virt_boundary_mask:0
/sys/block/sda/queue/wbt_lat_usec:2000
/sys/block/sda/queue/write_cache:write through
/sys/block/sda/queue/write_same_max_bytes:0
/sys/block/sda/queue/write_zeroes_max_bytes:33553920
/sys/block/sda/queue/zone_append_max_bytes:0
/sys/block/sda/queue/zone_write_granularity:0
/sys/block/sda/queue/zoned:none
The maximum IO size is max_sectors_kb = 512 KiB, which is set via the INQUIRY Block Limits VPD page.
However, virtio-win-0.1.221 sends 1 MiB IOs, which violates the negotiated SCSI limits.
virtio-win-0.1.185 works well.
To Reproduce qemu + spdk + vhost_scsi; set the maximum SCSI IO size to 512 KiB.
Expected behavior The maximum IO size is 512 KiB, as configured.
Screenshots
Host:
- All qemu versions.
VM:
- windows:windows_server_2019_x64,
- virtio:virtio-win-0.1.221.iso
- Device:
-device vhost-user-scsi-pci,chardev=my-vhost-scsi-1,id=my-vhost-scsi-1,bus=pci.1,addr=0x2,bootindex=2,num_queues=2
The max_tx_length should be taken from the INQUIRY Block Limits (0xB0) VPD page.
Is this the issue here?
487 ConfigInfo->MaximumTransferLength = ConfigInfo->NumberOfPhysicalBreaks * PAGE_SIZE;
488 ConfigInfo->NumberOfPhysicalBreaks++;
489 adaptExt->max_tx_length = ConfigInfo->MaximumTransferLength;
I don't know how to debug the Windows driver, or where its log output goes. Thanks.
@x2c3z4 What is the qemu command line? Did you try using max_sectors to limit the number of physical breaks?
Best, Vadim.
Here is device:
-device vhost-user-scsi-pci,chardev=my-vhost-scsi-1,id=my-vhost-scsi-1,bus=pci.1,addr=0x2,bootindex=2,num_queues=2
@vrozenfe max_sectors default is 0xFFFF here.
@x2c3z4 Can you try setting max_sectors to 32 or 64 in the command line and see if it helps to solve the problem?
Setting max_sectors to 64 works well.
@vrozenfe
-device vhost-user-scsi-pci,chardev=my-vhost-scsi-0,id=my-vhost-scsi-0,bus=pci.1,addr=0x1,bootindex=1,num_queues=2,max_sectors=64
@x2c3z4
Then keep it at 64. Your physical backend has a 126-segment limit: /sys/block/sda/queue/max_segments:126
So technically you can increase the max_sectors value up to 125, but I'm not sure it will give any performance improvement over 64.
Best, Vadim.
@vrozenfe
Then keep it as 64. Your physical backend has 126 segments limit.
The SPDK vhost-user-scsi/blk backend sets the 126-segment limit. Don't you think this is a bug?
I think the maximum IO size should be the minimum of max_sectors * 512 and the SCSI INQUIRY Block Limits maximum transfer size. That would give better compatibility, with no need to special-case the QEMU startup parameters.
@x2c3z4 Yeah. Ideally qemu should be able to propagate the backend's limit to the guest. I remember seeing some qemu or libvirt patches that intended to do this automatically, but I have no idea whether they were merged upstream.
@vrozenfe I don't think so. Why not keep it consistent with the Linux kernel driver and comply with the SCSI protocol?
Let's check the spec:
5.6.4 Device configuration layout
All fields of this configuration are always available.
struct virtio_scsi_config {
le32 num_queues;
le32 seg_max;
le32 max_sectors;
le32 cmd_per_lun;
le32 event_info_size;
le32 sense_size;
le32 cdb_size;
le16 max_channel;
le16 max_target;
le32 max_lun;
};
num_queues
is the total number of request virtqueues exposed by the device. The driver MAY use only one request queue, or it can use more to achieve better performance.
seg_max
is the maximum number of segments that can be in a command. A bidirectional command can include seg_max input segments and seg_max output segments.
max_sectors
is a hint to the driver about the maximum transfer size to use.
max_sectors is just a hint, not a MUST.
@x2c3z4
I don't insist :)
After all, for Windows guests if max_sectors is equal 0xFFFF or 0 then the number of maximum physical breaks can be adjusted by the Registry parameter "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vioscsi\Parameters\Device\PhysicalBreaks"
The driver by itself has no clue about the backend's capabilities, and implementing some sort of request-splitting mechanism to make the driver capable of handling arbitrary transfer limits could be a kind of nightmare.