MigeljanImeri
> I think I have gone through the discussion in #16591 thoroughly enough and I'm not seeing any attempt to tune the queues right. NVMe devices differ from SATA, they can...
> @MigeljanImeri thanks for the explanation; also, huge thanks for taking on perf tuning in the codebase (perhaps should have opened with that, sorry :D) > > how does the...
> > All the limits are used under the lock; the lock itself is the problem > > Haven't looked at the code honestly, since I covered everything I needed...
Changed vdev property name `queue_io` -> `scheduler`.
> > Currently, any queuing would be left to the device itself to do. > > That's fine if you know that you can submit all requests you may have...
> @MigeljanImeri if you can address my last round of feedback we can get this finally merged. Thanks for your patience! Just saw this, I will try to get to...
Sorry it took so long to get back to this. I have added inheritance to this property using `vdev_prop_get_inherited` to cache it in the appropriate places. Currently I check if...
> I guess it might make sense for some very fast NVMe devices with "unlimited" queue depth and I/O rate, since others may suffer from I/O aggregation lost as a side...
> I am uncomfortable with this change. > > I agree that the `vq_lock` runs white-hot when the disks underneath are very very fast. I've spent many months trying to...
> My preference would be for a vdev property, since you could set it NVMe vdevs, but not spinning disks. This assumes we're 100% comfortable with this feature though. >...