Doesn't prioritize requests and doesn't enforce fairness with LVM/device-mapper
Using a simple LVM setup with nothing fancy (just pvcreate, vgcreate, and a plain lvcreate; no encryption, no RAID, just simple normal LVM), idle-priority tasks use the disk evenly with normal-priority ones, high-priority tasks don't get any boost, etc.
At least with kernel 5.4...
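For reference, a minimal sketch of the kind of plain LVM stack described above (the device name /dev/sdb, volume names, and mount point are assumptions, not taken from the report):

```sh
# Plain LVM stack, nothing fancy: no encryption, no RAID.
pvcreate /dev/sdb                  # physical volume on a spare disk
vgcreate vg0 /dev/sdb              # volume group
lvcreate -n test -l 100%FREE vg0   # one logical volume using all the space
mkfs.ext4 /dev/vg0/test            # plain ext4 on top
mount /dev/vg0/test /mnt
```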
Thanks for reporting this problem. Unfortunately, given my very limited time, it may take a long time before I can reproduce and work on it. In contrast, since you have already reproduced the problem, it would be relatively easy for you to feed me the information I need. If you are willing to help, just do this:
- put in a script the commands with which you reproduce the problem, and show me the script
- build and install the current dev version of bfq (dev-bfq-on-5.6)
- reproduce the problem with the current dev version of bfq
- show me the numbers that tell you that bandwidth is not divided as expected
- after confirming the problem, download the S benchmark suite and use the utilities/tracing.sh script it contains to trace events (simple instructions are in the script itself)
- share the block trace that you get (see the blktrace sketch below)
I'll take over from there, and maybe I'll ask you for some more trace while fixing the problem.
Thanks, Paolo
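For the trace step, utilities/tracing.sh documents its own usage; as a generic alternative, a raw blktrace capture of the underlying device while the test runs might look like this (a sketch; the device name and the 30-second window are assumptions):

```sh
# Capture 30 seconds of block-layer events while the dd loops run,
# then render them to text for sharing.
blktrace -d /dev/sda -o bfq-issue -w 30
blkparse -i bfq-issue > bfq-issue.txt
```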
Hello.
Don't worry, we're both busy; I know how it is.
Now with kernel 5.11 it's better: it works perfectly for reads, at least O_DIRECT ones on ext4 with LVM + dm-crypt.
Strangely, writes don't: idle-priority writes consume the same bandwidth as normal-priority ones with O_DIRECT.
In one window I run: while ionice -c3 dd if=/dev/zero of=delete1 status=progress oflag=direct bs=1M count=1000; do true; done
and in another window, at the same time, I run: while dd if=/dev/zero of=delete2 status=progress oflag=direct bs=1M count=1000; do true; done
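Wrapped in a script, per the request above, the reproduction might look like this (a sketch; the file names and the pgrep-based sanity check are assumptions):

```sh
#!/bin/bash
# Idle-class writer in the background, normal-class writer in the foreground.
while ionice -c3 dd if=/dev/zero of=delete1 status=progress oflag=direct bs=1M count=1000
do true; done &

# Optional sanity check that the background dd really got the idle class:
#   ionice -p "$(pgrep -f 'dd.*delete1')"   # should print "idle"

while dd if=/dev/zero of=delete2 status=progress oflag=direct bs=1M count=1000
do true; done
```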
It's easy to reproduce: just install Ubuntu Desktop 20.04.3 with its 5.11 kernel, using encryption and LVM.
Thanks a lot for BFQ!!!
Now with kernel 5.15 I can't reproduce the issue anymore, thanks a lot!
Doh, I was running the command wrongly; the one for the idle class was actually running like this:
while ionice -c3 dd if=/dev/zero of=delete1 status=progress oflag=direct;
bs=1M count=1000; do true; done
So the block size was at the ugly slow default of 512 bytes. Thankfully I was doing some other benchmarks and noticed this.
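The effect of the stray semicolon is easy to confirm: the shell terminates the dd command at the ';', so bs=1M count=1000 become plain shell variable assignments that dd never sees, and dd falls back to 512-byte records. A quick check of that default:

```sh
# With no bs= argument, dd copies 512-byte records:
dd if=/dev/zero of=/dev/null count=4
# -> "4+0 records in / 4+0 records out / 2048 bytes copied"
#    (4 records x 512 bytes = 2048 bytes)
```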
Reopening, since unfortunately it's still reproducible on 5.15.