public-cloud-roadmap
Block Storage - High Performance
As a user, I want high-performance Block Storage volumes available for my Public Cloud instances.
Would you have an ETA for high-performance Block Storage?
What's the maximum IOPS for these new Block Storage volumes?
Seen on GRA9
What is the official status on this? Since it's available on GRA9, is it in beta?
Edit: it also now shows up in the OVHcloud pricing.
I decided to give the high-speed-gen2 volume class a spin on a b2-15 instance in GRA9, and I consistently get around 6K IOPS, far from the 20K maximum shown in the pricing above.
$ fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=4 --filename=/dev/sdc
rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.12
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=23.4MiB/s][w=6000 IOPS][eta 00m:00s]
rand-write: (groupid=0, jobs=4): err= 0: pid=4568: Mon Sep 13 16:28:59 2021
write: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(2815MiB/120023msec); 0 zone resets
slat (usec): min=4, max=4546, avg=21.18, stdev=13.60
clat (usec): min=1097, max=42096, avg=21286.16, stdev=937.63
lat (usec): min=1117, max=42108, avg=21308.87, stdev=937.66
clat percentiles (usec):
| 1.00th=[20055], 5.00th=[20317], 10.00th=[20579], 20.00th=[20841],
| 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21365],
| 70.00th=[21365], 80.00th=[21627], 90.00th=[22152], 95.00th=[22152],
| 99.00th=[22414], 99.50th=[22676], 99.90th=[27395], 99.95th=[32113],
| 99.99th=[38011]
bw ( KiB/s): min= 5856, max= 7200, per=25.00%, avg=6004.31, stdev=84.05, samples=960
iops : min= 1464, max= 1800, avg=1501.06, stdev=21.01, samples=960
lat (msec) : 2=0.01%, 4=0.06%, 10=0.06%, 20=0.52%, 50=99.35%
cpu : usr=2.25%, sys=4.38%, ctx=629120, majf=0, minf=35
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,720722,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=2815MiB (2952MB), run=120023-120023msec
Disk stats (read/write):
sdc: ios=65/719924, merge=0/0, ticks=56/15320424, in_queue=15293352, util=99.97%
$ fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=4 --filename=/dev/sdc
rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.12
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=23.4MiB/s][w=6002 IOPS][eta 00m:00s]
rand-write: (groupid=0, jobs=4): err= 0: pid=4618: Mon Sep 13 16:47:21 2021
write: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(2815MiB/120023msec); 0 zone resets
slat (usec): min=3, max=52519, avg=113.70, stdev=726.26
clat (usec): min=747, max=73389, avg=21194.12, stdev=2392.99
lat (usec): min=762, max=73412, avg=21309.34, stdev=2416.51
clat percentiles (usec):
| 1.00th=[14615], 5.00th=[16712], 10.00th=[20317], 20.00th=[20841],
| 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365],
| 70.00th=[21365], 80.00th=[21890], 90.00th=[22152], 95.00th=[22414],
| 99.00th=[27657], 99.50th=[32113], 99.90th=[38011], 99.95th=[55313],
| 99.99th=[66847]
bw ( KiB/s): min= 5192, max= 7424, per=25.00%, avg=6003.94, stdev=191.74, samples=960
iops : min= 1298, max= 1856, avg=1500.96, stdev=47.93, samples=960
lat (usec) : 750=0.01%, 1000=0.01%
lat (msec) : 2=0.02%, 4=0.07%, 10=0.20%, 20=5.92%, 50=93.73%
lat (msec) : 100=0.06%
cpu : usr=2.08%, sys=3.75%, ctx=534121, majf=0, minf=36
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,720717,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=2815MiB (2952MB), run=120023-120023msec
Disk stats (read/write):
sdc: ios=0/719927, merge=0/0, ticks=0/14136052, in_queue=14121700, util=100.00%
$ fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite --invalidate=1 --bsrange=4k:4k,4k:4k --size=512m --runtime=120 --time_based --do_verify=1 --direct=1 --group_reporting --numjobs=4 --filename=/dev/sdc
rand-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.12
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=23.5MiB/s][w=6004 IOPS][eta 00m:00s]
rand-write: (groupid=0, jobs=4): err= 0: pid=4646: Mon Sep 13 16:54:39 2021
write: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(2815MiB/120022msec); 0 zone resets
slat (usec): min=4, max=54046, avg=143.77, stdev=819.75
clat (usec): min=1991, max=76944, avg=21163.85, stdev=2797.59
lat (usec): min=2006, max=76965, avg=21309.17, stdev=2825.26
clat percentiles (usec):
| 1.00th=[11207], 5.00th=[16188], 10.00th=[20317], 20.00th=[20841],
| 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365],
| 70.00th=[21365], 80.00th=[21627], 90.00th=[22152], 95.00th=[22414],
| 99.00th=[31327], 99.50th=[32375], 99.90th=[45351], 99.95th=[60031],
| 99.99th=[69731]
bw ( KiB/s): min= 4856, max= 7232, per=25.00%, avg=6004.23, stdev=255.53, samples=960
iops : min= 1214, max= 1808, avg=1501.04, stdev=63.88, samples=960
lat (msec) : 2=0.01%, 4=0.11%, 10=0.22%, 20=7.64%, 50=91.95%
lat (msec) : 100=0.08%
cpu : usr=2.01%, sys=4.10%, ctx=523546, majf=0, minf=38
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued rwts: total=0,720719,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=32
Run status group 0 (all jobs):
WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=2815MiB (2952MB), run=120022-120022msec
Disk stats (read/write):
sdc: ios=0/719921, merge=0/0, ticks=0/13784141, in_queue=13763700, util=100.00%
What is the official status on this? Since it's available on GRA9, is it in beta?
Edit: it also now shows up in the OVHcloud pricing.
Yes, it seems to be on the pricing page. Is it stable?
Hi all, here is the official status on this. A new Block Storage volume type is available in production in the GRA9 region: high-speed-gen2. Performance is allocated as follows: 30 IOPS per GB, within the limit of 20,000 IOPS per volume; 0.5 MB/s per GB, within the limit of 1 GB/s per volume. The price is €0.08 excl. tax per GB per month.
We understand many customers want to keep the old volume type available, as they want average performance with small volumes. We currently do not plan to replace high-speed with high-speed-gen2.
We are working on a website update to make this crystal clear for everyone.
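For reference, here is a quick back-of-the-envelope check of that allocation rule as a small shell snippet (a sketch only; the figures come from the announcement above and the volume sizes are illustrative):
# Expected limits for a few illustrative volume sizes, using the rule above:
# 30 IOPS/GB capped at 20,000 IOPS, and 0.5 MB/s per GB capped at 1 GB/s.
for size_gb in 200 500 1000; do
  iops=$(( size_gb * 30 )); [ "$iops" -gt 20000 ] && iops=20000
  mbps=$(( size_gb / 2 ));  [ "$mbps" -gt 1000 ]  && mbps=1000
  echo "${size_gb} GB -> ${iops} IOPS, ${mbps} MB/s"
done
So a 200 GB volume tops out around 6,000 IOPS, and the 20,000 IOPS cap is only reached at roughly 667 GB.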
That is great! Would you have an ETA for other zones?
Thank you for the update. The test I published above was on 200 GB volumes, so it now makes sense that I only got up to 6K IOPS (30 × 200 = 6K). I'm also interested in knowing whether those volumes are coming to other regions.
When will it be available in all zones? I don't see it available in BHS.
@JacquesMrz:
(...) Performance is allocated as follows: 30 IOPS per GB, within the limit of 20,000 IOPS per volume; 0.5 MB/s per GB, within the limit of 1 GB/s per volume. The price is €0.08 excl. tax per GB per month. (...) We are working on a website update to make this crystal clear for everyone.
I cannot find this information on the website, nor can I find the same for the other volume types. Where can I find it?
thx
@JacquesMrz any information on when it will be available in GRA7 or SBG5? Kubernetes clusters are unable to benefit from the new storage at the moment, as they can't be spawned in GRA9.
This issue was moved to "Released" by @JacquesMrz, but it does not appear to be available in all locations. At least I can't find it on DE1:
+--------------------------------------+------------+-----------+
| ID | Name | Is Public |
+--------------------------------------+------------+-----------+
| 00fc334f-c1de-4705-bb88-0a58271b38c8 | classic | True |
| 8ab444e9-7206-4801-91bf-6e732e697de6 | high-speed | True |
+--------------------------------------+------------+-----------+
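For anyone who wants to check this per region themselves, something like the following should work (a sketch: it assumes the OpenStack CLI is configured with your OVHcloud OpenRC credentials, and the region names are just examples):
# List the volume types exposed in a few regions (region names illustrative).
for region in GRA9 SBG7 DE1 BHS5; do
  echo "== ${region} =="
  OS_REGION_NAME="${region}" openstack volume type list
done
high-speed-gen2 should appear in the Name column for the regions where it has been rolled out.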
What is the availability status of high-speed-gen2 per location?
On our end, it is available in GRA9 and SBG7 only.
Now that Managed Kubernetes is available in GRA9, will we see this Block Storage option available in Kubernetes soon? Thanks
On the Italian-language website, the IOPS details are wrong. It says: "330 IOPS/GB nel limite di 20.000 IOPS per volume" ("330 IOPS/GB within the limit of 20,000 IOPS per volume"), but it should be: "30 IOPS/GB nel limite di 20.000 IOPS per volume". I think it's just a typo; could you take care of it? https://www.ovhcloud.com/it/public-cloud/block-storage/
Hi @frabe1579, thanks for spotting that; indeed, the right value is 30. I have raised this with our marketing colleagues so that it gets corrected quickly.
Thank you @mhurtrel .
Hi @mhurtrel
Any plan for higher-IOPS volumes? The 20K maximum plus the IOPS/GB restriction are quite limiting for databases and more general data-intensive usage.
Hi @cambierr, copy that. We are currently working on an NVMe-over-Fabrics Block Storage solution, which should answer your needs. If everything goes well, the beta will start in the coming months.
S3 High Perf will be available at GRA next week!
Hi @JacquesMrz,
Do you have an ETA for when it will be available in SBG5? Also, is it possible to keep this table updated? It's very useful for following what is available where... but it seems it was not updated :(
Thanks
Still not available outside of GRA, and still no new information? Is this still even in the plans?
You can follow availability here: https://www.ovhcloud.com/en-ie/public-cloud/regions-availability/ When the service is deployed in a new region, we update it. The next one to be opened is WAW1.
Hi, it looks like "High speed Gen2 block storage" is available in Gravelines, fine, but is it available inside Managed Kubernetes?
kubectl get storageclass
only reports csi-cinder-high-speed,
and it's displayed as "Gen 1" in the OVH web UI.
Is there any way to use gen2 in a Kubernetes persistent volume?
@revolunet If the region supports it, you can add a storage class yourself:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-cinder-high-speed-gen2
parameters:
availability: nova
fsType: ext4
type: high-speed-gen2
allowVolumeExpansion: true
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: Immediate
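Once that StorageClass exists, a PVC can reference it by name; here is a minimal sketch (the claim name and size below are illustrative, and IOPS will scale with the requested capacity):
# Create a PVC backed by the gen2 class defined above (name/size illustrative).
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-gen2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-cinder-high-speed-gen2
  resources:
    requests:
      storage: 300Gi
EOF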
Thanks @rgdev! For some reason, when provisioning a 10Gi PVC, it looks very slow.
e.g.: time dd if=/dev/zero of=/data/test2.img bs=1G count=1
1+0 records in
1+0 records out
real 2m 57.99s
user 0m 0.00s
sys 0m 3.10s
IOPS scales with capacity; try provisioning a 100Gi or 300Gi volume to see how it goes.
Thanks; some tests:
PVC 300 GB high-speed-gen2:
# time dd if=/dev/zero of=/data/test2.img bs=1G count=1
real 0m 7.10s
sys 0m 2.83s
PVC 1000 GB high-speed-gen2:
# time dd if=/dev/zero of=/data/test2.img bs=1G count=1
real 0m 4.94s
sys 0m 2.47s
(tested on b2-7 instances)
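Note that dd with bs=1G mostly measures sequential throughput rather than IOPS; to compare against the IOPS allocation, a random-write fio run against the mounted volume is closer to the tests earlier in the thread (a sketch: it assumes fio is installed in the pod or instance, and that /data is the volume mount point):
# Random 4 KiB writes against a test file on the mounted PV
# (assumes /data is the mount point; adjust size/runtime as needed).
fio --name=rand-write --ioengine=libaio --iodepth=32 --rw=randwrite \
    --bs=4k --size=512m --runtime=60 --time_based --direct=1 \
    --group_reporting --numjobs=4 --filename=/data/fio-test.bin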
When will it be available in the US?