Pavel Emelyanov
Quoting an @avikivity e-mail from long ago: > In this scenario io_priority_class would eventually become an internal type and all quality-of-service APIs would concentrate around scheduling_group. This simplifies the API...
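For illustration, a minimal sketch of what that scheduling_group-centric usage could look like, built only from Seastar calls that already exist (`create_scheduling_group`, `with_scheduling_group`); the workload body is a placeholder:
```
#include <seastar/core/app-template.hh>
#include <seastar/core/scheduling.hh>
#include <seastar/core/future.hh>

// All QoS configuration hangs off the scheduling_group: CPU time is
// scheduled by its shares, and per the quote above, IO started from
// inside the group would be attributed to it as well, leaving no
// io_priority_class for the user to touch.
seastar::future<> run_workload() {
    return seastar::create_scheduling_group("reads", 100 /* shares */)
        .then([] (seastar::scheduling_group sg) {
            return seastar::with_scheduling_group(sg, [] {
                // CPU work and disk IO issued here are accounted
                // to the "reads" group
                return seastar::make_ready_future<>();
            });
        });
}

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, run_workload);
}
```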
As seen in scylladb/scylla#9505, on the RPC receive path there's a `pollable_fd_state_completion::complete_with` call that resolves a promise on which the reading fiber waits. This resolution queues the waiter at the tail of the run-queue...
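A toy demonstration of that ordering (a hypothetical standalone example, not the actual RPC path; assumes a recent Seastar with `seastar::later()`): completing the promise only queues the waiter, so a task that entered the run-queue earlier runs first:
```
#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>
#include <seastar/core/when_all.hh>
#include <seastar/util/later.hh>
#include <iostream>

// The reader "fiber" waits on a promise; completing the promise does
// not run the continuation immediately, it only queues it, so tasks
// already sitting in the run-queue execute first.
seastar::future<> demo() {
    seastar::promise<> readable;
    auto reader = readable.get_future().then([] {
        std::cout << "reader resumed\n";   // runs third
    });
    auto other = seastar::later().then([] {
        std::cout << "queued task\n";      // runs second
    });
    std::cout << "completing promise\n";   // runs first
    readable.set_value();                  // queues reader at the tail
    return seastar::when_all(std::move(reader), std::move(other)).discard_result();
}

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, demo);
}
```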
In scylla issue #9906 there's such an io-properties.yaml file
```
disks:
  - mountpoint: /var/lib/scylla
    read_iops: 32004
    read_bandwidth: 262607760
    write_iops: 34608
    write_bandwidth: 265524816
```
however, the drive was created with default...
This has a yet-to-be-studied effect on the users, which most likely run in some other sched group with its own shares/vruntime/etc. Affects #989
If running a rate-limited IO workload, achieving a high requests-per-second rate is impossible when sleeping between submissions -- the reactor consumes more time than should pass between submissions. Increasing the number...
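A back-of-the-envelope sketch in plain C++ (the 5us per-iteration overhead is a made-up number) of why per-request sleeps cap the achievable rate, and how submitting several requests per sleep amortizes the overhead:
```
#include <cstdio>

int main() {
    const double target_rps  = 100000.0;          // desired requests per second
    const double period_us   = 1e6 / target_rps;  // 10us between submissions
    const double overhead_us = 5.0;               // assumed per-iteration reactor cost
    // The sleep sits on top of the reactor's own per-iteration work, so
    // the real period exceeds 1/rps and the achieved rate falls short:
    std::printf("target %.0f rps -> achieved %.0f rps\n",
                target_rps, 1e6 / (period_us + overhead_us));
    // Submitting a batch of N requests per sleep amortizes the overhead:
    const double batch = 8;
    std::printf("batch of %.0f -> achieved %.0f rps\n",
                batch, batch * 1e6 / (batch * period_us + overhead_us));
    return 0;
}
```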
Running a rated read workload vs an unbound write workload produces a result like this:
```
   throughput(kbs)  iops    lat95     queued    executed
                            (microseconds)
w: 226349           3536    118357.4  26726.8   397.4
r: 500465           125116  2134.4    1182.7    ...
```
Right now scylla (and io-tester too) configures different "workloads" with equal shares set on both -- the sched group and the IO class. This, however, sometimes results in a hard-to-predict interaction. In...
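Schematically, the double configuration looks like this (`create_scheduling_group` is real Seastar; the IO-class half is stubbed with a hypothetical `make_io_class` helper, since the real call differs across Seastar versions):
```
#include <seastar/core/app-template.hh>
#include <seastar/core/scheduling.hh>
#include <seastar/core/future.hh>

// Hypothetical stand-in for IO-class creation: the real call differs
// across Seastar versions, so it is stubbed out here.
struct io_class { unsigned shares; };
static io_class make_io_class(unsigned shares) { return io_class{shares}; }

// One "workload", with the same share count pushed into two independent
// schedulers (CPU and IO); how the two weights combine is the
// hard-to-predict part.
seastar::future<> configure_workload() {
    const unsigned shares = 100;
    return seastar::create_scheduling_group("query", shares)
        .then([shares] (seastar::scheduling_group sg) {
            auto pc = make_io_class(shares);  // equal shares on the IO side
            (void)sg; (void)pc;
            return seastar::make_ready_future<>();
        });
}

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, configure_workload);
}
```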
E.g. testing `io_tester` with this config:
```
- data_size: 8192MB
  name: reads
  shard_info:
    parallelism: 16
    reqsize: 1kB
    rps: 251
    shares: 100000
  shards: all
  type: randread
```
results in this...
As seen in [several](https://github.com/scylladb/scylla-enterprise/issues/2312) [issues](https://github.com/scylladb/scylla-enterprise/issues/2313) already, gossiper manages lots of data. The [fattest](https://github.com/scylladb/scylla-enterprise/issues/2312#issuecomment-1184616573) exchanged state is cache hitrates. Tossing them through gossiper affects the gossiper itself in a bad way....
Pulled from criu.org/Todo. When dumping mountpoints we explicitly check which filesystem is mounted. The thing is -- not all filesystems can just be ignored on dump. E.g. a FUSE mount involves a...