Austen
> @kvoli @nvanbenschoten I'm also up for helping with a change that meaningfully changes the frequency of gossiping "gossip-clients" keys given that all they're used for is the periodic logging...
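For context, a minimal sketch of the idea, assuming a hypothetical `gossipClients()` publisher and an illustrative interval (this is not the actual gossip code): if the "gossip-clients" info is only consumed by periodic logging, it can be re-gossiped on a much slower ticker.

```go
package main

import (
	"fmt"
	"time"
)

// gossipClients stands in for whatever publishes the "gossip-clients" key.
// It is a placeholder for illustration, not a real CockroachDB function.
func gossipClients() {
	fmt.Println("gossiping client connections at", time.Now().Format(time.RFC3339))
}

func main() {
	// The interval is an assumption: a longer period is acceptable when the
	// only consumer of the key is periodic logging.
	ticker := time.NewTicker(2 * time.Minute)
	defer ticker.Stop()

	gossipClients() // publish once at startup
	for range ticker.C {
		gossipClients()
	}
}
```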
> Testing on that 95 node cluster that had been fairly bogged down by gossip, deploying the change made a huge difference in the `gossip_bytes_sent/received` and `gossip_infos_sent/received` metrics as well...
> Do we know where this function is being called from? Is it `replicateQueue.process` -> `replicateQueue.shedLease`? If so, could this have been caused by 79886bb? We now call `replicateQueue.shedLease` even...
TYFTR bors r=andrewbaptist
I'll take a look at this.
The three QPS load splits are due to the workload not being perfectly sequential between concurrent kv worker threads with different request latencies. The split key has around 100 samples in...
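To illustrate the mechanism, here's a small standalone simulation (assumed names and numbers, not the workload or split code): concurrent workers draw keys from one shared sequential counter, their differing per-request latencies reorder completions, and ~100 sampled keys then span an interval wide enough to pick a split point from.

```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	var next atomic.Int64 // shared sequential key generator
	var mu sync.Mutex
	var served []int64 // keys in the order they finished being served

	var wg sync.WaitGroup
	for w := 0; w < 8; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 200; i++ {
				key := next.Add(1)
				// Different per-request latencies reorder completion, so the
				// served order is only approximately sequential.
				time.Sleep(time.Duration(rand.Intn(500)) * time.Microsecond)
				mu.Lock()
				served = append(served, key)
				mu.Unlock()
			}
		}()
	}
	wg.Wait()

	// Take ~100 of the most recent completions, as a sampler might.
	sample := append([]int64(nil), served[len(served)-100:]...)
	sort.Slice(sample, func(i, j int) bool { return sample[i] < sample[j] })
	fmt.Printf("sample spans [%d, %d]; candidate split key ~ %d\n",
		sample[0], sample[len(sample)-1], sample[len(sample)/2])
}
```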
I'm confused why this series of tests started failing recently (
This hasn't reproduced in over 2 hours. I'll try a different linked test.
> It probably published liveness a few ms earlier, but the gossip delay could explain this miss. It should be consistent since it is scanning the liveness range, as opposed...
This appears to be a testing issue afaict, where there's a race between the test cluster starting the upgrade and a node joining the cluster.
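A generic illustration of the race and one way to avoid it (hypothetical names like `joinCluster` and `startUpgrade`, not the actual test harness): gate the upgrade step on every node having joined.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// joinCluster simulates a node joining after some startup delay.
func joinCluster(node int, joined *sync.WaitGroup) {
	defer joined.Done()
	time.Sleep(time.Duration(node*10) * time.Millisecond)
	fmt.Printf("n%d joined\n", node)
}

// startUpgrade simulates the test kicking off the version upgrade.
func startUpgrade() {
	fmt.Println("starting upgrade")
}

func main() {
	const nodes = 3
	var joined sync.WaitGroup
	joined.Add(nodes)
	for n := 1; n <= nodes; n++ {
		go joinCluster(n, &joined)
	}
	// Without this wait, the upgrade can race with a node that is still joining.
	joined.Wait()
	startUpgrade()
}
```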