Gleb Natapov
The problem is not decommission specific. All topology commands that go through Raft will get stuck without a quorum.
> This is why we need to store the node UP/DOWN status in Raft. If we store it we immediately know we have no majority and can reject the operation...
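A minimal sketch of that idea (the names and structure below are mine, not Scylla's actual code): if each node's UP/DOWN state were part of the Raft-replicated topology state, the coordinator could count live nodes against the quorum size and reject the command up front instead of hanging on Raft.

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <unordered_map>

enum class node_state { up, down };

struct topology_state {
    // node id -> last known liveness, assumed here to be replicated via Raft.
    std::unordered_map<uint64_t, node_state> nodes;
};

// Hypothetical pre-check for a topology command (decommission, removenode, ...).
void check_raft_quorum_for_topology_command(const topology_state& t) {
    std::size_t up = 0;
    for (const auto& [id, st] : t.nodes) {
        if (st == node_state::up) {
            ++up;
        }
    }
    const std::size_t quorum = t.nodes.size() / 2 + 1;
    if (up < quorum) {
        // Reject immediately with a clear error instead of blocking on Raft forever.
        throw std::runtime_error("topology command rejected: no Raft quorum (" +
                                 std::to_string(up) + "/" + std::to_string(t.nodes.size()) +
                                 " nodes up)");
    }
}
```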
> A question about `abort_source`.
>
> `abort_source` used to communicate aborted state through `abort_requested_exception` -- the `abort_source::request_abort` creates an instance of it with no arguments and `abort_source` clients get...
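For context, here is a minimal sketch of the `abort_source` pattern under discussion, using the public Seastar API (`seastar::abort_source`); the surrounding loop and function names are illustrative only, and header locations may differ between Seastar versions.

```cpp
#include <seastar/core/abort_source.hh>
#include <seastar/core/loop.hh>
#include <seastar/core/sleep.hh>

using namespace std::chrono_literals;

// A cooperative fiber that periodically checks the abort_source.
seastar::future<> long_running_operation(seastar::abort_source& as) {
    return seastar::repeat([&as] {
        // Throws seastar::abort_requested_exception once request_abort() has
        // been called on `as` -- this is how the aborted state is communicated.
        as.check();
        return seastar::sleep(100ms).then([] {
            return seastar::stop_iteration::no;
        });
    });
}

// Elsewhere (e.g. during shutdown):
//   as.request_abort(); // waiters observing `as` now see abort_requested_exception
```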
Of course, the shutdown notification is delivered asynchronously with respect to the test code.
> > @kostja - what's the next step here?
>
> @michoecho already wrote a three-line patch to this in February (see above). Any reason not to send it as...
Getting stuck after shutting down the whole cluster and restarting one node with a different number of shards is expected (it tries to contact a quorum to update its config). Segfault...
So I do not see any thread that crashed here. They are all just sleeping (which is expected for the described scenario). Do we have a mechanism that generates core...
I am confused now. The issue states:

```
Looks like coredump happened on node1 when it was started with new number of shards after whole cluster was stopped. Locally ,...
```
> Gleb pointed out in the past:
>
> ```
> if (kind == error_kind::DISCONNECT && _block_for == _target_count_for_cl) {
> // if the error is because of a connection...
> ```
> > > Gleb pointed out in the past:
> > > ```
> > > if (kind == error_kind::DISCONNECT && _block_for == _target_count_for_cl) {
> > > // if...
> > > ```
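To make the quoted condition easier to follow, here is a hedged, stand-alone illustration (not the actual storage_proxy code): when the consistency level requires a response from every targeted replica (`_block_for == _target_count_for_cl`), a single DISCONNECT already makes the CL unreachable, so the write can be failed immediately rather than waiting for a timeout.

```cpp
#include <cstddef>

enum class error_kind { TIMEOUT, DISCONNECT, FAILURE };

// Simplified stand-in for the write response handler fields referenced above.
struct write_response_handler_sketch {
    std::size_t _target_count_for_cl; // replicas this write was sent to
    std::size_t _block_for;           // responses required to satisfy the CL

    // True when a single error of this kind already makes the CL unreachable,
    // so the request should fail fast instead of waiting for the timeout.
    bool should_fail_fast(error_kind kind) const {
        return kind == error_kind::DISCONNECT && _block_for == _target_count_for_cl;
    }
};
```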