
Single-replica shard restart updates remote_servers and leads to undesirable behaviour

Open · cpg314 opened this issue 11 months ago · 2 comments

In a deployment with multiple shards, each with a single replica, I seem to have observed the following:

  1. One of the shards has its pod restarting (in my case because ClickHouse segfaults).
  2. The operator changes the remote_servers configuration to remove that server. [*]
  3. Until the shard is re-added, INSERTs on distributed tables will distribute the data across shards differently (modulo the number of remaining shards), which can be undesirable when trying to enforce shard-locality of data for distributed joins. In the worst case, this can lead to distributed queries unexpectedly giving different results than local ones (a small worked example follows below).
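As a small worked example (assuming equal shard weights and a hash-based sharding key, which is an assumption about the setup rather than something stated above): with three shards listed in remote_servers, a row whose sharding expression evaluates to 10 is routed to shard 10 % 3 = 1; with one shard removed, the same row is routed to shard 10 % 2 = 0. Rows intended to be co-located with previously inserted data can therefore land on a different shard.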

I have described the issue in more detail in https://github.com/ClickHouse/ClickHouse/issues/60219, where people suggested this was more likely an issue with the operator.

The workaround I described there is to override the remote_servers configuration so that the shards are not removed. Instead of inserting into the "wrong" shard, the INSERT will fail.
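For concreteness, here is a minimal sketch of what such an override could look like, assuming the operator's spec.configuration.files mechanism and its default chi-{installation}-{cluster}-{shard}-{replica} service naming; the installation name my-chi, the cluster name main, the file name, and the replace="replace" attribute are all illustrative, and the merge order relative to chop-generated-remote_servers.xml should be verified against the actual deployment:

```yaml
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "my-chi"                      # hypothetical installation name
spec:
  configuration:
    clusters:
      - name: "main"                  # hypothetical cluster name
        layout:
          shardsCount: 2
          replicasCount: 1
    files:
      # Pin remote_servers so that a shard whose pod is not ready is never
      # dropped from the cluster definition. Host names follow the operator's
      # chi-{installation}-{cluster}-{shard}-{replica} service convention and
      # must match the actual deployment.
      config.d/zz-fixed-remote_servers.xml: |
        <clickhouse>
          <remote_servers>
            <main replace="replace">
              <shard>
                <replica>
                  <host>chi-my-chi-main-0-0</host>
                  <port>9000</port>
                </replica>
              </shard>
              <shard>
                <replica>
                  <host>chi-my-chi-main-1-0</host>
                  <port>9000</port>
                </replica>
              </shard>
            </main>
          </remote_servers>
        </clickhouse>
```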

[*] I have not been able to observe the /etc/clickhouse-server/config.d/chop-generated-remote_servers.xml configuration changing when I manually force a pod restart by killing the process, but the fact that the workaround seems to solve the issue hints that this is what happens.

Is the operator indeed removing shards whose replicas all have pods in a non-ready state? If so, it would probably be a good idea to make this behavior optional: as long as the pod exists (even if it is currently restarting), the shard should not be removed from remote_servers.

cpg314 · Mar 05 '24 15:03

@cpg314, this was done intentionally. If a pod is being recreated (which would not be caused by ClickHouse restarts, though), it is not resolvable in DNS. ClickHouse fails to work in this case: distributed queries fail, even startup may fail, and skip_unavailable_shards does not help.

On the other hand, we are currently using Services for cluster configuration, so it is probably not as visible anymore. Adding an option is a good idea, thanks.

alex-zaitsev · Mar 10 '24 08:03

Thanks for the details! Yes, in my use case, where I want to guarantee that data ends up on the right shard for distributed JOINs, having queries fail is better than having them succeed on the remaining shards. In my client-side handling, I just retry these queries until the shard is back online.

cpg314 · Mar 10 '24 11:03