Duo Zhang
I think the solution is to increase the timeout so we do not time out here; once we time out, just breaking and calling the cleanup of the replication quota may introduce a data race...
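To make the concern concrete, here is a hypothetical sketch of the race; none of these names (`pendingOps`, `quotaUsage`) come from the actual HBase code. If we break out of the wait on timeout and clean up anyway, an in-flight operation can still touch the quota state afterwards:

```java
// Hypothetical sketch of the race being described; not the real HBase code.
import java.util.HashSet;
import java.util.Set;

class QuotaShutdownRace {
  private final Set<Long> pendingOps = new HashSet<>();
  private final Object quotaLock = new Object();
  private long quotaUsage; // stands in for the replication quota state

  // Worker side: an in-flight operation still updates the quota.
  void finishOp(long id, long size) {
    synchronized (quotaLock) {
      quotaUsage -= size; // races with shutdown() if we gave up waiting
    }
    synchronized (pendingOps) {
      pendingOps.remove(id);
      pendingOps.notifyAll();
    }
  }

  // Shutdown side: breaking out on timeout and cleaning up anyway lets
  // finishOp() above run against already-cleaned-up quota state.
  void shutdown(long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    synchronized (pendingOps) {
      while (!pendingOps.isEmpty()) {
        long remaining = deadline - System.currentTimeMillis();
        if (remaining <= 0) {
          break; // timed out with operations still in flight
        }
        pendingOps.wait(remaining);
      }
    }
    synchronized (quotaLock) {
      quotaUsage = 0; // "cleanup" while ops may still be running
    }
  }
}
```

Increasing the timeout so we never reach the timed-out branch sidesteps the race entirely.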
I think we also need to add this call in the other places where we return early in AssignRegionHandler.process?
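One common way to guarantee the call on every early-return path is a try/finally; a minimal sketch where the method bodies are placeholders, not the real handler code:

```java
// Hypothetical sketch: ensuring a cleanup call runs on every early-return
// path of a process() method, rather than only on the normal exit.
class AssignHandlerSketch {
  void process() {
    try {
      if (!precheck()) {
        return; // early return: finally still runs the cleanup
      }
      doAssign();
    } finally {
      cleanup(); // runs for the normal path, early returns, and exceptions
    }
  }

  private boolean precheck() { return true; }
  private void doAssign() {}
  private void cleanup() {}
}
```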
I would prefer that we change the config name to something like 'compaction.skip-merging-compact', to indicate that this config is used to skip the real compaction operation, as even if we set this...
You can also send a discussion email to the dev list, to see if there are better names for this config, since I'm not a native English speaker...
Which WAL implementation do you use? We should be able to deal with a datanode restart. The data that has been hflushed should be OK, and once there is a write...
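For context, this is the hflush guarantee being referred to, shown with the plain Hadoop FileSystem API (the path and payload below are made up):

```java
// Minimal sketch of the hflush durability guarantee, using the plain
// Hadoop FileSystem API rather than HBase's WAL classes.
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HflushExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try (FSDataOutputStream out = fs.create(new Path("/tmp/wal-sketch"))) {
      out.write("wal entry".getBytes(StandardCharsets.UTF_8));
      // After hflush returns, the data has reached every datanode in the
      // write pipeline and is visible to new readers; pipeline recovery is
      // what lets it survive a datanode restart. Data written after the
      // last hflush has no such guarantee.
      out.hflush();
    }
  }
}
```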
> We're not setting `hbase.wal.provider`, so in theory we ought to be using `AsyncFSWAL`, but the below stacktrace suggests we're using `FSHLog`, which I can't explain.
>
> The problem...
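For reference, the provider can be pinned explicitly instead of relying on the default, which makes it unambiguous which implementation is in use; a minimal sketch (verify the value strings against your HBase version's documentation):

```java
// Sketch: pinning the WAL provider explicitly rather than relying on
// the version-dependent default.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalProviderConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "asyncfs" selects AsyncFSWAL; "filesystem" selects FSHLog.
    conf.set("hbase.wal.provider", "asyncfs");
    System.out.println("WAL provider: " + conf.get("hbase.wal.provider"));
  }
}
```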
The design doc looks good. I skimmed the code, and it seems we put the row cache into the block cache? Mind explaining more about why we chose to use the block cache to implement the row...
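To make the design question concrete, a hypothetical sketch of the alternative: a dedicated row cache would just be a bounded map keyed by (table, row), independent of the block-oriented cache. None of these names come from the patch under review.

```java
// Hypothetical sketch of a standalone row cache: a bounded LRU map keyed
// by (table, row), kept separate from the block cache.
import java.util.LinkedHashMap;
import java.util.Map;

class RowCacheSketch<V> {
  private final int maxEntries;
  private final Map<String, V> cache;

  RowCacheSketch(int maxEntries) {
    this.maxEntries = maxEntries;
    // An access-ordered LinkedHashMap gives a basic LRU eviction policy.
    this.cache = new LinkedHashMap<String, V>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > RowCacheSketch.this.maxEntries;
      }
    };
  }

  synchronized V get(String table, String row) {
    return cache.get(table + "/" + row);
  }

  synchronized void put(String table, String row, V result) {
    cache.put(table + "/" + row, result);
  }
}
```

Reusing the block cache instead means wrapping rows as cacheable entries in a block-oriented structure, which is exactly the trade-off the question is asking about.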
@ndimiduk @bbeaudreault Mind taking a look at this documentation change about the connection URI changes in replication? Thanks.
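For readers of the doc change, a hedged sketch of what it enables: passing a connection URI instead of the old ZooKeeper-style cluster key when adding a peer. The URI, quorum hosts, and peer id below are made-up examples; check the documentation for the exact supported schemes.

```java
// Sketch: adding a replication peer using a connection URI as the cluster
// key. The URI and peer id below are illustrative only.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationPeerConfig;

public class AddPeerWithUri {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      ReplicationPeerConfig peerConfig = ReplicationPeerConfig.newBuilder()
        // A connection URI instead of the old "zk1:2181:/hbase" cluster key.
        .setClusterKey("hbase+zk://zk1:2181,zk2:2181/hbase")
        .build();
      admin.addReplicationPeer("peer_to_backup", peerConfig);
    }
  }
}
```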
The actual work is done via HBASE-28425; the most challenging thing is that we changed the cluster connection implementation on branch-3+, so the actual code change in the ReplicationEndpoint implementation will...
They just bumped to 1.1.0? We can follow the same pattern so it will not introduce too many conflicts?