
After cluster expansion completes the slot migration, the total dbsize of all shard master nodes is greater than before the expansion

[Open] 903174293 opened this issue 11 months ago • 6 comments

Search before asking

  • [X] I had searched in the issues and found no similar issues.

Version

v2.9.0

Minimal reproduce step

  1. kvrocks version: v2.9.0
  2. Initial environment: 3 shards, each with one master and one slave; single node specification: 8C16G, disk 500GB
  3. After expansion: 5 shards, each with one master and one slave; single node specification: 8C16G, disk 500GB
  4. Expansion method: calling the controller interface
  5. Phenomenon: before expansion, the total dbsize value of all shard master nodes is 12484; after expansion, it is 12489 (the sketch below shows one way to collect these totals)
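
A minimal sketch of how the per-shard totals above can be gathered; the master addresses are placeholders, not values from the report:

```bash
# Placeholder master addresses; substitute the real shard masters.
MASTERS="10.0.0.1:6666 10.0.0.2:6666 10.0.0.3:6666"

total=0
for addr in $MASTERS; do
  host=${addr%:*}
  port=${addr#*:}
  # Note: kvrocks reports dbsize as an estimate; `dbsize scan` can be issued
  # beforehand to trigger a recount if the value looks stale.
  n=$(redis-cli -h "$host" -p "$port" dbsize)
  total=$((total + n))
done
echo "total dbsize across all shard masters: $total"
```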

What did you expect to see?

The total dbsize of all master nodes should be equal before and after the expansion.

What did you see instead?

Before expansion, the total dbsize value of all shard master nodes was 12484; after expansion, it was 12489.

Anything Else?

Migration interface for the controller:

POST /api/v1/namespaces/clusters/{cluster name}/migrate
{ "target": 4, "slot": 1000, "slot_only": false }
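
A request sketch for the call above, assuming the fuller path shown later in this thread (with the namespace segment) and placeholder values for host, namespace, and cluster name:

```bash
# CONTROLLER_HOST, NAMESPACE, and CLUSTER are placeholders, not values from the report.
curl -X POST "http://${CONTROLLER_HOST}/api/v1/namespaces/${NAMESPACE}/clusters/${CLUSTER}/migrate" \
  -H 'Content-Type: application/json' \
  -d '{"target": 4, "slot": 1000, "slot_only": false}'
```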

Are you willing to submit a PR?

  • [X] I'm willing to submit a PR!

903174293 · Jan 08 '25 02:01

I found the same issue in my tests. There were 3 nodes in my cluster before migration and 5 nodes after migration. Here are my test steps:

  1. kvrocks version 2.9.0
  2. Before migration: use the scan command to count keys on all the master nodes; there are 167005770 keys (see the counting sketch after this list).
  3. Set migrate-type to raw-key-value in kvrocks.conf.
  4. Call the kv controller migrate API: {{host}}/api/v1/namespaces/{{namespace}}/clusters/{{cluster}}/migrate
  5. Call the kv controller API to check that the migration is done: http://{{host}}/api/v1/namespaces/akv/clusters/kvrocks_29_test_migrate
  6. After migration: use the scan command to count keys on all the master nodes; there are 167016655 keys.
  7. There are 10885 keys duplicated after the migration is done.
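
For completeness, the key counts above can be gathered with a loop like the one below; the addresses are placeholders, and piping `redis-cli --scan` into `wc -l` is an assumed counting method rather than the exact commands used in the test:

```bash
# Placeholder addresses; run against every master node and sum the counts.
for addr in 10.0.0.1:6666 10.0.0.2:6666 10.0.0.3:6666; do
  echo -n "$addr: "
  redis-cli -h "${addr%:*}" -p "${addr#*:}" --scan | wc -l
done

# Step 3 above sets the migration type in kvrocks.conf (value as reported):
#   migrate-type raw-key-value
```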

Feel free to contact me for more test details.

hongleis · Jan 13 '25 02:01

@git-hulk

hongleis · Jan 13 '25 02:01

@903174293 @hongleis Thanks for your report. I missed this issue in the past few days. I want to know:

  • Are all of those duplicated keys still living in the source node?
  • Did those duplicated keys receive any new writes while migrating?

git-hulk · Jan 13 '25 04:01

@903174293 @hongleis Thanks for your report. I missed this issue in the past few days. I want to know:

  • Are all of those duplicated keys still living in the source node?
  • Did those duplicated keys receive any new writes while migrating?

Yes, the duplicated keys are still living in the source node, but they cannot be found using GET; they can be found using KEYS *. And no, they had no new writes while migrating.

903174293 · Jan 13 '25 08:01

@git-hulk For the duplicated keys, they are located in both the source node and the destination node. When we use the Redis GET command, it goes to the destination node to look up the key. But when I connect to the source node directly (not in cluster mode) and use the Redis SCAN command, I can find the duplicated key. In a word, this does not affect the first migration, but when we migrate multiple times, I am afraid the duplicated keys that were not cleaned up may become dirty values.
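
A quick way to reproduce the observation above; the node addresses and key name are hypothetical, not taken from the report:

```bash
# Hypothetical values for illustration only.
ANY_NODE=10.0.0.4   # any cluster node; redirection routes the request to the slot owner
SRC_NODE=10.0.0.1   # the old owner of the migrated slot
PORT=6666
KEY=user:1234

# Through cluster routing (-c follows MOVED), GET is served by the destination shard:
redis-cli -c -h "$ANY_NODE" -p "$PORT" get "$KEY"

# Directly against the source node, without redirection, a full keyspace scan
# still turns up the stale copy left behind by the migration:
redis-cli -h "$SRC_NODE" -p "$PORT" --scan --pattern "$KEY"
```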

hongleis · Jan 17 '25 01:01

@hongleis Got your point, and thanks to both of you for the information. I will take a look at this issue when I get time.

git-hulk · Jan 17 '25 03:01