[Bug]: When the collection is loaded, dropping one partition of the collection from Attu does not release memory
Is there an existing issue for this?
- [X] I have searched the existing issues
Environment
- Milvus version: 2.0.2
- Deployment mode (standalone or cluster): cluster
- SDK version (e.g. pymilvus v2.0.0rc2):
- OS (Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
Current Behavior
When the collection is loaded, I drop one partition of the collection from Attu. But I found that the memory usage of the query node didn't decrease, and the number of vectors shown in Attu didn't change.
Expected Behavior
No response
Steps To Reproduce
No response
Milvus Log
No response
Anything else?
No response
If collection.load() has been called first, I don't think we are able to release a single partition of the collection now. There is a limitation in Milvus 2.x, as introduced here. @shanghaikid could you please double-check whether Attu needs some updates for this limitation?
/assign @shanghaikid /unassign
I found I can still query the vector data of the deleted partition...
> If collection.load() has been called first, I don't think we are able to release a single partition of the collection now. There is a limitation in Milvus 2.x, as introduced here. @shanghaikid could you please double-check whether Attu needs some updates for this limitation?
So what's your suggestion? If the collection is loaded, should we disable the delete-partition button? But this document doesn't mention such a rule.
My business scenario is that I only need the latest 30 days of vector data, because Milvus doesn't support TTL at the current stage.
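For this kind of rolling-window retention, one common approach (until Milvus supports TTL) is to create one partition per day and periodically drop the partitions older than the window. Below is a minimal sketch of the selection logic only; the `p_YYYYMMDD` naming scheme and the `stale_partitions` helper are illustrative assumptions, not anything Milvus prescribes:

```python
from datetime import datetime, timedelta, date

def stale_partitions(names, today, keep_days=30):
    """Return the daily partitions (named p_YYYYMMDD) older than keep_days."""
    cutoff = today - timedelta(days=keep_days)
    stale = []
    for name in names:
        created = datetime.strptime(name, "p_%Y%m%d").date()
        if created < cutoff:
            stale.append(name)
    return stale
```

Each returned name would then be dropped via the SDK or Attu, subject to the load/release limitation this issue is about.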
When I reload the collection, I found I can still query or search the vector data of the deleted partition.
@yanliang567
@shanghaikid @WangErXiao It seems that we no longer support releasing a partition while the collection is loaded,
so if I want to drop a partition, must I first unload the collection and then drop the partition?
That's a good question. @jingkl could you please take a look at this issue? I think we need some tests and to define the behavior here. /assign @jingkl
/unassign @WangErXiao @shanghaikid
As we introduced the partition load/release limitation in 2.0, dropping a partition in this case does NOT release the partition now. Check the code here: https://github.com/milvus-io/milvus/blob/11fa3e24dd09cf4bc7cf3ad76be7df0bc8a3c3a1/internal/rootcoord/task.go#L684
So will dropping a partition also release it in 2.1?
I'm afraid not. :(
I found that the deleted partition's data disappeared after a period of time, but the entity number of the collection didn't change. Is this related to compaction?
/assign @congqixia Could you please help investigate this?
OK, I shall check the current implementation of dropping a partition that is loaded.
Here are some behaviors of dropping a partition while the collection is loaded:
- As @yanliang567 mentioned, the partition is not automatically released if it's loaded
- The deleted partition's data will become unsearchable after a period of time, since the proxy meta will be invalidated

The quick workaround is to release the collection and load it again. /assign @WangErXiao
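This workaround can be sketched with the pymilvus ORM. `release()`, `drop_partition()`, and `load()` are Collection methods in pymilvus 2.x, but the wrapper function below is only an illustration of the suggested call order, not an official recipe:

```python
def drop_partition_with_reload(collection, partition_name):
    """Workaround sketch: release the whole collection, drop the partition,
    then load again so query nodes stop serving the dropped data."""
    collection.release()                       # free the in-memory replica first
    collection.drop_partition(partition_name)  # remove the partition
    collection.load()                          # reload the remaining partitions
```

Note that this takes the whole collection offline for the duration of the reload.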
I think the dropped segment is not handled in DataCoord, so the number of entities is still wrong.
related with #17648
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
Partition-related issue; will be fixed later, after 2.2.