[Bug]: The data of released partitions may be read
Is there an existing issue for this?
- [X] I have searched the existing issues
Environment
- Milvus version:
- Deployment mode(standalone or cluster):
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
Current Behavior
With QueryCoordV2, the QueryNode has no idea which partitions have been released. As a result, it may load growing segments that belong to released partitions, and Query/Search would then see that data.
Note: once those segments are flushed, they get released, because QueryCoord can find the sealed segments of released partitions.
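A rough reproduction sketch of the behavior described above, assuming a local standalone Milvus reachable at localhost:19530 and pymilvus 2.x; the collection and partition names are illustrative only:

```python
import random
from pymilvus import (
    connections, utility, Collection, CollectionSchema, FieldSchema, DataType,
)

connections.connect(host="localhost", port="19530")

if utility.has_collection("released_partition_repro"):
    utility.drop_collection("released_partition_repro")

fields = [
    FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="vec", dtype=DataType.FLOAT_VECTOR, dim=8),
]
coll = Collection("released_partition_repro", CollectionSchema(fields))
coll.create_partition("p1")
coll.create_index("vec", {"index_type": "IVF_FLAT", "metric_type": "L2",
                          "params": {"nlist": 16}})

coll.load()                     # load the whole collection
coll.partition("p1").release()  # then release partition p1

# Insert into the released partition without flushing, so the rows stay in a
# growing segment; per this report, QueryNode may still load and serve them.
coll.insert(
    [list(range(10)), [[random.random() for _ in range(8)] for _ in range(10)]],
    partition_name="p1",
)

res = coll.query(expr="pk >= 0", consistency_level="Strong")
print(len(res))  # expected 0, but the rows inserted into "p1" may be returned
```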
Expected Behavior
No response
Steps To Reproduce
No response
Milvus Log
No response
Anything else?
No response
/assign @yah01 /unassign
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
/reopen
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
@czs007 this is a partition-related issue that you may want to take care of.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
keep it
/assign @bigsheeper
/reopen
@yah01: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
keep it active
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
keep it active
@bigsheeper Is this solved by the new dynamic partition loading?
Yes, I believe it can be solved. Perhaps we should add a test case for this scenario when testing dynamically loading partitions @binbinlv
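A rough pytest-style sketch of what such a test case might assert, assuming pymilvus 2.x with partition-level load/release; the collection name, partition names, and fixture setup are placeholders, not the actual Milvus test framework:

```python
import pytest
from pymilvus import Collection, connections


@pytest.fixture(scope="module")
def collection():
    # Assumes a collection named "dynamic_partition_test" already exists with an
    # indexed 8-dim float vector field "vec", an int64 primary key "pk", and a
    # partition "p_released" that is never loaded.
    connections.connect(host="localhost", port="19530")
    coll = Collection("dynamic_partition_test")
    yield coll
    coll.release()


def test_growing_data_of_released_partition_is_invisible(collection):
    collection.load(partition_names=["_default"])  # load only the default partition
    # Rows inserted into the unloaded partition stay in a growing segment.
    collection.insert([[1], [[0.1] * 8]], partition_name="p_released")
    res = collection.query(expr="pk == 1", consistency_level="Strong")
    assert len(res) == 0, "data from a released/unloaded partition must not be readable"
```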
/assign @binbinlv
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.