[Bug]: [laion1b-test] The dropped collection has never been GCed
Is there an existing issue for this?
- [X] I have searched the existing issues
Environment
- Milvus version: cardinal-milvus-io-2.3-3c90475-20240311
- Deployment mode(standalone or cluster): cluster
- MQ type(rocksmq, pulsar or kafka): pulsar
- SDK version(e.g. pymilvus v2.0.0rc2):
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
Current Behavior
- The dropped collections have not been GCed. The affected collection IDs:
446771525888510903
447180040934991941
447619508238296599
birdwatcher `show collections` output:
Expected Behavior
No response
Steps To Reproduce
No response
Milvus Log
No response
Anything else?
No response
Blocked at GcConfirm
[2024/03/15 00:48:06.313 +00:00] [INFO] [rootcoord/broker.go:322] ["received gc_confirm response"] [collection=446771525888510903] [partition=-1] [finished=false]
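For context, the drop path cannot finish while DataCoord keeps answering finished=false. Below is a minimal sketch of that polling loop; the names (gcConfirmClient, waitForGcConfirm) are illustrative, not the exact Milvus API:

```go
package sketch

import (
	"context"
	"fmt"
	"time"
)

// gcConfirmClient abstracts the DataCoord GcConfirm RPC (illustrative name).
type gcConfirmClient interface {
	GcConfirm(ctx context.Context, collectionID, partitionID int64) (finished bool, err error)
}

// waitForGcConfirm keeps polling until DataCoord confirms that GC finished;
// a response stuck at finished=false blocks the drop task indefinitely.
func waitForGcConfirm(ctx context.Context, c gcConfirmClient, collectionID int64) error {
	for {
		finished, err := c.GcConfirm(ctx, collectionID, -1) // partition=-1: whole collection
		if err != nil {
			return err
		}
		if finished {
			return nil
		}
		fmt.Printf("received gc_confirm response: collection=%d finished=false, retrying\n", collectionID)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Second):
		}
	}
}
```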
From birdwatcher, there is no vchannel channel-watch entry.
The DropVChannel operation of DataCoord has already been called.
Milvus(laion1b-test-2) > show channel-watch --collection 446771525888510903
--- Total Channels: 0
All segments of 446771525888510903 should be marked as Dropped, but there are 10k+ segments in the collection that are still marked as Flushed.
Milvus(laion1b-test-2) > show segment --collection 446771525888510903 --partition 446771525888510904 --segment 446771525945063508
SegmentID: 446771525945063508 State: Flushed, Row Count:394652
--- Growing: 0, Sealed: 0, Flushed: 1
--- Total Segments: 1, row count: 394652
etcdctl --endpoints=localhost:61030 get --prefix laion1b-test-2/meta/datacoord-meta/s/446771525888510903 --count-only=true --write-out=fields
"ClusterID" : 1987965181132763027
"MemberID" : 12755051181896445644
"Revision" : 54840053
"RaftTerm" : 7
"More" : false
"Count" : 14109
The DropSegmentsOfChannel, UpdateDropChannelSegmentInfo, and UpdateSegmentsInfo methods of SegmentManager and meta on DataCoord do not guarantee read-and-write consistency, as the sketch below illustrates.
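A minimal illustration (not the actual Milvus code) of this class of lost update: two paths each snapshot the segment states, mutate their copy, and write it back, so a stale snapshot can overwrite a Dropped mark:

```go
package sketch

import "sync"

type segmentState int

const (
	flushed segmentState = iota
	dropped
)

type meta struct {
	mu       sync.Mutex
	segments map[int64]segmentState
}

// snapshot reads all segment states under the lock and then releases it,
// as a non-transactional read step would.
func (m *meta) snapshot() map[int64]segmentState {
	m.mu.Lock()
	defer m.mu.Unlock()
	out := make(map[int64]segmentState, len(m.segments))
	for id, s := range m.segments {
		out[id] = s
	}
	return out
}

// writeBack installs the (possibly stale) snapshot as the new truth.
func (m *meta) writeBack(snap map[int64]segmentState) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for id, s := range snap {
		m.segments[id] = s
	}
}

// LostDrop shows the race: path A marks a segment Dropped, but path B,
// working from an older snapshot, writes Flushed back afterwards. The
// segment then looks Flushed forever and GC never considers it removable.
func LostDrop(m *meta, segID int64) {
	snapB := m.snapshot() // B reads: segID is Flushed

	snapA := m.snapshot()
	snapA[segID] = dropped
	m.writeBack(snapA) // A writes: segID is Dropped

	m.writeBack(snapB) // B's stale write resurrects Flushed
}
```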
Another collection that cannot be dropped is blocked at channel watch. birdwatcher shows:
=============================
key: laion1b-test-2/meta/channelwatch/2193/laion1b-test-2-rootcoord-dml_2_447619508238296599v0
Channel Name:laion1b-test-2-rootcoord-dml_2_447619508238296599v0 WatchState: ToWatch
Channel Watch start from: 2024-03-21 15:45:23 +0800, timeout at: 1970-01-01 08:00:00 +0800
Start Position ID: [8 162 52 16 170 209 1 24 0 32 0], time: 2024-02-21 15:23:03.161 +0800
Unflushed segments: []
Flushed segments: []
Dropped segments: []
=============================
key: laion1b-test-2/meta/channelwatch/2193/laion1b-test-2-rootcoord-dml_3_447619508238296599v1
Channel Name:laion1b-test-2-rootcoord-dml_3_447619508238296599v1 WatchState: ToWatch
Channel Watch start from: 2024-03-21 15:45:32 +0800, timeout at: 1970-01-01 08:00:00 +0800
Start Position ID: [8 163 52 16 168 205 1 24 0 32 0], time: 2024-02-21 15:22:03.161 +0800
Unflushed segments: []
Flushed segments: []
Dropped segments: []
=============================
key: laion1b-test-2/meta/channelwatch/2193/laion1b-test-2-rootcoord-dml_6_447180040934991941v0
Channel Name:laion1b-test-2-rootcoord-dml_6_447180040934991941v0 WatchState: ToWatch
Channel Watch start from: 2024-03-21 15:49:21 +0800, timeout at: 1970-01-01 08:00:00 +0800
Start Position ID: [8 182 38 16 225 135 2 24 0 32 0], time: 2024-02-07 16:07:13.776 +0800
Unflushed segments: []
Flushed segments: []
Dropped segments: []
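The "timeout at: 1970-01-01 08:00:00 +0800" lines suggest the watch info carries a zero timeout timestamp, since the Unix epoch rendered in UTC+8 is exactly that instant:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A zero timeout timestamp rendered in UTC+8 matches the birdwatcher output.
	cst := time.FixedZone("CST", 8*60*60)
	fmt.Println(time.Unix(0, 0).In(cst)) // 1970-01-01 08:00:00 +0800 CST
}
```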
The DataNode is blocked on a NoSuchKey error and fails to recover.
[2024/03/21 07:49:20.043 +00:00] [ERROR] [retry/retry.go:46] ["retry func failed"] ["retry time"=0] [error="NoSuchKey(key=files/stats_log/447619508238296599/447619508238296619/447859087914262961/100/447859087914263588)"] [stack="
github.com/milvus-io/milvus/pkg/util/retry.Do
    /go/src/github.com/milvus-io/milvus/pkg/util/retry/retry.go:46
github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).Read
    /go/src/github.com/milvus-io/milvus/internal/storage/remote_chunk_manager.go:166
github.com/milvus-io/milvus/internal/storage.(*RemoteChunkManager).MultiRead
    /go/src/github.com/milvus-io/milvus/internal/storage/remote_chunk_manager.go:222
github.com/milvus-io/milvus/internal/datanode.(*ChannelMeta).loadStats
    /go/src/github.com/milvus-io/milvus/internal/datanode/channel_meta.go:433
github.com/milvus-io/milvus/internal/datanode.(*ChannelMeta).initPKstats
    /go/src/github.com/milvus-io/milvus/internal/datanode/channel_meta.go:475
github.com/milvus-io/milvus/internal/datanode.(*ChannelMeta).InitPKstats
    /go/src/github.com/milvus-io/milvus/internal/datanode/channel_meta.go:331
github.com/milvus-io/milvus/internal/datanode.(*ChannelMeta).addSegment
    /go/src/github.com/milvus-io/milvus/internal/datanode/channel_meta.go:242
github.com/milvus-io/milvus/internal/datanode.getChannelWithEtcdTickler.func2
    /go/src/github.com/milvus-io/milvus/internal/datanode/data_sync_service.go:268
github.com/milvus-io/milvus/pkg/util/conc.(*Pool[...]).Submit.func1
    /go/src/github.com/milvus-io/milvus/pkg/util/conc/pool.go:81
github.com/panjf2000/ants/v2.(*goWorker).run.func1
    /go/pkg/mod/github.com/panjf2000/ants/[email protected]/worker.go:67
"]
[2024/03/21 07:49:20.044 +00:00] [WARN] [datanode/channel_meta.go:435] ["failed to load bloom filter files"] [segmentID=447859087914262961] [error="failed to read files/stats_log/447619508238296599/447619508238296619/447859087914262961/100/447859087914263588: attempt #0: NoSuchKey"]
[2024/03/21 07:54:18.202 +00:00] [WARN] [datanode/channel_meta.go:435] ["failed to load bloom filter files"] [segmentID=447543888697846583] [error="failed to read files/stats_log/447180040934991941/447180040934991978/447543888697846583/100/447543888697848586: attempt #0: NoSuchKey"]
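One possible mitigation, sketched here with illustrative names (this is not the shipped fix): tolerate missing stats_log objects during recovery, so a watch for an already-GCed collection cannot retry NoSuchKey forever:

```go
package sketch

import (
	"context"
	"errors"
	"log"
)

// errNoSuchKey stands in for the storage layer's NoSuchKey error.
var errNoSuchKey = errors.New("NoSuchKey")

// chunkManager abstracts the object-storage reader (illustrative interface).
type chunkManager interface {
	Read(ctx context.Context, path string) ([]byte, error)
}

// loadStats skips bloom-filter files that no longer exist in object storage
// instead of failing the whole channel recovery.
func loadStats(ctx context.Context, cm chunkManager, paths []string) ([][]byte, error) {
	blobs := make([][]byte, 0, len(paths))
	for _, p := range paths {
		b, err := cm.Read(ctx, p)
		if errors.Is(err, errNoSuchKey) {
			// Stats file already removed (e.g. by GC): tolerate and move on.
			log.Printf("stats log %s missing, skipping", p)
			continue
		}
		if err != nil {
			return nil, err
		}
		blobs = append(blobs, b)
	}
	return blobs, nil
}
```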
@chyezh
Should the channel watch be canceled for a dropped collection?

> Should the channel watch be canceled for a dropped collection?
Yes, but the current Milvus DropCollection is not a state-based implementation.
The channel watch is only dropped when the DataNode consumes the DropCollection message generated by RootCoord GC.
Once the channel watch becomes unrecoverable, DropCollection is blocked forever.
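A state-based guard could look roughly like this (illustrative names, not a committed design): before recovering a channel watch, check whether the owning collection still exists, and release the watch if it was dropped:

```go
package sketch

import "context"

// collectionStateChecker abstracts a RootCoord lookup (illustrative name).
type collectionStateChecker interface {
	// CollectionDropped reports whether the collection no longer exists.
	CollectionDropped(ctx context.Context, collectionID int64) (bool, error)
}

type watchAction int

const (
	recoverWatch watchAction = iota
	releaseWatch
)

// decideWatch consults collection state instead of blindly replaying
// recovery, so an unrecoverable watch of a dropped collection is released
// rather than blocking DropCollection forever.
func decideWatch(ctx context.Context, c collectionStateChecker, collectionID int64) (watchAction, error) {
	dropped, err := c.CollectionDropped(ctx, collectionID)
	if err != nil {
		return recoverWatch, err // unknown state: keep the current behavior
	}
	if dropped {
		return releaseWatch, nil // dropped collection: cancel the watch
	}
	return recoverWatch, nil
}
```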