Xinyi7
@yanliang567 thank you so much for your reply. I can try on Monday. the thing is i don't see any memory/cpu pressure from our charts for data node and the...
we have 1.6B rows, which really shouldn't require 96GB of memory for the streaming node if my understanding is correct, since the streaming node only hosts growing segments, which shouldn't be larger than...
I also got the following errors on two different query attempts (the streaming node hasn't crashed): > 2025-11-15 18:57:00,608 [ERROR][handler]: RPC error: [query], , (decorators.py:140) > 2025-11-15 18:57:00,610...
hi @xiaofan-luan thanks for your comment. what i don't understand is why we would get `no available shard leaders` when trying to query the system. (streaming node hasn't crashed) Do...
oh one more log i found: i see multiple lines of this error in our log > [2025/11/14 23:57:42.427 +00:00] [WARN] [broker/datacoord.go:140] ["failed to SaveBinlogPaths"] [error="segment not found[segment=462201070192852700]"] --...
@xiaofan-luan i see the same issue after i increased the memory for the data node to 16GB. also, this is beginning to feel more and more like a memory leak...
it is 96GB per our metrics. checking. weird. sorry, but are you sure the log doesn't refer to the memory threshold of the query node, which is 32GB?
the memory is definitely more than 32GB. i can see some of the streaming nodes already using more than 32GB of memory
we saw it at 80+ gb yesterday
is there a metric i can use to check the segment size? i don't see it when i did `utility.get_query_segment_info(collection_name)` fyi, we didn't set the segment size, so it should still be...