[Bug]: [benchmark][cluster] Milvus Querynode memory of build 0714 increased 50%+ compared with build 0708
Is there an existing issue for this?
- [X] I have searched the existing issues
Environment
- Milvus version:
- Deployment mode(standalone or cluster):cluster
- SDK version(e.g. pymilvus v2.0.0rc2):pymilvus2.1.0dev98
- OS(Ubuntu or CentOS):
- CPU/Memory:
- GPU:
- Others:
Current Behavior
milvus 2.1.0-20220708-507518f8 pymilvus 2.1.0dev98
server-instance fouram-t95bf-1 server-configmap server-cluster-8c64m-kafka client-configmap client-random-locust-100m-ddl-r8-w2
fouram-t95bf-1-etcd-0 1/1 Running 0 19h 10.104.4.228 4am-node11 <none> <none>
fouram-t95bf-1-etcd-1 1/1 Running 0 19h 10.104.6.120 4am-node13 <none> <none>
fouram-t95bf-1-etcd-2 1/1 Running 0 19h 10.104.5.139 4am-node12 <none> <none>
fouram-t95bf-1-kafka-0 1/1 Running 1 (19h ago) 19h 10.104.6.117 4am-node13 <none> <none>
fouram-t95bf-1-kafka-1 1/1 Running 1 (19h ago) 19h 10.104.9.151 4am-node14 <none> <none>
fouram-t95bf-1-kafka-2 1/1 Running 1 (19h ago) 19h 10.104.4.227 4am-node11 <none> <none>
fouram-t95bf-1-milvus-datacoord-7fcf887fd5-qcwb8 1/1 Running 1 (19h ago) 19h 10.104.1.26 4am-node10 <none> <none>
fouram-t95bf-1-milvus-datanode-677cd94b76-jv2zb 1/1 Running 1 (19h ago) 19h 10.104.1.24 4am-node10 <none> <none>
fouram-t95bf-1-milvus-indexcoord-7f5976947d-7lv56 1/1 Running 1 (19h ago) 19h 10.104.6.113 4am-node13 <none> <none>
fouram-t95bf-1-milvus-indexnode-655c9f99cf-ngznw 1/1 Running 0 19h 10.104.4.225 4am-node11 <none> <none>
fouram-t95bf-1-milvus-proxy-78ccfb74c6-jbnbf 1/1 Running 0 19h 10.104.6.114 4am-node13 <none> <none>
fouram-t95bf-1-milvus-querycoord-574f5f596f-dlnkz 1/1 Running 1 (19h ago) 19h 10.104.1.25 4am-node10 <none> <none>
fouram-t95bf-1-milvus-querynode-7ccc89f9cc-9zncg 1/1 Running 0 19h 10.104.6.112 4am-node13 <none> <none>
fouram-t95bf-1-milvus-rootcoord-5cd4bcd889-5d8f7 1/1 Running 0 19h 10.104.4.224 4am-node11 <none> <none>
fouram-t95bf-1-minio-0 1/1 Running 0 19h 10.104.4.230 4am-node11 <none> <none>
fouram-t95bf-1-minio-1 1/1 Running 0 19h 10.104.6.119 4am-node13 <none> <none>
fouram-t95bf-1-minio-2 1/1 Running 0 19h 10.104.5.138 4am-node12 <none> <none>
fouram-t95bf-1-minio-3 1/1 Running 0 19h 10.104.9.153 4am-node14 <none> <none>
fouram-t95bf-1-zookeeper-0 1/1 Running 0 19h 10.104.6.118 4am-node13 <none> <none>
fouram-t95bf-1-zookeeper-1 1/1 Running 0 19h 10.104.5.136 4am-node12 <none> <none>
fouram-t95bf-1-zookeeper-2 1/1 Running 0 19h 10.104.9.150 4am-node14 <none> <none>
querynode memory:
milvus : 2.1.0-20220714-e8ac3664 pymilvus 2.1.0dev98
server-instance fouram-tag-no-clean-nkjj4-1 server-configmap server-cluster-8c64m-kafka client-configmap client-random-locust-100m-ddl-r8-w2
fouram-tag-no-clean-nkjj4-1-etcd-0 1/1 Running 0 20h 10.104.1.246 4am-node10 <none> <none>
fouram-tag-no-clean-nkjj4-1-etcd-1 1/1 Running 0 20h 10.104.5.110 4am-node12 <none> <none>
fouram-tag-no-clean-nkjj4-1-etcd-2 1/1 Running 0 20h 10.104.4.206 4am-node11 <none> <none>
fouram-tag-no-clean-nkjj4-1-kafka-0 1/1 Running 1 (20h ago) 20h 10.104.4.203 4am-node11 <none> <none>
fouram-tag-no-clean-nkjj4-1-kafka-1 1/1 Running 1 (20h ago) 20h 10.104.6.92 4am-node13 <none> <none>
fouram-tag-no-clean-nkjj4-1-kafka-2 1/1 Running 0 20h 10.104.9.129 4am-node14 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-datacoord-bf66b74b4-6fjp8 1/1 Running 0 20h 10.104.4.200 4am-node11 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-datanode-9d4498fb7-m792q 1/1 Running 0 20h 10.104.9.128 4am-node14 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-indexcoord-67fccfcd95-zz8xv 1/1 Running 0 20h 10.104.5.105 4am-node12 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-indexnode-74774bbcd4-xrslz 1/1 Running 0 20h 10.104.4.201 4am-node11 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-proxy-6d65fd8dfc-zfjfx 1/1 Running 0 20h 10.104.5.104 4am-node12 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-querycoord-97d7c58b4-phwl6 1/1 Running 0 20h 10.104.5.106 4am-node12 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-querynode-f47544498-ffw2p 1/1 Running 0 20h 10.104.6.91 4am-node13 <none> <none>
fouram-tag-no-clean-nkjj4-1-milvus-rootcoord-6d997b87d4-gwnph 1/1 Running 0 20h 10.104.6.90 4am-node13 <none> <none>
fouram-tag-no-clean-nkjj4-1-minio-0 1/1 Running 0 20h 10.104.5.111 4am-node12 <none> <none>
fouram-tag-no-clean-nkjj4-1-minio-1 1/1 Running 0 20h 10.104.1.247 4am-node10 <none> <none>
fouram-tag-no-clean-nkjj4-1-minio-2 1/1 Running 0 20h 10.104.4.204 4am-node11 <none> <none>
fouram-tag-no-clean-nkjj4-1-minio-3 1/1 Running 0 20h 10.104.6.95 4am-node13 <none> <none>
fouram-tag-no-clean-nkjj4-1-zookeeper-0 1/1 Running 0 20h 10.104.5.108 4am-node12 <none> <none>
fouram-tag-no-clean-nkjj4-1-zookeeper-1 1/1 Running 0 20h 10.104.1.245 4am-node10 <none> <none>
fouram-tag-no-clean-nkjj4-1-zookeeper-2 1/1 Running 0 20h 10.104.6.93 4am-node13 <none> <none>
querynode memory:

Expected Behavior
No response
Steps To Reproduce
1. create collection
2. create index of ivf_sq8
3. insert 100 million vectors
4. flush collection
5. build index with the same params
6. load collection
7. locust concurrent: query<-search, load, get<-query, scene_test
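The steps above can be sketched roughly as follows with pymilvus 2.1. This is a hypothetical reconstruction: host/port, field names, and the schema are my assumptions; only the collection name and index/search parameters come from the client config in this issue, and generating the actual 100M SIFT vectors is omitted.

```python
DIM = 128                      # sift_100m_128_l2: 128-dim vectors, L2 metric
NI_PER = 50_000                # insert batch size (ni_per) from the config
INDEX_PARAMS = {"index_type": "IVF_SQ8", "metric_type": "L2",
                "params": {"nlist": 2048}}
SEARCH_PARAMS = {"metric_type": "L2", "params": {"nprobe": 16}}

def reproduce(host="127.0.0.1", port="19530"):
    """Steps 1-6; step 7 is driven by the locust client and is not shown."""
    # Imported inside so the sketch is readable without pymilvus installed.
    from pymilvus import (connections, Collection, CollectionSchema,
                          FieldSchema, DataType)

    connections.connect(host=host, port=port)
    schema = CollectionSchema([
        FieldSchema("id", DataType.INT64, is_primary=True, auto_id=False),
        FieldSchema("vec", DataType.FLOAT_VECTOR, dim=DIM),
    ])
    coll = Collection("sift_100m_128_l2", schema)   # 1. create collection
    coll.create_index("vec", INDEX_PARAMS)          # 2. create ivf_sq8 index
    # 3. insert 100M vectors in batches of NI_PER (data generation omitted)
    coll.flush()                                    # 4. flush collection
    coll.create_index("vec", INDEX_PARAMS)          # 5. rebuild with same params
    coll.load()                                     # 6. load collection
    return coll
```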
Milvus Log
No response
Anything else?
client-random-locust-100m-ddl-r8-w2:

locust_random_performance:
  collections:
    - collection_name: sift_100m_128_l2
      ni_per: 50000
      build_index: true
      index_type: ivf_sq8
      index_param:
        nlist: 2048
      task:
        types:
          - type: query
            weight: 8
            params:
              top_k: 10
              nq: 10
              search_param:
                nprobe: 16
          - type: load
            weight: 1
          - type: get
            weight: 8
            params:
              ids_length: 10
          - type: scene_test
            weight: 2
        connection_num: 1
        clients_num: 20
        spawn_rate: 2
        during_time: 302400
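The task weights above determine the long-run mix of operations the benchmark client issues. A minimal stand-in for weighted task selection (the real locust client's logic may differ; only the type names and weights are taken from the config):

```python
import random

# Task mix from the locust config above (weight per task type).
tasks = {"query": 8, "load": 1, "get": 8, "scene_test": 2}
total = sum(tasks.values())  # 19

# Expected long-run share of each task type under weighted selection:
# query and get each ~42%, scene_test ~10.5%, load ~5.3%.
shares = {name: w / total for name, w in tasks.items()}

# One way a client could draw tasks with these weights.
picks = random.choices(list(tasks), weights=list(tasks.values()), k=100)
```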
/assign @xige-16 /unassign
@xige-16 is this still an issue?
please run again @jingkl
/assign @jingkl
server-instance: fouram-tag-no-clean-d4qgk-1
server-configmap: server-cluster-8c64m-kafka
client-configmap: client-random-locust-100m-ddl-r8-w2-36h
pymilvus: 2.1.1dev3
Image: 2.1.0-20220809-0e4dc112
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.
May be fixed by https://github.com/milvus-io/milvus/pull/19197, please verify @jingkl /assign @jingkl
server-instance: fouram-tag-no-clean-qfbpr-1
server-configmap: server-cluster-8c64m-kafka-redur0
client-configmap: client-random-locust-100m-ddl-r8-w2-12h
Image: 2.1.0-20220926-2753e054
This still looks like higher memory usage than the first build @xige-16
@jingkl is there an analysis of the current memory usage? e.g. what is the data size and what memory usage is expected?
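For reference, a back-of-envelope sizing of this dataset (my own arithmetic, not taken from the issue): 100M float32 vectors of dim 128 served through an IVF_SQ8 index.

```python
n, dim, nlist = 100_000_000, 128, 2048

raw_bytes = n * dim * 4            # float32 vectors: 51.2 GB of raw data
sq8_bytes = n * dim * 1            # IVF_SQ8 keeps ~1 byte per dimension
centroid_bytes = nlist * dim * 4   # centroid table: ~1 MB, negligible
id_bytes = n * 8                   # int64 row ids: 0.8 GB

gib = 1024 ** 3
print(f"raw vectors  : {raw_bytes / gib:.1f} GiB")                     # 47.7 GiB
print(f"ivf_sq8 index: {(sq8_bytes + centroid_bytes) / gib:.1f} GiB")  # 11.9 GiB
print(f"row ids      : {id_bytes / gib:.1f} GiB")                      # 0.7 GiB
```

So a querynode holding only the SQ8 index plus ids would be expected in the 12-13 GiB range; usage far beyond that would suggest extra copies (e.g. raw vectors retained alongside the index) and is worth comparing between the two builds.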