This issue still exists. Verified version: master-20230714-fc9a6dc2. Build index after inserting 100 million rows: 78670.0845 s.
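For reference, the build-index timing above is wall-clock time from issuing the index-build request until the build completes. A minimal pymilvus sketch of that kind of measurement (the connection endpoint, collection name, vector field name, and DISKANN parameters here are illustrative assumptions, not the actual fouram test code):

```
import time

from pymilvus import Collection, connections, utility

# Assumed endpoint and collection; the collection already holds the inserted rows.
connections.connect(host="127.0.0.1", port="19530")
collection = Collection("test_100m")

# Flush so all inserted data is sealed before the index build starts.
collection.flush()

start = time.time()
collection.create_index(
    field_name="float_vector",  # assumed vector field name
    index_params={"index_type": "DISKANN", "metric_type": "L2", "params": {}},
)
# Block until the background index build finishes, then report elapsed time.
utility.wait_for_index_building_complete("test_100m")
print(f"build index: {time.time() - start:.4f}s")
```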
In recent tests, the index build time has returned to normal. Verified image: master-20230802-830f0678; build index: 18627.3541 s.
This also holds on the 2.2.0 branch: version 2.2.10 does not have this performance degradation issue.
Update: with image master-20230530-b09e7aea, inserting 100M rows and loading succeeded.
This issue has not appeared recently.
The issue arises again.
image: master-20230504-e172f3e8
server:
```
fouram-93-8048-etcd-0   1/1   Running   0   9m21s   10.104.1.34     4am-node10
fouram-93-8048-etcd-1   1/1   Running   0   9m21s   10.104.16.91    4am-node21
fouram-93-8048-etcd-2   1/1   Running   0   9m21s   10.104.21.178   4am-node24
fouram-93-8048-milvus-datacoord-7595dbc5c4-kp887...
```
> /assign @elstic fixed with #24898

Loading 100 million rows of data fails.
image: master-20230620-247f1170
case: test_concurrent_locust_100m_diskann_ddl_dql_filter_cluster
server:
```
fouramf-x8wsv-92-7100-etcd-0   1/1   Running   0   18h   10.104.23.251   4am-node27
fouramf-x8wsv-92-7100-etcd-1   1/1   Running   0   18h...
```
> /assign @elstic Please try with #25469

@yah01 With the diskann index, inserting 100k rows and then loading failed.
case: test_concurrent_locust_diskann_compaction_standalone
image: master-20230711-70c4ddc6
client log:
```
[2023-07-11 20:07:16,394 - INFO - fouram]: [Base] Start inserting,...
```
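The failure above is at the load step that follows insertion and index creation. Roughly, the client-side sequence looks like the following pymilvus sketch (the endpoint, collection name, field name, and batching are assumptions; the real case is test_concurrent_locust_diskann_compaction_standalone from the fouram framework):

```
from pymilvus import Collection, connections, utility

connections.connect(host="127.0.0.1", port="19530")  # assumed endpoint
collection = Collection("diskann_compaction")         # assumed collection name

# Insert the data in batches (shown schematically), then flush and build the index.
# for batch in batches: collection.insert(batch)
collection.flush()
collection.create_index(
    field_name="float_vector",  # assumed vector field name
    index_params={"index_type": "DISKANN", "metric_type": "L2", "params": {}},
)
utility.wait_for_index_building_complete("diskann_compaction")

# The reported failure happens here: the load does not complete successfully.
collection.load()
utility.wait_for_loading_complete("diskann_compaction")
```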
After verification: after inserting 100 million rows, the collection can be loaded successfully. Verified image: master-20230719-e418ab2f
Loading fails with the diskann index after inserting 100 million rows.
image: master-20230728-c2693ea2
argo task: fouramf-concurrent-jhgfh, id: 1
case: test_concurrent_locust_100m_diskann_ddl_dql_filter_cluster
server:
```
fouram-15-6355-etcd-0   1/1   Running   0   7h17m   10.104.14.212   4am-node18...
```