
Upgrade go from 1.20 to 1.21

shaoting-huang opened this issue 1 year ago • 12 comments

Signed-off-by: shaoting-huang [[email protected]]

issue: https://github.com/milvus-io/milvus/issues/32982

Background

Go 1.21, which is quite stable by now, introduces several improvements and changes over Go 1.20. According to the Go 1.21 Release Notes, the biggest difference in Go 1.21 is that Profile-Guided Optimization (PGO) is enabled by default, which can improve performance by around 2-14%. Here are the summary steps of PGO (a sketch follows the list):

  1. Build the initial binary (without PGO).
  2. Deploy it to the production environment.
  3. Run the program and collect performance analysis data (CPU pprof).
  4. Analyze the collected data and select a performance profile for PGO.
  5. Place the performance analysis file in the main package directory and name it default.pgo.
  6. go build detects the default.pgo file and enables PGO.
  7. Build and release the updated binary (with PGO).
  8. Iterate and repeat the above steps.
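A minimal sketch of this loop in shell, assuming the service exposes the standard net/http/pprof endpoint on localhost:6060 and the main package lives at ./cmd/app (both hypothetical here):

```bash
# 1. Build and deploy the initial binary without PGO.
go build -o app ./cmd/app

# 2-3. While the binary serves production traffic, collect a CPU profile
#      from the standard net/http/pprof endpoint (30-second sample here).
curl -o cpu.pprof "http://localhost:6060/debug/pprof/profile?seconds=30"

# 4-5. Pick a representative profile and place it in the main package
#      directory under the special name default.pgo.
cp cpu.pprof ./cmd/app/default.pgo

# 6-7. Since Go 1.21, `go build` defaults to -pgo=auto: it finds
#      default.pgo next to the main package and rebuilds with PGO enabled.
go build -o app ./cmd/app
```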

What does this PR do

There are three experiments: a search benchmark on the Zilliz test platform, a search benchmark with the open-source VectorDBBench, and a search benchmark with PGO. We run the search benchmark on both the Zilliz test platform and VectorDBBench to avoid relying on a single experimental result. We then validate the performance enhancement from PGO.

Search Benchmark Report by Zilliz Test Platform

The upgrade to Go 1.21 was evaluated on a Milvus Standalone server with 16 CPUs and 64 GB of memory. Search performance was measured on a 1-million-entry local dataset with 768-dimensional vectors and the L2 metric type. The system was tested with 50 concurrent search tasks for 1 hour, at a 20-second interval. A single server was used, rather than comparing two servers, to guarantee the same data source and the same segment state after compaction.

Test Sequence:

  1. Go 1.20 Initial Run: Insert data, build the index, load the index, and search.
  2. Go 1.20 Rebuild: Rebuild the index with the same dataset, load it, and search.
  3. Go 1.21 Load: Upgrade the server to Go 1.21, load the index from the second run, and search.
  4. Go 1.21 Rebuild: Rebuild the index with the same dataset, load it, and search.

Search Metrics:

| Metric | Go 1.20 | Go 1.20 Rebuild Index | Go 1.21 | Go 1.21 Rebuild Index |
|---|---|---|---|---|
| search requests | 10,942,683 | 16,131,726 | 16,200,887 | 16,331,052 |
| search fails | 0 | 0 | 0 | 0 |
| search RT_avg (ms) | 16.44 | 11.15 | 11.11 | 11.02 |
| search RT_min (ms) | 1.30 | 1.28 | 1.31 | 1.26 |
| search RT_max (ms) | 446.61 | 233.22 | 235.90 | 147.93 |
| search TP50 (ms) | 11.74 | 10.46 | 10.43 | 10.35 |
| search TP99 (ms) | 92.30 | 25.76 | 25.36 | 25.23 |
| search RPS | 3,039 | 4,481 | 4,500 | 4,536 |

Key Findings

The benchmark shows negligible variance in index construction: the index build time was 340.39 ms with Go 1.20 versus 337.60 ms with Go 1.21. However, Go 1.21 offers slightly better search performance than Go 1.20, with improvements in handling concurrent tasks and reduced response times.

Search Benchmark Report by VectorDBBench

Following VectorDBBench, we created a VectorDBBench test for Go 1.20 and Go 1.21 and measured search performance (without PGO) on a Milvus Standalone system. The tests used the Cohere dataset with 1 million entries in a 768-dimensional space and the COSINE metric type.
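For reference, a typical way to stand up such a run looks like this (package and command names per the VectorDBBench README; exact versions, extras, and flags may differ):

```bash
# Install the benchmark tool (some versions need a [milvus] extra).
pip install vectordb-bench

# Launch the web UI, then configure a Milvus Standalone target with the
# Cohere 1M / 768-dim dataset and the COSINE metric.
init_bench
```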

Search Metrics:

| Metric | Go 1.20 | Go 1.21 (without PGO) |
|---|---|---|
| Load Duration (seconds) | 1195.95 | 976.37 |
| Queries Per Second (QPS) | 841.62 | 875.89 |
| 99th Percentile Serial Latency (seconds) | 0.0047 | 0.0076 |
| Recall | 0.9487 | 0.9489 |

Key Findings

Go 1.21 shows faster index loading and higher search QPS, although the 99th-percentile serial latency was slightly higher in this run.

PGO Performance Test

Milvus already registers net/http/pprof in its metrics endpoint, so we can fetch the CPU profile directly by running curl -o default.pgo "http://${MILVUS_SERVER_IP}:${MILVUS_SERVER_PORT}/debug/pprof/profile?seconds=${TIME_SECOND}" to collect the profile as default.pgo during the first search. We then build Milvus with PGO and run the search again with the same index. The results are below:
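To confirm that the rebuilt binary actually picked up the profile, the build settings recorded in the binary can be inspected (binary path hypothetical):

```bash
# Go records build settings in the binary; a PGO build shows a
# "build -pgo=..." line pointing at the profile that was used.
go version -m ./bin/milvus | grep pgo
```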

Search Metrics

| Metric | Go 1.21 Without PGO | Go 1.21 With PGO | Change (%) |
|---|---|---|---|
| search requests | 2,644,583 | 2,837,726 | +7.30% |
| search fails | 0 | 0 | N/A |
| search RT_avg (ms) | 11.34 | 10.57 | -6.78% |
| search RT_min (ms) | 1.39 | 1.32 | -5.18% |
| search RT_max (ms) | 349.72 | 143.72 | -58.91% |
| search TP50 (ms) | 10.57 | 9.93 | -6.05% |
| search TP99 (ms) | 26.14 | 24.16 | -7.56% |
| search RPS | 4,407 | 4,729 | +7.30% |

Key Findings

PGO led to a notable enhancement in search performance, particularly in reducing the maximum response time by nearly 59% and increasing search RPS by 7.3%.

Further Analysis

Generate a diff flame graph between the two CPU profiles by running go tool pprof -http=:8000 -diff_base nopgo.pgo pgo.pgo -normalize.
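Spelled out, the comparison works like this (profile file names as above; endpoint variables as in the PGO test):

```bash
# Collect a CPU profile from the baseline (non-PGO) build...
curl -o nopgo.pgo "http://${MILVUS_SERVER_IP}:${MILVUS_SERVER_PORT}/debug/pprof/profile?seconds=60"

# ...and one from the PGO build under the same search workload.
curl -o pgo.pgo "http://${MILVUS_SERVER_IP}:${MILVUS_SERVER_PORT}/debug/pprof/profile?seconds=60"

# -diff_base subtracts the baseline; -normalize scales both profiles to the
# same total so the flame graph shows relative shifts, not absolute time.
go tool pprof -http=:8000 -normalize -diff_base=nopgo.pgo pgo.pgo
```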

[Flame graph: Go profiling diff between the non-PGO and PGO builds]

Further insight into HnswIndexNode and the Milvus search handler:

[Flame graphs: hnsw, search_handler]

After applying PGO to the Milvus server, the CPU utilization of the faiss::fvec_L2 function decreased. This optimization significantly enhances the performance of the HnswIndexNode::Search::searchKnn method, which Knowhere invokes frequently during high-concurrency searches. As the Go release notes explain, the function may be inlined more aggressively by the Go compiler in the second build, guided by the CPU profile collected from the first run. As a result, the search handler efficiency within the Milvus DataNode improved, allowing the server to process more search queries per second (QPS).
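One generic way to spot-check such inlining decisions (not specific to this PR) is the compiler's -m diagnostics, run once without and once with default.pgo in place:

```bash
# Print the compiler's optimization decisions (emitted on stderr) and
# filter for inlining notes; diff the output of the two builds.
go build -gcflags='-m' ./... 2>&1 | grep 'can inline'
```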

Conclusion

The combination of Go 1.21 and PGO has led to substantial improvements in search performance for the Milvus server, particularly in search QPS and response times, making it more efficient at handling high-concurrency search operations.

shaoting-huang avatar May 14 '24 08:05 shaoting-huang

@shaoting-huang

Invalid PR Title Format Detected

Your PR submission does not adhere to our required standards. To ensure clarity and consistency, please meet the following criteria:

  1. Title Format: The PR title must begin with one of these prefixes:
  • feat: for introducing a new feature.
  • fix: for bug fixes.
  • enhance: for improvements to existing functionality.
  • test: for adding tests to existing functionality.
  • doc: for modifying documentation.
  • auto: for pull requests from bots.
  2. Description Requirement: The PR must include a non-empty description detailing the changes and their impact.

Required Title Structure:

[Type]: [Description of the PR]

Where Type is one of feat, fix, enhance, test or doc.

Example:

enhance: improve search performance significantly 

Please review and update your PR to comply with these guidelines.

mergify[bot] avatar May 14 '24 08:05 mergify[bot]

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.13%. Comparing base (1ef975d) to head (74395f0). Report is 45 commits behind head on master.

Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##           master   #33047      +/-   ##
==========================================
- Coverage   82.14%   82.13%   -0.01%     
==========================================
  Files        1006     1006              
  Lines      128452   128452              
==========================================
- Hits       105513   105510       -3     
- Misses      18948    18952       +4     
+ Partials     3991     3990       -1     

see 30 files with indirect coverage changes

codecov[bot] avatar May 14 '24 10:05 codecov[bot]

just curious. Can this process be combined with collecting C++ profiling info as well? Concurrent searches need to be run anyways.

alexanderguzhva avatar May 14 '24 14:05 alexanderguzhva

@shaoting-huang Thanks for your contribution. Please submit with DCO, see the contributing guide https://github.com/milvus-io/milvus/blob/master/CONTRIBUTING.md#developer-certificate-of-origin-dco.
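For reference, a DCO sign-off is just git's -s flag (the commit message and base branch below are illustrative):

```bash
# Sign off the current commit (adds a "Signed-off-by:" trailer).
git commit -s -m "enhance: upgrade go from 1.20 to 1.21"

# Or retroactively sign off the existing commits on the branch.
git rebase --signoff master
```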

mergify[bot] avatar May 15 '24 03:05 mergify[bot]

@shaoting-huang E2e jenkins job failed, comment /run-cpu-e2e can trigger the job again.

mergify[bot] avatar May 15 '24 10:05 mergify[bot]

@shaoting-huang E2e jenkins job failed, comment /run-cpu-e2e can trigger the job again.

mergify[bot] avatar May 16 '24 08:05 mergify[bot]

/approve

Very detailed test result! Go 1.21 seems to improve the search P99 latency a lot.

xiaofan-luan avatar May 16 '24 09:05 xiaofan-luan

@shaoting-huang E2e jenkins job failed, comment /run-cpu-e2e can trigger the job again.

mergify[bot] avatar May 16 '24 10:05 mergify[bot]

@shaoting-huang ut workflow job failed, comment rerun ut can trigger the job again.

mergify[bot] avatar May 16 '24 10:05 mergify[bot]

just curious. Can this process be combined with collecting C++ profiling info as well? Concurrent searches need to be run anyways.

Yes, as the profiling graph above shows, pprof collects both Go and C++ profiles.

shaoting-huang avatar May 16 '24 11:05 shaoting-huang

@shaoting-huang Thanks for your contribution. Please submit with DCO, see the contributing guide https://github.com/milvus-io/milvus/blob/master/CONTRIBUTING.md#developer-certificate-of-origin-dco.

mergify[bot] avatar May 16 '24 11:05 mergify[bot]

rerun ut

smellthemoon avatar May 17 '24 06:05 smellthemoon

/lgtm /approve

xiaofan-luan avatar May 22 '24 05:05 xiaofan-luan

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: shaoting-huang, xiaofan-luan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

sre-ci-robot avatar May 22 '24 05:05 sre-ci-robot