Feature Request: Implement a Benchmark Page and a Built-in Benchmark Command
Background
As the ABACUS community grows, users with diverse hardware and software environments are joining the project. For new users, it is often challenging to:
- Estimate the performance they can expect from their specific hardware (CPU, GPU, memory).
- Make informed decisions when purchasing new hardware for running ABACUS.
- Optimally configure their parallelization settings (MPI processes vs. OpenMP threads).
- Understand the performance implications of using different compilers (e.g., GNU vs. Intel) or different ABACUS versions.
Currently, there is no centralized, official resource for performance data, which makes the initial setup and optimization process a matter of trial and error for many. Mature, large-scale scientific software packages often provide benchmark data or tools to address these issues, enhancing user experience and providing valuable insights for both users and developers.
Describe the solution you'd like
To address this, I would like to propose the implementation of a two-part benchmark feature:
Part 1: A Public Benchmark Webpage
I suggest creating a dedicated "Benchmark" page on the official ABACUS website or GitHub Wiki. This page would serve as a public repository for performance data, collected from various standardized test cases.
The page should ideally present a comparison matrix, detailing the average time per electronic/SCF step for a consistent set of calculations. The comparison should cover variables such as:
- Hardware:
  - CPU models (e.g., Intel Xeon Gold 6248R, AMD EPYC 7742)
  - GPU models (if applicable)
  - Memory configuration
- Software environment:
  - Compiler (e.g., GCC 9.3, Intel oneAPI 2022)
  - MPI library (e.g., OpenMPI, Intel MPI)
  - ABACUS version
- Parallelization:
  - Number of MPI processes
  - Number of OpenMP threads per process
- Calculation type (for a fixed system, e.g., a 64-atom Si bulk):
  - LCAO with an "efficient" basis set
  - LCAO with a "precision" basis set
  - Plane-wave (PW) basis with a given energy cutoff
This would provide an invaluable reference for users to gauge expected performance, configure their systems, and track performance improvements across new ABACUS releases.
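To keep contributed results consistent and machine-readable, the page could be backed by a fixed record schema. Below is a minimal sketch in Python of what one entry in the comparison matrix might look like; the field names and the JSON submission format are assumptions for illustration, not an existing ABACUS convention, and the values shown are placeholders.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class BenchmarkRecord:
    """One row of the proposed comparison matrix (all field names illustrative)."""
    cpu_model: str                 # e.g. "Intel Xeon Gold 6248R"
    gpu_model: Optional[str]       # None for CPU-only runs
    memory: str                    # e.g. "8x32 GB DDR4-2933"
    compiler: str                  # e.g. "GCC 9.3"
    mpi_library: str               # e.g. "OpenMPI 4.1"
    abacus_version: str
    mpi_processes: int
    omp_threads: int               # OpenMP threads per MPI process
    test_case: str                 # e.g. "Si64 LCAO, 'efficient' basis"
    seconds_per_scf_step: float    # averaged over all SCF steps

# A placeholder entry, as a contributor might submit it:
record = BenchmarkRecord(
    cpu_model="Intel Xeon Gold 6248R",
    gpu_model=None,
    memory="8x32 GB DDR4-2933",
    compiler="GCC 9.3",
    mpi_library="OpenMPI 4.1",
    abacus_version="3.x",
    mpi_processes=16,
    omp_threads=4,
    test_case="Si64 LCAO, 'efficient' basis",
    seconds_per_scf_step=12.3,
)
print(json.dumps(asdict(record), indent=2))
```

Records in such a format could be collected via pull requests or a submission form and rendered into the comparison matrix automatically.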
Part 2: A Built-in Benchmark Command
In addition to a static webpage, a built-in benchmark command would empower users to assess the performance of their own specific hardware and environment.
This could be implemented as a simple command, for example `abacus --benchmark`.
This command would:
- Run one or more pre-defined, standardized calculations that are packaged with the software.
- Provide a clear and concise output summarizing the system's processing capability. For example: "On this machine, a 100-atom system with 4 k-points takes an average of X seconds per SCF step."
- (Advanced feature) Automatically test and suggest an optimal combination of MPI processes and OpenMP threads for the user's current node configuration. For instance, it could run a short test with different `OMP_NUM_THREADS` settings to find the sweet spot for a given number of MPI ranks on a node (a sketch follows this list).
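For the advanced feature, the thread/rank sweep could look roughly like the following Python sketch. Everything here is an assumption for illustration: ABACUS currently has no `--benchmark` mode, the packaged input directory `benchmark_case` is hypothetical, and total wall time on a short fixed case stands in for parsed per-SCF-step timings.

```python
# Hypothetical sweep over OMP_NUM_THREADS on one node, keeping
# ranks * threads equal to the number of available cores.
import os
import subprocess
import time

cores_per_node = os.cpu_count() or 1
results = {}

for omp in (1, 2, 4, 8):
    if cores_per_node % omp:
        continue  # skip thread counts that do not divide the core count
    ranks = cores_per_node // omp
    env = dict(os.environ, OMP_NUM_THREADS=str(omp))
    start = time.perf_counter()
    # Hypothetical invocation: a short packaged test case in ./benchmark_case
    subprocess.run(
        ["mpirun", "-np", str(ranks), "abacus"],
        env=env, cwd="benchmark_case", check=True,
        stdout=subprocess.DEVNULL,
    )
    results[(ranks, omp)] = time.perf_counter() - start

best = min(results, key=results.get)
print(f"Suggested configuration for this node: "
      f"{best[0]} MPI ranks x {best[1]} OpenMP threads "
      f"({results[best]:.1f} s wall time for the test case)")
```

In a real implementation the command would parse ABACUS's own timing output to report seconds per SCF step, as in the example message above, rather than relying on total wall time.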
Conclusion
Implementing these benchmark features would significantly lower the barrier to entry for new users, provide critical guidance for hardware configuration, and offer a systematic way to track the software's performance evolution. It is a hallmark of a mature and user-friendly scientific computing package and would be a highly valuable addition to the ABACUS project.
Task list only for developers
- [ ] Notice possible changes of behavior
- [ ] Explain the changes of codes in core modules of ESolver, HSolver, ElecState, Hamilt, Operator or Psi
Notice Possible Changes of Behavior (Reminder only for developers)
No response
Notice any changes of core modules (Reminder only for developers)
No response
Additional Context
No response
Task list for Issue attackers (only for developers)
- [ ] Review and understand the proposed feature and its importance.
- [ ] Research on the existing solutions and relevant research articles/resources.
- [ ] Discuss with the team to evaluate the feasibility of implementing the feature.
- [ ] Create a design document outlining the proposed solution and implementation details.
- [ ] Get feedback from the team on the design document.
- [ ] Develop the feature following the agreed design.
- [ ] Write unit tests and integration tests for the feature.
- [ ] Update the documentation to include the new feature.
- [ ] Perform code review and address any issues.
- [ ] Merge the feature into the main branch.
- [ ] Monitor for any issues or bugs reported by users after the feature is released.
- [ ] Address any issues or bugs reported by users and continuously improve the feature.
Good job! This will also become part of the ABACUS knowledge & case base for the ABACUS Agent. I do think this benchmark is needed. @dyzheng @mohanchen Do you have more ideas or an implementation plan?
This is a good idea; we should consider it.
Currently we don't have enough developers to do so.
I think we could launch an ABACUS benchmark initiative: set up a questionnaire for community users to contribute results, and maintain the case base and benchmark data for the use of both users and developers.