
Binary distribution for cuQuantum support

Open doichanj opened this issue 3 years ago • 3 comments

Summary

Add binary distribution for cuQuantum support

Details and comments

Added a binary build workflow in .github/workflows/deploy.yml for cuQuantum support.

Users will be able to install it with pip install qiskit-aer-cuQuantum

doichanj avatar Jun 15 '22 10:06 doichanj

https://github.com/doichanj/qiskit-aer/runs/7084200066?check_suite_focus=true

doichanj avatar Jun 28 '22 14:06 doichanj

I think the large file size issue is a common problem for CUDA 11 builds, but cuQuantum requires CUDA 11 or above.

https://discuss.python.org/t/what-to-do-about-gpus-and-the-built-distributions-that-support-them/7125

doichanj avatar Jun 29 '22 08:06 doichanj

Since we're hitting issues with the PyPI file size limit, maybe we can use https://github.com/softprops/action-gh-release to attach the wheels as release artifacts on GitHub for the time being.
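For reference, attaching the wheels to a GitHub release could look roughly like the fragment below; the job name, artifact name, and tag condition are placeholders and not taken from this PR's deploy.yml.

```yaml
# Hypothetical fragment for .github/workflows/deploy.yml; names are
# illustrative, not part of this PR.
jobs:
  release-gpu-wheels:
    runs-on: ubuntu-latest
    steps:
      # Fetch the wheels built in an earlier job of the workflow
      - uses: actions/download-artifact@v3
        with:
          name: gpu-wheels
          path: dist
      # Attach them to the GitHub release for the pushed tag
      - name: Upload wheels to GitHub release
        uses: softprops/action-gh-release@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          files: dist/*.whl
```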

mtreinish avatar Jul 26 '22 17:07 mtreinish

@doichanj we're preparing detailed instructions for cuQuantum downstream to avoid hitting the file size limit, please stay tuned 🙂

Generally, if it's only ~300 MB it's possible to request a size limit bump by submitting a ticket to https://github.com/pypa/pypi-support/issues, but we do not consider that a nice experience for users and developers. We'd rather advocate for the following changes:

  • use dynamic instead of static linking
    • cuQuantum already has wheels such as custatevec-cu11 and cutensornet-cu11 living on PyPI
    • NVIDIA CUDA wheels are also up (nvidia-*-cu11 for CUDA 11 and nvidia-*-cu12 for CUDA 12, e.g. nvidia-cublas-cu11)
  • use auditwheel repair --exclude (https://github.com/pypa/auditwheel/pull/368) to whitelist certain shared libraries
    • it's possible to skip the repair step entirely if you already build the wheel in a manylinux container and meet certain conditions
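The dynamic-linking route above can be sketched roughly as follows; the library soname and wheel filename used here are assumptions for illustration, not taken from this PR.

```shell
# Sketch only; the soname (libcustatevec.so.1) and the wheel name are
# assumed, adjust them to your actual build.
pip install auditwheel

# Exclude the cuQuantum library from bundling so it is resolved at
# runtime from the custatevec-cu11 wheel instead of being vendored
# into (and bloating) our wheel.
auditwheel repair \
    --plat manylinux2014_x86_64 \
    --exclude libcustatevec.so.1 \
    -w wheelhouse \
    dist/qiskit_aer_gpu-*.whl
```

The resulting wheel then declares custatevec-cu11 as a runtime dependency rather than shipping the shared library itself.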

However, there are some caveats in terms of the build system; we'll try to explain them in the upcoming instructions. This is how our cuquantum-python-cuXX wheels are kept at a very comfortable size.

leofang avatar Dec 16 '22 18:12 leofang

> @doichanj we're preparing detailed instructions for cuQuantum downstream to avoid hitting the file size limit, please stay tuned 🙂

FYI: https://github.com/NVIDIA/cuQuantum/tree/main/extra/demo_build_with_wheels, let us know if it helps! 🙂

leofang avatar Jan 26 '23 14:01 leofang

No longer necessary because NVIDIA provides an appliance of qiskit-aer-gpu with cuQuantum.

hhorii avatar Apr 18 '23 13:04 hhorii

Hi @hhorii, would it be possible to keep this PR open? Even with our cuQuantum Appliance container, we can't cover all use cases, and according to our understanding (@tlubowe knows far better than I do 🙂) the majority of users still want a simple pip-install based solution, without a container. If you don't have bandwidth for this PR, maybe @tlubowe can find someone on our side to help?

leofang avatar Apr 18 '23 14:04 leofang