
Downloadable Package on PyPI

Open WoosukKwon opened this issue 11 months ago • 5 comments

Thanks again for the nice project! Are you interested in uploading the wheels (for CUDA 12.1) to PyPI? This will help users manage the dependency on the FlashInfer library.

WoosukKwon avatar Mar 04 '24 20:03 WoosukKwon

> Thanks again for the nice project! Are you interested in uploading the wheels (for CUDA 12.1) to PyPI? This will help users manage the dependency on the FlashInfer library.

@WoosukKwon Perhaps we can temporarily resolve this with a command such as:

pip3 install https://github.com/flashinfer-ai/flashinfer/releases/download/v0.0.2/flashinfer-0.0.2+cu121torch2.1-cp39-cp39-linux_x86_64.whl

@yzh119 If we want to support publishing to PyPI, we can refer to https://github.com/InternLM/lmdeploy/blob/main/.github/workflows/pypi.yml.

zhyncs avatar Mar 05 '24 05:03 zhyncs
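For dependency management, the same release wheel can also be pinned by direct URL (a PEP 508 direct reference); a minimal sketch, assuming the v0.0.2 cu121/torch2.1 wheel above is the one needed:

# pin the prebuilt wheel by direct URL; the quoted "name @ url" form also works as a line in requirements.txt
pip install "flashinfer @ https://github.com/flashinfer-ai/flashinfer/releases/download/v0.0.2/flashinfer-0.0.2+cu121torch2.1-cp39-cp39-linux_x86_64.whl"

Note that the wheel tags (cp39, linux_x86_64) fix the Python version and platform, so each environment needs the matching wheel URL.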

Hi @WoosukKwon, thanks for the suggestion. My only concern is the binary size: there are many combinations of Python version + CUDA version + PyTorch version (each wheel is ~500 MB), and I have received warnings in the past because of the large binary size. Did vLLM upload all of its wheels to PyPI?

@zhyncs thanks for your reference:

> Perhaps we can temporarily resolve this with a command such as ...

PyPI has the unique advantage that other packages can declare flashinfer as a dependency, so I do think it's preferable to upload flashinfer to PyPI.

yzh119 avatar Mar 05 '24 14:03 yzh119
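For reference, uploading to PyPI ultimately comes down to a build-and-upload step; a minimal sketch using standard build/twine tooling (a CI workflow like the one linked above would presumably run something equivalent across a matrix of Python/CUDA/PyTorch versions):

# build an sdist and wheel from the repo root, then upload to PyPI
python -m pip install build twine
python -m build
python -m twine upload dist/*

Per-CUDA/per-Torch wheels would still need one such build per combination, which is exactly where the size concern above comes from.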

@yzh119 I see. What we need at the moment are Python 3.8-3.11 wheels built for PyTorch 2.1.2 + CUDA 12.1. That said, we agree that maintaining compatibility between the two libraries is quite tricky.

Alternatively, we're considering importing FlashInfer as a submodule and building the kernels ourselves. However, we found that FlashInfer's compilation time is too long (30+ minutes on our machine). Do you have any ideas for reducing the build time?

WoosukKwon avatar Mar 06 '24 00:03 WoosukKwon
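One common way to cut the compilation time of CUDA extensions, sketched here on the assumption that FlashInfer's build goes through torch.utils.cpp_extension (which honors MAX_JOBS and TORCH_CUDA_ARCH_LIST):

# raise the number of parallel compile jobs and restrict the target GPU architectures
export MAX_JOBS=16                       # parallel nvcc/ninja jobs; scale to available CPU cores
export TORCH_CUDA_ARCH_LIST="8.0;9.0"    # e.g. only A100 + H100 instead of every supported arch
pip install --no-build-isolation -e .    # build from the submodule checkout

Restricting the architecture list usually has the largest effect, since each extra architecture multiplies the nvcc work.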

@yzh119 Also, do you mind if the vLLM team hosts specific PyTorch + CUDA builds of FlashInfer on PyPI under a name like vllm-flashinfer-mirror? This would give us more control over the compatibility issue.

WoosukKwon avatar Mar 06 '24 07:03 WoosukKwon

> @yzh119 Also, do you mind if the vLLM team hosts specific PyTorch + CUDA builds of FlashInfer on PyPI under a name like vllm-flashinfer-mirror? This would give us more control over the compatibility issue.

Sounds good.

zhyncs avatar Mar 06 '24 08:03 zhyncs

FlashInfer currently needs to support Python 3.8 to 3.12; CUDA 11.8, 12.1, and 12.4; and Torch 2.1 to 2.4. The number and size of the wheels exceed PyPI's limits, so please follow the recommended installation method at https://docs.flashinfer.ai/installation.html

zhyncs avatar Aug 27 '24 05:08 zhyncs
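For completeness, the documented installation method installs prebuilt wheels from FlashInfer's own wheel index instead of PyPI; a sketch, assuming the index URL pattern used in the docs (CUDA 12.1 + Torch 2.4 shown as an example; see the installation page for the exact URL for your setup):

# install a prebuilt wheel from the FlashInfer wheel index (check docs.flashinfer.ai for the exact index URL)
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/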