
Publishing pre-compiled Windows wheels on pypi.org

Open martinResearch opened this issue 1 year ago • 6 comments

It would greatly simplify the installation of gsplat on Windows if pre-compiled wheels for Windows were also published on pypi.org. Is this on the roadmap? If not, would you potentially approve a PR to enable this?

martinResearch avatar Aug 15 '24 14:08 martinResearch

Sounds good! Contributions like these are more than welcome.

maturk avatar Aug 15 '24 15:08 maturk

+1

Totoro97 avatar Aug 16 '24 05:08 Totoro97

Following the approach used by cupy to specify the CUDA version (https://pypi.org/project/cupy/), we could publish the packages gsplat-cuda12x and gsplat-cuda11x with precompiled wheels and continue using the package gsplat for the source distribution. How does that sound? I am not sure whether we would also need different packages for different pytorch versions.
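
For reference, cupy's scheme has the user pick the wheel package matching their CUDA toolkit, e.g.

pip install cupy-cuda12x

while pip install cupy remains the plain source distribution.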

martinResearch avatar Aug 16 '24 12:08 martinResearch

I think the most proper way to precompile is to cross CUDA and pytorch versions, similar to this. But gsplat also depends on pytorch fairly lightly, so a wheel precompiled against a specific pytorch version might just work across a wide range of pytorch versions. (Only my guess, never actually tried it out.)

With that, I think I'm supportive of the idea of precompiling it for cuda12 and cuda11. Note this means it needs to compile over a long list of TORCH_CUDA_ARCH_LIST to support various types of GPUs.
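
For illustration, a rough sketch of how such an arch list could be set in a build script (the exact set of architectures below is just an example, not gsplat's actual configuration):

```python
# Hypothetical build-script snippet: torch.utils.cpp_extension reads the
# TORCH_CUDA_ARCH_LIST environment variable to decide which GPU architectures
# the extension is compiled for. The list below only illustrates covering
# Volta through Hopper; it is not gsplat's actual setting.
import os

os.environ.setdefault("TORCH_CUDA_ARCH_LIST", "7.0;7.5;8.0;8.6;8.9;9.0+PTX")
```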

As for hosting the precompiled wheels on pypi, is it possible to put the wheels into the same pypi repo and install via something like pip install gsplat[cu11] or pip install gsplat[cu12]? Basically, make it pull the compiled .so or .lib from somewhere when some argument is specified with pip. Putting things in the same repo would avoid the case where a user installs both gsplat and gsplat-cuda12x in the same environment, where it would be confusing which package import gsplat is actually importing from.

liruilong940607 avatar Aug 16 '24 17:08 liruilong940607

To help the conversation, I tried to list the different options with their potential drawbacks.

Option 1

We make the pytorch and CUDA versions explicit in the package version name and publish to pypi.org/gsplat. One would use pip install gsplat==1.2.0+pt20cu118, for example. This would allow providing precompiled wheels for all these combinations:

| Windows & Linux | cu113 | cu115 | cu116 | cu117 | cu118 |
| --- | --- | --- | --- | --- | --- |
| torch 1.11.0 | | | | | |
| torch 1.12.0 | | | | | |
| torch 1.13.0 | | | | | |
| torch 2.0.0 | | | | | |

One drawback is that one would need to pin the gsplat version in one's requirements files and then manually update it whenever a new gsplat version is published.

Option 2

We make only the CUDA version explicit in the package version name (not the pytorch version) and publish to pypi.org/gsplat. One would use pip install gsplat==1.2.0+cu118, for example. For each CUDA version we support a single pytorch version: the most recent one compatible with that CUDA version. We would then support only:

| Windows & Linux | cu113 | cu115 | cu116 | cu117 | cu118 |
| --- | --- | --- | --- | --- | --- |
| torch 1.11.0 | | | | | |
| torch 1.12.0 | | | | | |
| torch 1.13.0 | | | | | |
| torch 2.0.0 | | | | | |

Like option 1, one drawback is that one would need to pin the gsplat version in one's requirements files and manually update it when a new gsplat version is published. It seems that the package torchvision uses this approach.
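
For reference, torchvision's CUDA-tagged wheels (versions ending in e.g. +cu118) are installed from a per-CUDA index, e.g.

pip install torchvision --index-url https://download.pytorch.org/whl/cu118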

Option 3

We follow the approach used by [cupy](https://pypi.org/project/cupy/) to specify the CUDA version. We publish the packages gsplat-cuda12x and gsplat-cuda11x (as separate pypi projects) with precompiled wheels and continue using the package gsplat for the source distribution. Like option 2, for each CUDA version we support a single pytorch version, the most recent one compatible with that CUDA version. One drawback of this approach is that if the user installs both gsplat-cuda11x and gsplat-cuda12x in the same environment, both packages will create the folder gsplat under the environment's Lib\site-packages folder and there could be file name collisions in that folder. I am not sure how cupy deals with this problem.
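
One possible mitigation would be an import-time check; a minimal sketch (whether cupy actually does something like this is an assumption I have not verified):

```python
# Hypothetical guard in gsplat/__init__.py: detect that more than one binary
# variant package has been installed into the same site-packages folder and
# fail loudly instead of silently mixing files.
from importlib import metadata


def _installed(dist: str) -> bool:
    try:
        metadata.version(dist)
        return True
    except metadata.PackageNotFoundError:
        return False


variants = [d for d in ("gsplat-cuda11x", "gsplat-cuda12x") if _installed(d)]
if len(variants) > 1:
    raise ImportError(
        f"Conflicting gsplat binary packages installed: {variants}. "
        "Please uninstall all but one of them."
    )
```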

Option 4

We follow the approach suggested by @liruilong940607: use package extras to specify the CUDA version, i.e. pip install gsplat[cu11] or pip install gsplat[cu12]. It seems that we would need to support a single pytorch version for each CUDA version (the most recent one compatible with that CUDA version, as in options 2 and 3), as it does not seem possible to specify both the CUDA version and the pytorch version as extras. This approach does not correspond to the original use case for which extras were designed, so I am not sure what the drawbacks might be. It is also unclear to me whether we can prevent someone from using pip install gsplat[cu12,cu11], for example. Are there other libraries that use this approach?
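
A minimal setup.py sketch of what the extras route might look like, assuming hypothetical binary-only companion packages (the package names below are made up):

```python
# Hypothetical setup.py for option 4: the base gsplat package stays a source
# distribution, and each extra only adds a dependency on a made-up companion
# package that would ship the pre-built extension binaries.
from setuptools import setup

setup(
    name="gsplat",
    version="1.2.0",
    packages=["gsplat"],
    extras_require={
        "cu11": ["gsplat-binaries-cu118"],  # hypothetical companion package
        "cu12": ["gsplat-binaries-cu121"],  # hypothetical companion package
    },
)
```

Note that extras are purely additive, so pip install gsplat[cu11,cu12] would simply pull in both companion packages; pip itself would not reject the combination.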

Are there other options?

martinResearch avatar Aug 17 '24 16:08 martinResearch

It seems that some use cases require JIT compilation (for example the test test_rasterize_to_pixels https://github.com/nerfstudio-project/gsplat/blob/45d196a3611e627c498f237c27014cdaf42d94d2/tests/test_basic.py#L442), is that correct? In that case providing pre-compiled wheels might be of limited interest. Any thoughts on that?

martinResearch avatar Aug 21 '24 17:08 martinResearch

I experimented with option 1 listed above by publishing the precompiled wheels directly in the GitHub repo releases and got it working after many iterations :) Draft PR: https://github.com/nerfstudio-project/gsplat/pull/365. One can use the following line to install the latest precompiled wheel for pytorch 2.0 and cuda 11.8, using the wheels released in my fork:

pip install gsplat --index-url https://martinresearch.github.io/gsplat/whl/pt20cu118

One can also omit the pt20cu118 suffix from the index URL and use

pip install gsplat==1.2.0+pt20cu118 --index-url https://martinresearch.github.io/gsplat/whl

but then the full version needs to be specified to make sure one gets the right pytorch and cuda versions.
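
The same thing can be put in a requirements file for downstream projects; a hedged example using the commands above (the index URL currently points at my fork's pages):

```
# hypothetical requirements.txt entry; URL and version tag taken from the commands above
--index-url https://martinresearch.github.io/gsplat/whl
gsplat==1.2.0+pt20cu118
```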

The full list of pre-compiled wheels can be found here. Is this a solution you would potentially be happy to merge into your repository?

martinResearch avatar Aug 23 '24 15:08 martinResearch

Closing as we now have prebuilt wheels supported. Thanks @martinResearch !

liruilong940607 avatar Sep 27 '24 07:09 liruilong940607