PyPI Packaging for C++ Extension
I have developed a C++ pytorch extension that I want to deploy on PyPI.
It currently isn't possible to build a manylinux-compatible wheel that depends on pytorch using the semi-official instructions (https://github.com/pypa/manylinux), because pytorch requires a newer glibc version than manylinux allows (or at least that's the first problem I ran into).
Since the pytorch wheels work well despite not being fully manylinux compliant, I think a similar procedure should be fine for extensions. Currently I just build on my machine and rename the wheel from linux to manylinux, but then I can't run auditwheel to include the dependencies.
Is there a known procedure for doing this?
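For reference, the rename trick described above is just a file rename, since the platform tag is encoded in the wheel's filename. A minimal sketch (the package name `myext` and the tags are hypothetical placeholders; a real build would produce the wheel rather than `touch` it):

```python
import os, tempfile

# Stand-in for a freshly built wheel (hypothetical project "myext").
dist = tempfile.mkdtemp()
src = os.path.join(dist, "myext-0.1.0-cp39-cp39-linux_x86_64.whl")
open(src, "wb").close()

# The platform tag is part of the filename, so retagging is just a rename.
# The binaries inside are untouched, so glibc compliance is on you to verify.
dst = src.replace("linux_x86_64", "manylinux2014_x86_64")
os.rename(src, dst)
```

This is exactly why auditwheel complains afterwards: the tag now claims a compatibility level that nothing has actually checked.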
This is what we do instead of using auditwheel: https://github.com/pytorch/builder/blob/master/manywheel/build_common.sh#L231-L323
- we unzip the wheel
- we copy over the dependencies
- we patch the RPATH for these dependencies
- we generate a new RECORD file in the wheel with the new files + hashes
- we zip back the wheel
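Those steps can be sketched as a standalone script using only the standard library. This is not the linked build script, just an illustration of the same procedure; the package and library names are made up, and the RPATH patching step is left as a comment because it needs real ELF files and a `patchelf` binary:

```python
import base64, csv, hashlib, os, shutil, zipfile

def repack_wheel(wheel_path, deps, libs_dir="myext.libs"):
    """Unzip a wheel, bundle extra shared libraries, rewrite RECORD, re-zip."""
    work = wheel_path + ".unpacked"
    # 1. Unzip the wheel.
    with zipfile.ZipFile(wheel_path) as zf:
        zf.extractall(work)
    # 2. Copy the dependencies in.
    dest = os.path.join(work, libs_dir)
    os.makedirs(dest, exist_ok=True)
    for dep in deps:
        shutil.copy(dep, dest)
    # 3. Patch the RPATH so the bundled copies are found at runtime
    #    (needs real ELF files, so only shown here):
    #    patchelf --set-rpath '$ORIGIN/../myext.libs' myext/_C.so
    # 4. Regenerate RECORD with the new files + hashes.
    record = next(
        os.path.join(work, d, "RECORD")
        for d in os.listdir(work) if d.endswith(".dist-info")
    )
    with open(record, "w", newline="") as out:
        writer = csv.writer(out)
        for root, _, files in os.walk(work):
            for name in files:
                path = os.path.join(root, name)
                rel = os.path.relpath(path, work)
                if path == record:
                    writer.writerow([rel, "", ""])  # RECORD never hashes itself
                    continue
                digest = hashlib.sha256(open(path, "rb").read()).digest()
                b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
                writer.writerow([rel, "sha256=" + b64, os.path.getsize(path)])
    # 5. Zip the wheel back up.
    with zipfile.ZipFile(wheel_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(work):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, work))
    shutil.rmtree(work)
```

The RECORD rewrite matters: pip verifies the sha256 hashes at install time, so a wheel whose contents were edited without regenerating RECORD will fail to install.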
I actually have something that almost works using manylinux2014, but the annoying thing is that auditwheel keeps trying to add all of the pytorch libraries into my wheel. I think that's unnecessary, since anyone using my library will have pytorch pip-installed already.
So what I ended up doing was using the manylinux2014 docker image (quay.io/pypa/manylinux2014_x86_64) and statically linking my only C dependency besides pytorch. This seems to be working, since the manylinux2014 image has pretty wide glibc compatibility and also works with pytorch. The static linking trick may not work for everyone: if the project has a lot of dependencies it may not be practical, and you'd need to fix up the wheel by copying the dependencies in.
One thing that would help is to factor out your wheel fixup method into something standalone that other people could use.
I realise this is an old issue, but the state of play is essentially the same nearly two years on. I really think an easy solution to this problem could be a nice boost for the pytorch ecosystem.