xtensor
Allow installation via pip?
Currently, xtensor can be installed via Conda. However, pip's installation mechanisms have grown quite advanced and well designed, so it would be useful to let users install xtensor via (e.g.) wheel metadata from some other package, or simply a requirements.txt managed by pip rather than conda.
If xtensor provided a wheel file, the conda installation would continue to "just work" while pip users get the wheel, and the wheel format is (IMO) very well engineered and a joy to work with. (Feature request.)
Thanks for the suggestions @NAThompson . I must say that I get a bit confused ;)
pip belongs to Python. I suppose that it could install xtensor's headers in Python's own include directory, for the headers to be found when building with setuptools. That sounds like a fair mechanism that I'd be 'happy' to support (depending a bit on the cleanliness of the code; I have had some bad experience with Python installers). Indeed I used to build my Python extensions that way (https://github.com/tdegeus/pyxtensor/blob/master/setup.py ), before I switched to CMake + scikit-build, because I did not like the lack of customisation options and because I preferred maintaining only one build system.
Now, our conda package is something different. It installs xtensor's headers in a 'system' include directory of the environment, and likewise installs CMake support. I would not think that Python could do that, so unless I'm mistaken these are really two different things?
@tdegeus : I should give you some more context: I have a pybind11-based library which has non-trivial code on both sides of the language boundary, and it is distributed as a Python wheel file which deploys the Python code, CMake files, headers, and libraries. My users also write code on both sides of the language boundary.
So when we install (say) pybind11, we do it via pip: the CMake files go into `VIRTUAL_ENV/share/cmake` and the headers go into `VIRTUAL_ENV/include`, and my users don't have to do anything extra, since `pip install pybind11[global]` puts all the C++ stuff where cmake expects it.
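The environment-relative locations being described can be inspected with the standard `sysconfig` module. This is only a sketch of the layout convention: the `share/cmake` subdirectory is an assumption about where a package like pybind11 ships its CMake configs, and CMake still needs the environment prefix on its search path (e.g. via `CMAKE_PREFIX_PATH`) to find them.

```python
# Sketch: inspect where pip-installed files land relative to the
# environment. In an activated virtualenv, the "data" path is
# $VIRTUAL_ENV, so a package installing CMake configs under
# <data>/share/cmake places them inside the env prefix.
import os
import sysconfig

data_dir = sysconfig.get_path("data")        # env root, e.g. $VIRTUAL_ENV
include_dir = sysconfig.get_path("include")  # C/C++ header destination

# Assumed layout for a package shipping CMake support files:
cmake_dir = os.path.join(data_dir, "share", "cmake")

print("data:   ", data_dir)
print("include:", include_dir)
print("cmake:  ", cmake_dir)
```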
Now, if I want my users to publicly inherit the xtensor dependency, I have to vendor the headers and deploy them with my wheel. Is that a disaster? Well, no, it absolutely works; it's just a bit inelegant, since I also need to vendor xtensor-blas.
Anyway, the key idea is that my users need to build with CMake in a virtualenv, and a pip-installable wheel file is perfect for that.
I did not know that could be done. It sounds completely reasonable to support this. What would it involve?
I glanced at pybind11. How does it work with CMake, i.e. how is the CMake target configured?
Would you be willing to contribute a PR?
Hi, I am not in favor of this. Pip is a Python package manager, not a generic one. Building wheels for native packages can quickly become a nightmare (OK, maybe not here, since it's a header-only library, but as soon as compilation is involved and you have dependencies on other native packages, the "fun" begins); more generally, package managers should be agnostic to the language, not specific to one.
Independently from my point of view, if such a package exists, it should not be intrusive and should live outside of the xtensor repo (like the packages for Conda, Debian or Fedora).
An alternative solution could be to package xtensor-python for pip (and have it vendor xtensor); this would be consistent with what we've done for Julia and R (although I really dislike this idea; we did it because we didn't have the choice). This should be done in a dedicated repo, though (meaning we would probably split xtensor-python into a pure C++ package and a pure Python package vendoring the C++ dependencies).