xmake cannot find packages installed by conda
Xmake version
v2.8.6
Operating system version and architecture
Ubuntu 18.04.6 LTS
Describe the problem
#1314 says that support for conda packages was added, but after installing packages with conda install, they still cannot be found by xmake.
The conda environment was created with the following commands:
conda create -n test python=3.10 cudnn
conda activate test
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
Dependencies on python and pytorch were added in xmake.lua:
add_requires("conda::python", {alias = "python", system = true})
add_requires("conda::pytorch", {alias = "libtorch", system = true})
After running xmake -vD, it still reports that the python and pytorch packages cannot be found.
Is some setting needed to specify the conda environment in use, or is there a problem with how my xmake.lua is written?
Expected results
xmake can find the python and pytorch packages in the conda environment.
Project configuration
add_rules("mode.debug", "mode.release")
set_languages("cxx14")
add_requires("conda::python", {alias = "python", system = true}) add_requires("conda::pytorch", {alias = "libtorch", system = true}) add_requires("openmp")
target("ops") add_packages("libtorch", "openmp", "python") set_kind("shared") add_includedirs("ops/include") add_files("ops//*.cpp", "ops//*.cu") add_cugencodes("native")
Additional information and error logs
xmake -vD
Remove the system = true; that option looks for system libraries, not conan packages.
@waruqi Removing it still doesn't work; the conda packages still cannot be found.
My xmake.lua now looks like this:
add_requires("conda::python", {alias = "python"})
add_requires("conda::pytorch", {alias = "libtorch"})
And here are python and pytorch in the output of conda list:
python 3.10.13 h955ad1f_0 defaults
pytorch 1.13.1 py3.10_cuda11.6_cudnn8.3.2_0 pytorch
pytorch-cuda 11.6 h867d48c_1 pytorch
pytorch-mutex 1.0 cuda pytorch
Post the complete -vD logs.
checking for platform ... linux
checking for architecture ... x86_64
checking for gcc ... /usr/bin/gcc
checkinfo: cannot runv(zig version), No such file or directory
checking for zig ... no
checkinfo: cannot runv(zig version), No such file or directory
checking for zig ... no
checking for unzip ... /usr/bin/unzip
checking for git ... /usr/bin/git
checking for gzip ... /bin/gzip
checking for tar ... /bin/tar
checking for ping ... /bin/ping
pinging the host(gitee.com) ... 46 ms
pinging the host(gitlab.com) ... 268 ms
pinging the host(github.com) ... 65535 ms
/usr/bin/git rev-parse HEAD
checking for gcc ... /usr/bin/gcc
checking for the c compiler (cc) ... gcc
checking for gcc ... /usr/bin/gcc
checking for the c++ compiler (cxx) ... gcc
checking for xmake-repo::openmp ... openmp
finding python from conda ..
checking for conda ... /usr/share/anaconda3/condabin/conda
checking for conda::python ... no
finding pytorch from conda ..
checking for conda::pytorch ... no
note: install or modify (m) these packages (pass -y to skip confirm)?
in conda:
-> conda::python latest
-> conda::pytorch latest
please input: y (y/n/m)
@waruqi Thanks in advance for your help.
Looks fine to me; just install them as it prompts. Don't run the conan install manually, the two are not necessarily related.
@waruqi I went back and tried it. I changed my xmake.lua to this:
add_requires("conda::python 3.10.11", {alias = "python"})
add_requires("conda::pytorch 1.13.1", {alias = "libtorch"})
When xmake installs the conda package, it just calls conda install python to do the installation. But my conda environment already has python, so conda says "All requested packages already installed.", and then xmake still cannot find that python.
Creating a fresh conda environment does work, but I still ran into several problems.
Although packages like python can be installed directly, some packages such as pytorch are more complicated to install. pytorch should be installed with a command like this:
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
but the command xmake uses is
/usr/share/anaconda3/condabin/conda install -y -v pytorch=1.13.1
1. How should the pytorch and nvidia channels be added in xmake.lua? I tried changing the pytorch line to the following:
add_requires("conda::pytorch 1.13.1", {alias = "libtorch", configs={channel={"pytorch", "nvidia"}}})
Although xmake seems to detect the channel option, it still uses
/usr/share/anaconda3/condabin/conda install -y -v pytorch=1.13.1
to do the installation.
2. I would like conda to install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 together with a single command, to make sure the GPU build of pytorch is selected. How should xmake.lua be written so that xmake uses a single conda install to install multiple packages and resolve their dependencies together?
I tried writing it directly like this:
add_requires("conda::pytorch 1.13.1", {alias = "libtorch", configs={channel={"pytorch", "nvidia"}}}, "conda::pytorch-cuda 11.6", {alias = "pytorch-cuda", configs={channel={"pytorch", "nvidia"}}})
or dropping the configs in the middle:
add_requires("conda::pytorch 1.13.1", "conda::pytorch-cuda 11.6")
xmake still installs the two separately.
Writing it like this:
add_requires("conda::pytorch 1.13.1 conda::pytorch-cuda 11.6")
xmake interprets it as
/usr/share/anaconda3/condabin/conda install -y -v "pytorch=1.13.1 conda::pytorch-cuda 11.6"
and then conda cannot find the package and reports an error.
3. The includedirs found for the installed pytorch are incomplete. Currently the only one found is "/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/include":
{
linkdirs = {
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/functorch",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib"
},
sysincludedirs = {
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/include"
},
links = {
"c10",
"shm",
"torch",
"torch_cpu",
"torch_global_deps",
"torch_python"
},
version = "1.13.1",
libfiles = {
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/functorch/_C.cpython-310-x86_64-linux-gnu.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/_C.cpython-310-x86_64-linux-gnu.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/_C_flatbuffer.cpython-310-x86_64-linux-gnu.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib/libc10.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib/libshm.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib/libtorch.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib/libtorch_global_deps.so",
"/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/lib/libtorch_python.so"
}
}
But pytorch should have one more include directory, "/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/include/csrc/api/include"; I found this by comparing with the cmake side. Without it, headers such as torch/all.h cannot be found and compilation fails, as the log below shows (a possible manual workaround is sketched after the log).
error: @programdir/core/main.lua:314: @programdir/actions/build/main.lua:148: @programdir/modules/async/runjobs.lua:320: @programdir/modules/private/action/build/object.lua:91: @programdir/modules/core/tools/gcc.lua:841: In file included from ops/include/device_registry.hpp:8:0,
from ops/include/cpp_helper.hpp:3,
from ops/ncrelu/ncrelu_cpu.cpp:1:
/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/include/torch/extension.h:4:10: fatal error: torch/all.h: No such file or directory
#include <torch/all.h>
^~~~~~~~~~~~~
compilation terminated.
stack traceback:
[C]: in function 'error'
[@programdir/core/base/os.lua:949]:
[@programdir/modules/core/tools/gcc.lua:841]: in function 'catch'
[@programdir/core/sandbox/modules/try.lua:123]: in function 'try'
[@programdir/modules/core/tools/gcc.lua:782]:
[C]: in function 'xpcall'
[@programdir/core/base/utils.lua:280]:
[@programdir/core/tool/compiler.lua:278]: in function 'compile'
[@programdir/modules/private/action/build/object.lua:91]: in function 'script'
[@programdir/modules/private/action/build/object.lua:122]: in function 'build_object'
[@programdir/modules/private/action/build/object.lua:147]: in function 'jobfunc'
[@programdir/modules/async/runjobs.lua:237]:
[C]: in function 'xpcall'
[@programdir/core/base/utils.lua:280]: in function 'trycall'
[@programdir/core/sandbox/modules/try.lua:117]: in function 'try'
[@programdir/modules/async/runjobs.lua:220]: in function 'cotask'
[@programdir/core/base/scheduler.lua:404]:
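(For reference, a minimal workaround sketch, not suggested in the thread: if the information scanned from conda::pytorch misses an include directory, the path reported above could be appended to the target by hand. The path is the one quoted in this comment and is assumed to be correct.)
target("ops")
-- hypothetical stopgap: add the C++ API include directory that the conda::pytorch scan missed
add_includedirs("/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/include/csrc/api/include")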
When xmake installs the conda package, it just calls conda install python to do the installation. But my conda environment already has python, so conda says "All requested packages already installed.", and then xmake still cannot find that python.
Going through conan/vcpkg can currently only install libraries; tools are not supported. You might as well just use add_requires("python"): it looks for the system one first and otherwise installs it from xmake-repo. Why insist on going through conan...
How should the pytorch and nvidia channels be added in xmake.lua? I tried changing the pytorch line to the following:
This parameter is not provided at the moment, so don't pass arbitrary parameters. Follow the documentation: https://xmake.io/#/zh-cn/package/remote_package?id=%e6%b7%bb%e5%8a%a0-conan-%e7%9a%84%e4%be%9d%e8%b5%96%e5%8c%85
But pytorch should have one more include directory, "/data3/zhengwenhao/.conda/pkgs/pytorch-1.13.1-cpu_py310ha02dd7b_1/lib/python3.10/site-packages/torch/include/csrc/api/include"
All the includedirs are what conan provides to xmake; if it doesn't provide one, there is nothing that can be done here. You can debug the implementation below and see where it gets missed:
https://github.com/xmake-io/xmake/blob/21aeb8d32ec1892904cc34a57e9c2c3d83b9f1bd/xmake/modules/package/manager/conan/find_package.lua#L74
https://github.com/xmake-io/xmake/blob/21aeb8d32ec1892904cc34a57e9c2c3d83b9f1bd/xmake/scripts/conan/extensions/generators/xmake_generator.py#L55
Besides, packages like python and libtorch are all available in the xmake-repo repository; why not just use them directly?
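(For reference, a minimal sketch of the suggested xmake-repo route, assuming the repo's libtorch and openmp packages cover what the target needs; the names and version follow what is used later in this thread.)
add_requires("libtorch 1.13.1", "openmp")
target("ops")
set_kind("shared")
add_packages("libtorch", "openmp")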
First of all, conda is not conan. Secondly, there is indeed an ordering problem when installing with conda; packages should not simply be installed one by one, and that needs improvement.
As for the missing includedir: conda packages do not directly expose a C API, and the compilation information xmake obtains all comes from scanning directories. pytorch is a rather complex package, so it is normal that the scan misses things. To use conda's pytorch, set the environment variable Torch_DIR and pull it in with cmake::Torch; don't use conda::pytorch directly.
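(A minimal sketch of that suggestion, assuming Torch_DIR should point at the directory containing TorchConfig.cmake inside the conda environment; the path below is a placeholder, and the later comments in this thread also show a variant that sets CMAKE_PREFIX_PATH instead.)
-- placeholder path: adjust to the torch cmake directory of the conda environment in use
add_requires("cmake::Torch", {alias = "libtorch", system = true, configs = {envs = {Torch_DIR = "/path/to/conda/env/lib/python3.10/site-packages/torch/share/cmake/Torch"}, search_mode = "config"}})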
First of all, conda is not conan; secondly, there is indeed an ordering problem when installing with conda; packages should not simply be installed one by one, and that needs improvement.
I misread conda as conan. At the moment none of the third-party package managers' packages support cascading dependencies, not just conda; I haven't implemented that for vcpkg/conan either.
If you need dependency resolution, use packages from the xmake-repo repository.
As for the missing includedir: conda packages do not directly expose a C API, and the compilation information xmake obtains all comes from scanning directories. pytorch is a rather complex package, so it is normal that the scan misses things. To use conda's pytorch, set the environment variable Torch_DIR and pull it in with cmake::Torch; don't use conda::pytorch directly.
At the moment only conan can obtain all the includedirs/links and related information fairly accurately and reliably; other package managers expose only limited library information themselves, so it can only be gathered by scanning, which is inherently unreliable. You are better off using the packages in xmake-repo.
@waruqi
This parameter is not provided at the moment, so don't pass arbitrary parameters. Follow the documentation: https://xmake.io/#/zh-cn/package/remote_package?id=%e6%b7%bb%e5%8a%a0-conan-%e7%9a%84%e4%be%9d%e8%b5%96%e5%8c%85
So conda currently supports neither installation order nor channels?
Besides, packages like python and libtorch are all available in the xmake-repo repository; why not just use them directly?
I have tried the xmake-repo packages. My xmake.lua was written like this:
add_requires("python 3.10.11", "libtorch 1.13.1", "openmp")
I matched the python version with the libtorch version; these are the versions used in my conda environment and they work fine. But among the dependencies xmake resolves, besides the python 3.10 I requested, there is also a binary python 3.11.
During installation the 3.10 python installs fine, but the 3.11 python fails to install; it looks like the ssl module is missing when pip is used?
Installing collected packages: setuptools, pip
WARNING: The scripts pip3 and pip3.11 are installed in '/data3/zhengwenhao/.xmake/packages/p/python/3.11.3/d8c248d43f4141a9b452cf40280c9f68/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pip-22.3.1 setuptools-65.5.0
/data3/zhengwenhao/.xmake/packages/p/python/3.11.3/d8c248d43f4141a9b452cf40280c9f68/bin/python -m pip install --upgrade --force-reinstall pip
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Looking in indexes: https://mirrors.bfsu.edu.cn/pypi/web/simple
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /pypi/web/simple/pip/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /pypi/web/simple/pip/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /pypi/web/simple/pip/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /pypi/web/simple/pip/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError("Can't connect to HTTPS URL because the SSL module is not available.")': /pypi/web/simple/pip/
Could not fetch URL https://mirrors.bfsu.edu.cn/pypi/web/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='mirrors.bfsu.edu.cn', port=443): Max retries exceeded with url: /pypi/web/simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
ERROR: Could not find a version that satisfies the requirement pip (from versions: none)
ERROR: No matching distribution found for pip
WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.
Could not fetch URL https://mirrors.bfsu.edu.cn/pypi/web/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='mirrors.bfsu.edu.cn', port=443): Max retries exceeded with url: /pypi/web/simple/pip/ (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available.")) - skipping
WARNING: There was an error checking the latest version of pip.
error: @programdir/core/sandbox/modules/os.lua:378: execv(/data3/zhengwenhao/.xmake/packages/p/python/3.11.3/d8c248d43f4141a9b452cf40280c9f68/bin/python -m pip install --upgrade --force-reinstall pip) failed(1)
stack traceback:
[C]: in function 'error'
[@programdir/core/base/os.lua:949]:
[@programdir/core/sandbox/modules/os.lua:378]:
[@programdir/core/sandbox/modules/os.lua:291]: in function 'vrunv'
[...make/repositories/xmake-repo/packages/p/python/xmake.lua:264]: in function 'script'
[...dir/modules/private/action/require/impl/utils/filter.lua:114]: in function 'call'
[.../modules/private/action/require/impl/actions/install.lua:369]:
=> install python#1 3.11.3 .. failed
So conda currently supports neither installation order nor channels?
Yes.
add_requires("python 3.10.11", "libtorch 1.13.1", "openmp") I matched the python version with the libtorch version; these are the versions used in my conda environment and they work fine. But among the dependencies xmake resolves, besides the python 3.10 I requested, there is also a binary python 3.11.
libtorch depends on python, and the repository uses the latest version, 3.11.
If you don't use python directly yourself, you shouldn't configure it; the libtorch package handles its dependencies automatically. Adding it separately serves no purpose, so delete add_requires("python 3.10.11").
The 3.11 python fails to install; it looks like the ssl module is missing when pip is used?
You could debug the python package, or submit a PR to improve it.
Or override libtorch's python version via add_requireconfs:
add_requires("libtorch 1.13.1", "openmp")
add_requireconfs("libtorch.python", {version = "3.10.x", override = true})
@xq114
To use conda's pytorch, set the environment variable Torch_DIR and pull it in with cmake::Torch; don't use conda::pytorch directly.
I tried this too, and cmake can indeed find it. My xmake.lua is written like this:
add_requires("cmake::Python", {alias = "python", system = true, configs = {components = {"Interpreter", "Development"}}})
-- set CMAKE_PREFIX_PATH from python -c "import torch.utils; print(torch.utils.cmake_prefix_path)"
add_requires("cmake::Torch", {alias = "libtorch", system = true, configs = {envs = {CMAKE_PREFIX_PATH = "/data3/zhengwenhao/.conda/envs/temp/lib/python3.10/site-packages/torch/share/cmake"}, search_mode = "config"}})
But I still ran into a few problems:
1. I wrote this path by hand; on the cmake side, python can be invoked to generate it:
execute_process(
COMMAND
${Python_EXECUTABLE} -c
"import torch.utils; print(torch.utils.cmake_prefix_path)"
OUTPUT_STRIP_TRAILING_WHITESPACE
OUTPUT_VARIABLE TORCH_CMAKE_PATH)
On the xmake side I tried rewriting it in the var.$(shell) style:
add_requires("cmake::Torch", {alias = "libtorch", system = true, configs = {envs = {Torch_DIR = "($shell python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')"}, search_mode = "config"}})
That doesn't seem right. I printed this:
print("($shell python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')")
and its output is:
($shell python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')
It seems the command is not executed at all.
2. The xmake vscode plugin does not seem to support specifying a conda environment; written this way, the plugin cannot find python and pytorch. With cmake tools one can point CMAKE_PREFIX_PATH at the conda virtual environment, and it will then look for packages in that environment first. Can this variable be defined on the xmake side? I tried
xmake -vD -DCMAKE_PREFIX_PATH=/data2/zhengwenhao/.conda/envs/pytorch_extension_example
directly, and it doesn't work. My xmake plugin configuration is:
"xmake.additionalConfigArguments": [
"-DCMAKE_PREFIX_PATH=/data2/zhengwenhao/.conda/envs/pytorch_extension_example"
]
Also, if there is no cmake in the system environment it is fine: xmake calls the cmake from the conda environment to look for python and pytorch, and that cmake finds them correctly. But if cmake is installed in the system environment, xmake prefers /usr/bin/cmake, and since that cmake cannot find the python in the conda environment, the build fails. Is there a way to tell xmake which cmake to use?
If you don't use python directly yourself, you shouldn't configure it; the libtorch package handles its dependencies automatically. Adding it separately serves no purpose, so delete
add_requires("python 3.10.11")
Or override libtorch's python version via add_requireconfs:
add_requires("libtorch 1.13.1", "openmp")
add_requireconfs("libtorch.python", {version = "3.10.x", override = true})
Sure. I do use the python package, and my xmake.lua is now changed to this:
add_requires("python 3.10.11")
add_requires("libtorch 1.13.1")
add_requireconfs("libtorch.python", {version = "3.10.x", override = true})
xmake still resolves two pythons, both 3.10.11, except that one of them is a binary package; I'll see later whether it installs successfully.
@waruqi By the way, I found that xmake's -j does not always seem to be passed through to make? I ran
xmake -vD -j 64
but in htop I can still see the build using make -j 4. Is that normal?
@waruqi By the way, I found that xmake's -j does not always seem to be passed through to make? I ran
xmake -vD -j 64
but in htop I can still see the build using make -j 4. Is that normal?
Add -vD and check the arguments actually passed during compilation; as long as the build goes through tools.make/cmake, the flag is passed through.
1. I wrote this path by hand; on the cmake side, python can be invoked to generate it:
execute_process( COMMAND ${Python_EXECUTABLE} -c "import torch.utils; print(torch.utils.cmake_prefix_path)" OUTPUT_STRIP_TRAILING_WHITESPACE OUTPUT_VARIABLE TORCH_CMAKE_PATH)
You can write it like this:
add_rules("mode.debug", "mode.release")
add_requires("python")
add_requires("cmake::Torch", {alias = "libtorch", system = true, optional = true})
target("test")
set_kind("binary")
set_languages("c++20")
add_files("src/main_torch.cpp")
add_packages("python")
add_packages("libtorch")
after_load(function (target)
local output = os.iorun("python -c \"import torch.utils; print(torch.utils.cmake_prefix_path)\"")
target:add("runenvs", "CMAKE_PREFIX_PATH", output)
end)
But you need to enter the virtual environment with xrepo env shell first, and then run xmake f --check inside it to re-detect. This is related to xmake's default parallel execution: the functions in the description scope are not executed one after another, so no extra script can be run between add_requires(python) and add_requires(cmake::Torch) to set environment variables, which means xmake has to be run twice.
On the xmake side I tried rewriting it in the var.$(shell) style:
add_requires("cmake::Torch", {alias = "libtorch", system = true, configs = {envs = {Torch_DIR = "($shell python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')"}, search_mode = "config"}})
That doesn't seem right. I printed this:
print("($shell python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')")
and its output is:
($shell python -c 'import torch.utils; print(torch.utils.cmake_prefix_path)')
It seems the command is not executed at all.
print does not perform this substitution, and neither do most description-scope APIs. Only a few specific APIs support the env, shell, and registry strings.
2. The xmake vscode plugin does not seem to support specifying a conda environment; written this way, the plugin cannot find python and pytorch. With cmake tools one can point CMAKE_PREFIX_PATH at the conda virtual environment, and it will then look for packages in that environment first. Can this variable be defined on the xmake side? I tried
xmake -vD -DCMAKE_PREFIX_PATH=/data2/zhengwenhao/.conda/envs/pytorch_extension_example
directly, and it doesn't work.
Maybe configure it manually on the command line first and then use the xmake plugin? I'm not sure whether the plugin supports custom environment variables.
Also, if there is no cmake in the system environment it is fine: xmake calls the cmake from the conda environment to look for python and pytorch, and that cmake finds them correctly. But if cmake is installed in the system environment, xmake prefers
/usr/bin/cmake
and since that cmake cannot find the python in the conda environment, the build fails. Is there a way to tell xmake which cmake to use?
It uses the cmake from the current shell, i.e. the one that comes first on PATH; just run xmake inside the conda environment.