
[Installation]

Open zx1384187 opened this issue 10 months ago • 8 comments

Is there an existing issue for this?

  • [x] I have searched the existing issues

Have you followed all the steps in the FAQ?

  • [x] I have tried the steps in the FAQ.

Current Behavior

When I try to run `pip install ./torchsparse`, it returns the error `torchsparse/backend/others/query_cpu.cpp:6:10: fatal error: google/dense_hash_map: No such file or directory`. Before this, I had already installed sparsehash following section 4.3 of https://github.com/PJLab-ADG/OpenPCSeg/blob/master/docs/INSTALL.md. Could someone give me assistance?

Error Line

torchsparse/backend/others/query_cpu.cpp:6:10: fatal error: google/dense_hash_map: No such file or directory
    6 | #include <google/dense_hash_map>
      |          ^~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
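The error means gcc cannot find the sparsehash headers on its include path. As a quick, hedged check (the directory list below is an assumption; adjust it for the prefix you installed sparsehash into), you can look for the header in the usual locations before rebuilding:

```shell
# Look for the sparsehash header in common include locations.
# The directory list is an assumption; add any custom --prefix you
# used when building sparsehash from source.
for dir in /usr/include /usr/local/include "$CONDA_PREFIX/include"; do
    if [ -f "$dir/google/dense_hash_map" ]; then
        echo "found: $dir/google/dense_hash_map"
    fi
done
# If the header lives under a non-default prefix, export it so gcc
# can see it, then re-run the install:
#   export CPLUS_INCLUDE_PATH=/path/to/prefix/include:$CPLUS_INCLUDE_PATH
```

If the loop prints nothing, sparsehash is not installed anywhere the compiler will look, which matches the build failure above.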

Environment

- GCC: 9.3.0
- NVCC: 11.3
- PyTorch: 1.10.0
- PyTorch CUDA: 11.3

Full Error Log

Looking in indexes: http://mirrors.aliyun.com/pypi/simple
Processing ./torchsparse
DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default. pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.
Building wheels for collected packages: torchsparse
  Building wheel for torchsparse (setup.py) ... error
  running bdist_wheel
  running build
  running build_py
  (copies the torchsparse/, torchsparse/utils/, torchsparse/backbones/, and torchsparse/nn/ sources into build/lib.linux-x86_64-3.8/ without errors)
  running build_ext
  building 'torchsparse.backend' extension
  gcc ... -c torchsparse/backend/pybind_cuda.cpp -o build/temp.linux-x86_64-3.8/torchsparse/backend/pybind_cuda.o ... (only a -Wstrict-prototypes warning)
  /usr/local/cuda/bin/nvcc ... -c torchsparse/backend/others/query_cuda.cu -o build/temp.linux-x86_64-3.8/torchsparse/backend/others/query_cuda.o ...
  gcc ... -c torchsparse/backend/others/count_cpu.cpp -o build/temp.linux-x86_64-3.8/torchsparse/backend/others/count_cpu.o ...
  /usr/local/cuda/bin/nvcc ... -c torchsparse/backend/others/count_cuda.cu -o build/temp.linux-x86_64-3.8/torchsparse/backend/others/count_cuda.o ...
  gcc ... -c torchsparse/backend/others/query_cpu.cpp -o build/temp.linux-x86_64-3.8/torchsparse/backend/others/query_cpu.o ...
  torchsparse/backend/others/query_cpu.cpp:6:10: fatal error: google/dense_hash_map: No such file or directory
      6 | #include <google/dense_hash_map>
        |          ^~~~~~~~~~~~~~~~~~~~~~~
  compilation terminated.
  error: command 'gcc' failed with exit status 1
  ----------------------------------------
ERROR: Failed building wheel for torchsparse
Running setup.py clean for torchsparse
Failed to build torchsparse
Installing collected packages: torchsparse
  Running setup.py install for torchsparse ... error
  (pip then falls back to `setup.py install`, which repeats the same build steps and fails with the identical fatal error in query_cpu.cpp)
ERROR: Command errored out with exit status 1: /root/miniconda3/bin/python ... setup.py install --record /tmp/pip-record-pkv0itmw/install-record.txt --single-version-externally-managed --compile --install-headers /root/miniconda3/include/python3.8/torchsparse
Check the logs for full command output.

zx1384187 avatar Feb 12 '25 06:02 zx1384187

Could you try `python setup.py install` inside the torchsparse folder? I also wrote a tentative solution in #340. Please give it a try and let me know how it goes. (That said, I think your current gcc/nvcc/PyTorch combination should work.)

pphuangyi avatar Feb 16 '25 17:02 pphuangyi

I tried it; it still doesn't work.

Peach-tao622 avatar Mar 04 '25 13:03 Peach-tao622

I tried it; it still doesn't work.

I am so sorry to hear that.

I had a link in that post that doesn't work (the one for setting up cuda-11.8), and it is fairly important: installation fails immediately when the CUDA and PyTorch versions mismatch.

I understand there are a lot of moving parts in a development environment, so many different things can cause the installation to fail.

So, if you really need the library for your research, I may be able to help in more detail. It's not guaranteed to work, but at least we can give it a try.

I also speak Chinese (I just cannot type Chinese on my work laptop).

pphuangyi avatar Mar 05 '25 18:03 pphuangyi


I have resolved the aforementioned issue. When setting up the CUDA environment, I used the following commands to install CUDA and PyTorch:

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit -y
conda install pytorch==2.1.0 torchvision==0.16.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y

This approach ensures that the CUDA version displayed by `nvcc --version` matches the one installed on the physical machine. Additionally, for PyTorch versions ≥ 1.13, CUDA should be installed from the NVIDIA channel using:

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit -y

This replaces the older command:

conda install cudatoolkit==11.8

After installing CUDA and PyTorch in the virtual environment, I installed torchsparse using the following commands:

Install torchsparse dependencies

conda install bioconda::google-sparsehash
git clone https://github.com/mit-han-lab/torchsparse.git
cd torchsparse
python setup.py install

This setup ensures a consistent and compatible CUDA environment for PyTorch and TorchSparse.
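The key point above is that `nvcc --version` and PyTorch must report the same CUDA release before torchsparse is built. A small, hedged sketch of that check (the helper names are illustrative, not part of torchsparse):

```python
# Hypothetical helper: confirm the CUDA release seen by nvcc matches
# the release PyTorch was built against, before building torchsparse.
import re
import subprocess


def parse_nvcc_release(nvcc_output):
    """Extract e.g. '11.8' from `nvcc --version` output."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    return m.group(1) if m else None


def cuda_versions_match():
    import torch  # imported lazily: torch may not be installed yet
    out = subprocess.run(["nvcc", "--version"],
                         capture_output=True, text=True).stdout
    nvcc = parse_nvcc_release(out)
    # torch.version.cuda is e.g. '11.8' for a cu118 build
    return (nvcc is not None and torch.version.cuda is not None
            and torch.version.cuda.startswith(nvcc))


sample = "Cuda compilation tools, release 11.8, V11.8.89"
print(parse_nvcc_release(sample))  # -> 11.8
```

Running `cuda_versions_match()` before `python setup.py install` catches the mismatch that otherwise surfaces only as a build failure.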

Peach-tao622 avatar Mar 06 '25 08:03 Peach-tao622


Awesome! Thank you for sharing the details of your solution.

Have fun hacking TorchSparse. It can be improved in so many different ways!

pphuangyi avatar Mar 06 '25 15:03 pphuangyi


conda install -c conda-forge google-sparsehash

datouready avatar Apr 28 '25 10:04 datouready

google-sparsehash (https://code.google.com/archive/sparsehash) is dead; can this lib use a different hash map?

Dariusz1989 avatar Jun 02 '25 13:06 Dariusz1989

Hello everyone, use `sudo apt-get install libsparsehash-dev`; it resolves the issue.

MRiabov avatar Jun 15 '25 20:06 MRiabov