tiny-cuda-nn
undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE with Pytorch extension
I installed the PyTorch extension with the command
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
When I try to import tinycudann, I get this error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/leo/anaconda3/envs/sdfstudio/lib/python3.10/site-packages/tinycudann/__init__.py", line 9, in <module>
    from tinycudann.modules import free_temporary_memory, NetworkWithInputEncoding, Network, Encoding
  File "/home/leo/anaconda3/envs/sdfstudio/lib/python3.10/site-packages/tinycudann/modules.py", line 50, in <module>
    _C = importlib.import_module(f"tinycudann_bindings._{cc}_C")
  File "/home/leo/anaconda3/envs/sdfstudio/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: /home/leo/anaconda3/envs/sdfstudio/lib/python3.10/site-packages/tinycudann_bindings/_61_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE
I tried this in the following environment:
Ubuntu 22.04
Python 3.10
PyTorch 1.12.1+cu113
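For what it's worth, the missing symbol can be demangled with c++filt (from binutils, assuming it is installed) to see which C++ name the binding expects; since the name lives in torch's c10 library, its absence usually means the extension was built against a different libtorch than the one loaded at runtime:

```shell
# Demangle the undefined symbol from the ImportError above.
echo _ZN3c104impl8GPUTrace13gpuTraceStateE | c++filt
# prints: c10::impl::GPUTrace::gpuTraceState
```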
@githubLeoliu did you resolve the issue? If yes, how?
I have the same problem
@githubLeoliu I'm running into the same problem. Did you resolve the issue?
Anybody solved this problem?
I had this issue. I pip-uninstalled tinycudann and reinstalled it; that seemed to fix it.
This is probably due to a mismatch of torch/nvcc cuda libraries. I'd recommend building in a container:
FROM nvidia/cuda:11.8.0-devel-ubuntu22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl \
        git \
        cmake \
    && rm -rf /var/lib/apt/lists/*
# Install micromamba
RUN curl -Ls https://micro.mamba.pm/api/micromamba/linux-64/latest | tar -xvj -C / bin/micromamba
# Create environment
ENV MAMBA_ROOT_PREFIX /micromamba
RUN micromamba create -n tcnn python=3.8 pytorch torchvision -c nvidia -c pytorch -c conda-forge
ENV CUDA_ARCHITECTURES="75;86;89"
ENV CMAKE_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
ENV TCNN_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
ENV TORCH_CUDA_ARCH_LIST="7.5 8.6 8.9+PTX"
ENV FORCE_CUDA="1"
# RUN git clone --recursive https://github.com/nvlabs/tiny-cuda-nn && \
# cd tiny-cuda-nn && \
# cmake . -B build && \
# cmake --build build --config Release -j
# RUN cd bindings/torch && micromamba run -n tcnn python setup.py install
RUN micromamba run -n tcnn pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
FROM nvidia/cuda:11.8.0-base-ubuntu22.04
COPY --from=0 /bin/micromamba ./bin/micromamba
COPY --from=0 /micromamba ./
COPY --from=0 /tiny-cuda-nn ./
ENV PATH=/micromamba/envs/tcnn/bin:$PATH
If you want to change the cuda version, make sure you change 11.8 in three places:
- the base cuda image (devel)
- the version pulled in by conda
- the version inherited for the base container
If you want to run on a GPU whose architecture is not in the list, keep the same style/ordering and add your GPU's compute capability from https://developer.nvidia.com/cuda-gpus
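If it helps: the entries in TORCH_CUDA_ARCH_LIST (like 8.6) and in TCNN_CUDA_ARCHITECTURES / CMAKE_CUDA_ARCHITECTURES (like 86) are the same compute capabilities, just with the dot dropped. A tiny sketch of that mapping (arch_list is a hypothetical helper, not part of tiny-cuda-nn):

```python
def arch_list(compute_caps):
    """Convert compute capabilities like '8.6' into the '86'-style,
    semicolon-separated list that TCNN_CUDA_ARCHITECTURES and
    CMAKE_CUDA_ARCHITECTURES expect."""
    return ";".join(cap.replace(".", "") for cap in compute_caps)

print(arch_list(["7.5", "8.6", "8.9"]))  # → 75;86;89
```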
I solved this issue. What happens is that your torch install gets replaced when you install certain packages (like nerfstudio). Reinstall your version of torch with CUDA support; I also had to downgrade functorch to 0.2.1. Then try importing tinycudann in python3.
It may also require reinstalling tinycudann, which will take a while.
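One quick way to spot this kind of mismatch is to compare the CUDA tag baked into the torch version string (e.g. 1.12.1+cu113) against the toolkit version reported by nvcc --version. A sketch of parsing that tag (cuda_tag is a hypothetical helper, not a torch API):

```python
def cuda_tag(torch_version: str):
    """Extract the CUDA version from a torch version string,
    e.g. '1.12.1+cu113' -> '11.3'. Returns None for wheels
    without a CUDA local-version tag (CPU-only builds)."""
    _, _, local = torch_version.partition("+")
    if local.startswith("cu") and local[2:].isdigit():
        digits = local[2:]
        return f"{digits[:-1]}.{digits[-1]}"
    return None

print(cuda_tag("1.12.1+cu113"))  # 11.3
print(cuda_tag("1.13.1"))        # None
```

If the value returned here does not match the toolkit nvcc reports, the tinycudann binding is likely being built against one CUDA and run against another.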
I reinstalled all three packages (torch, tinycudann, nerfstudio) and escaped this problem. This is the order in which I installed them. Hope it works for you too.
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
# installing tinycudann manually with setup.py should also work
cd path/to/nerfstudio
pip install -e .
# make sure the torch+cu build you installed in step 1 keeps the correct version, so it doesn't get reinstalled over and over
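One way to keep pip from swapping out the CUDA wheels during the nerfstudio step (an untested sketch, not something the nerfstudio docs prescribe) is a pip constraints file pinning them:

```
# constraints.txt -- pin the CUDA wheels so `pip install -e .` can't replace them
torch==1.13.1+cu116
torchvision==0.14.1+cu116
```

then install with pip install -e . -c constraints.txt -f https://download.pytorch.org/whl/torch_stable.html so any dependency resolution is forced to keep the pinned +cu116 builds.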