RuntimeError: Not compiled with GPU decoder support.
🐛 Describe the bug
When I run the following code, I get an error:

"""
import torchvision
reader = torchvision.io.VideoReader('/home/ljf/vision/test4.mp4', device='cuda:0')
"""
"""
Traceback (most recent call last):
  File "", line 1, in
  File "/home/ljf/vision/torchvision/io/video_reader.py", line 97, in __init__
    raise RuntimeError("Not compiled with GPU decoder support.")
RuntimeError: Not compiled with GPU decoder support.
"""
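To see whether the installed build even thinks it has GPU decoding, I probed torchvision like this (a quick sketch of mine; the private flag name _HAS_GPU_VIDEO_DECODER is a guess about torchvision internals and may not exist in every version, hence the getattr default):

"""
import torchvision
import torchvision.io

print("torchvision:", torchvision.__version__)
print("video backend:", torchvision.get_video_backend())
# The private flag name below is a guess and may differ across versions,
# so use a default value rather than letting the probe itself crash.
print("GPU decoder flag:",
      getattr(torchvision.io, "_HAS_GPU_VIDEO_DECODER", "attribute not found"))
"""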
I compiled and installed torchvision with the following steps:

"""
conda uninstall ffmpeg
pip uninstall torchvision

export TORCHVISION_INCLUDE=/home/ljf/Video_Codec_SDK_11.1.5/Interface/
export TORCHVISION_LIBRARY=/home/ljf/Video_Codec_SDK_11.1.5/Lib/linux/stubs/x86_64/

conda install -c conda-forge ffmpeg
export CUDA_HOME=/usr/local/cuda-11.3

python setup.py install
"""

The entire compile-and-install process did not report any errors. I don't know why I can't read the video stream using CUDA.
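Before running setup.py I also sanity-checked the environment variables with a small sketch of my own (the expected file names, nvcuvid.h in the Interface dir and libnvcuvid.so in the stubs dir, are my assumptions about the Video Codec SDK layout):

"""
import os

# My own sanity check before `python setup.py install`; the expected file
# names are assumptions about how the Video Codec SDK is laid out.
include_dir = os.environ.get("TORCHVISION_INCLUDE", "")
library_dir = os.environ.get("TORCHVISION_LIBRARY", "")

print("TORCHVISION_INCLUDE:", include_dir or "(not set)",
      "->", "ok" if os.path.isdir(include_dir) else "missing")
print("TORCHVISION_LIBRARY:", library_dir or "(not set)",
      "->", "ok" if os.path.isdir(library_dir) else "missing")
print("nvcuvid.h present:",
      os.path.isfile(os.path.join(include_dir, "nvcuvid.h")))
print("libnvcuvid.so stub present:",
      os.path.isfile(os.path.join(library_dir, "libnvcuvid.so")))
"""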
Versions
root@b0b26070e5b8:/home/ljf/pytorch/torch/utils# python collect_env.py
Collecting environment information...
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.27

Python version: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 470.86
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.2.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.2.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.11.0
[pip3] torchelastic==0.2.2
[pip3] torchtext==0.12.0
[pip3] torchvision==0.14.0a0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 ha36c431_9 nvidia
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.21.2 py38h20f2e39_0
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchelastic 0.2.2 pypi_0 pypi
[conda] torchtext 0.12.0 py38 pytorch
[conda] torchvision 0.14.0a0 pypi_0 pypi
I compiled and installed inside a Docker container based on the official PyTorch Docker image.
My installation process log:

'''
docker pull pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel

conda create -n torch python=3.8
conda activate torch
conda install pytorch torchaudio cudatoolkit=11.3 -c pytorch

export TORCHVISION_INCLUDE=/home/ljf/Video_Codec_SDK_11.1.5/Interface/
export TORCHVISION_LIBRARY=/home/ljf/Video_Codec_SDK_11.1.5/Lib/linux/stubs/x86_64/

conda install -c conda-forge ffmpeg=4.2
export CUDA_HOME=/usr/local/cuda-11.3

git clone https://github.com/pytorch/vision.git
cd vision
git checkout v0.12.0
python setup.py install
'''
The entire compilation and installation process did not report any errors.
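Inside the container I also checked that the driver's decode library is visible at runtime; this is my own quick check, and the library name libnvcuvid.so.1 is an assumption on my part rather than something from the torchvision docs:

'''
import ctypes

# The GPU decoder needs the driver-provided libnvcuvid at runtime; whether the
# container exposes it is the kind of thing I wanted to rule out. The library
# name (libnvcuvid.so.1) is assumed, not taken from torchvision.
try:
    ctypes.CDLL("libnvcuvid.so.1")
    print("libnvcuvid.so.1 loaded OK")
except OSError as exc:
    print("libnvcuvid.so.1 not loadable in this container:", exc)
'''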
When I call torchvision.io.VideoReader with device='cuda:0', the following error is reported. I also tried other installation methods; torchvision.io.VideoReader works without CUDA, but with CUDA it still fails with:

'''
Traceback (most recent call last):
  File "", line 1, in
  File "/home/ljf/vision/torchvision/io/video_reader.py", line 97, in __init__
    raise RuntimeError("Not compiled with GPU decoder support.")
RuntimeError: Not compiled with GPU decoder support.
'''
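For reference, a minimal sketch of the CPU fallback I mentioned above, using only the documented VideoReader API and the same test file; CPU decoding succeeds where the CUDA path raises:

'''
import torchvision

video_path = "/home/ljf/vision/test4.mp4"

try:
    # GPU-accelerated decoding: this is the call that raises the RuntimeError for me.
    reader = torchvision.io.VideoReader(video_path, device="cuda:0")
except RuntimeError as exc:
    print("GPU decoder unavailable:", exc)
    # CPU decoding works with the same file.
    reader = torchvision.io.VideoReader(video_path)

# Each item yielded by the reader is a dict with a 'data' tensor and a 'pts' timestamp.
frame = next(reader)
print(frame["data"].shape, frame["pts"])
'''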
I ran into the same issue. Apparently I was missing bsf.h under the libavcodec include dir. I just copied it from the FFmpeg source code into the ffmpeg_include_dir (in my case this was /usr/local/include/libavcodec/; you can see what your directory is if you print the ffmpeg_include_dir that is used in torchvision's setup.py). That solved my problem.
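If it helps, a tiny sketch of how I checked for the missing header (the include path is the one from my setup; swap in whatever your ffmpeg_include_dir prints):

'''
from pathlib import Path

# Path from my setup; replace it with your own ffmpeg_include_dir.
ffmpeg_include_dir = Path("/usr/local/include")

bsf_header = ffmpeg_include_dir / "libavcodec" / "bsf.h"
print("bsf.h present:", bsf_header.is_file())
'''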