TF_Deformable_Net
Is it compatible with cuda9.1 and cudnn7?
As per the title.
I am trying to make it work on cuda9.1 and cudnn7. I tried using gcc-4.9 and -D_GLIBCXX_USE_CXX11_ABI=0.
When loading the generated .so file, I get the error: undefined symbol: _ZTIN10tensorflow8OpKernelE
Do you have any clues about this?
Thanks
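For reference, a quick way to see what the missing symbol actually is (a rough sketch, assuming the standard binutils tools nm and c++filt are available and that the library is named deform_conv.so as in this repo's make.sh):
# Demangle the symbol: it is the typeinfo for tensorflow::OpKernel, which recent
# TF pip packages ship in libtensorflow_framework.so rather than in the Python extension.
echo _ZTIN10tensorflow8OpKernelE | c++filt
# List the symbols the custom op still needs the dynamic loader to resolve.
nm -D --undefined-only deform_conv.so | c++filt | grep -i opkernel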
Check this issue for more information. Since I do not currently have the environment to test it, I cannot guarantee this works. But if it does work for you, could you open a pull request with the right script for your machine? Thanks in advance.
Thanks for the info.
I finally got it to run successfully. Here is the configuration that works for me: Ubuntu 16.04, tensorflow-gpu 1.4.1, cuda-8.0, cudnn6, g++-4.9, GeForce GTX Titan X.
I modified cuda_config.h according to my setup and manually copied the file to
$TF_INC/tensorflow/stream_executor/cuda/
I also needed to link against tensorflow_framework by adding the following to make.sh:
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
and adding the following to the g++ command:
-D_GLIBCXX_USE_CXX11_ABI=0 -L$TF_LIB -ltensorflow_framework
I also added the following include path to all nvcc and g++ commands:
-I /usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public/
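Two optional sanity checks (a rough sketch; the exact paths depend on where pip installed TensorFlow, the python2.7 site-packages path above is simply where it landed on my machine):
# Confirm libtensorflow_framework.so is actually present in $TF_LIB.
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
ls "$TF_LIB"/libtensorflow_framework*
# Derive the nsync include path from the TF install instead of hard-coding site-packages.
python -c 'import tensorflow as tf, os; print(os.path.join(tf.sysconfig.get_include(), "external", "nsync", "public"))'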
The undefined symbol error occurs when using tensorflow-gpu > 1.5, whether installed from pip or built from source, with cuda-8.0 or cuda-9.0.
I didn't try tensorflow-gpu-1.4.1 with cuda-9.0.
Hope this helps.
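Once it builds, a minimal smoke test (assuming the .so ends up in deform_conv_layer/, as in the script below) is simply to load the op library; it fails immediately with the same undefined-symbol error if the linking flags are wrong:
python -c 'import tensorflow as tf; tf.load_op_library("./deform_conv_layer/deform_conv.so"); print("deform_conv.so loaded OK")'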
The scripts I used:
cuda_config.h
#ifndef CUDA_CUDA_CONFIG_H_
#define CUDA_CUDA_CONFIG_H_
#define TF_CUDA_CAPABILITIES CudaVersion("5.2")
#define TF_CUDA_VERSION "8.0"
#define TF_CUDNN_VERSION "6"
#define TF_CUDA_TOOLKIT_PATH "/usr/local/cuda-8.0"
#endif // CUDA_CUDA_CONFIG_H_
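The version strings and toolkit path in this header should match what is actually installed; a rough way to check (assuming the usual /usr/local/cuda-8.0 layout, adjust paths as needed):
# CUDA toolkit version as reported by the compiler.
nvcc --version | grep release
# cuDNN version baked into the headers (cudnn.h may live elsewhere, e.g. /usr/include).
grep -E "#define CUDNN_(MAJOR|MINOR)" /usr/local/cuda-8.0/include/cudnn.h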
make.sh (only for deform_conv_layer)
#!/usr/bin/env bash
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')
TF_LIB=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_lib())')
TF_INCA=/usr/local/lib/python2.7/dist-packages/tensorflow/include/external/nsync/public/
#/usr/local/lib/python2.7/dist-packages/tensorflow/include/tensorflow/stream_executor/
echo $TF_INC
echo $TF_LIB
CUDA_HOME=/usr/local/cuda/
sudo cp ./cuda_config.h $TF_INC/tensorflow/stream_executor/cuda/
#if [ ! -f $TF_INC/tensorflow/stream_executor/cuda/cuda_config.h ]; then
# cp ./cuda_config.h $TF_INC/tensorflow/stream_executor/cuda/
#fi
cd deform_conv_layer
nvcc -std=c++11 -ccbin=/usr/bin/g++-4.9 -c -o deform_conv.cu.o deform_conv.cu.cc -I $TF_INC -I $TF_INCA -I /usr/local -D\
GOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -L /usr/local/cuda-8.0/lib64/ --expt-relaxed-constexpr -arch=sm_52
## the three g++ lines below are needed when tf was installed from a pre-built binary, or when using gcc 4.x (hence -D_GLIBCXX_USE_CXX11_ABI=0)
g++-4.9 -std=c++11 -shared -o deform_conv.so deform_conv.cc deform_conv.cu.o -I\
$TF_INC -I $TF_INCA -I /usr/local -fPIC -lcudart -L $CUDA_HOME/lib64 -D GOOGLE_CUDA=1 -Wfatal-errors -I\
$CUDA_HOME/include -D_GLIBCXX_USE_CXX11_ABI=0 -L$TF_LIB -ltensorflow_framework
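If you are unsure which -arch=sm_XX (and TF_CUDA_CAPABILITIES) value to use for your card, one way to read the compute capability, assuming tensorflow-gpu already imports cleanly, is:
# The "compute capability" field in each GPU's description (e.g. 5.2 for a
# GTX Titan X, 6.1 for the newer Nvidia Titan X) maps directly to sm_52 / sm_61.
python -c 'from tensorflow.python.client import device_lib; print([d.physical_device_desc for d in device_lib.list_local_devices() if d.device_type == "GPU"])'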
@leinxx Thank you for the effort. By the way, according to the comment in make.sh, the Titan X should use compute capability 6.1.
I think you are talking about the Nvidia Titan X. Mine is the older GTX Titan X, which has a lower compute capability 😶. https://developer.nvidia.com/cuda-gpus
@leinxx Nvidia really doesn't know how to name its GPU models... Anyway, I have updated the build script and the readme. Have a great day!
You can refer to this web page: https://developer.nvidia.com/cuda-gpus