
Create a TensorFlow 1.7 package with TensorRT support

psyhtest opened this issue on Mar 31 '18 · 4 comments

TensorFlow 1.7 built with CUDA optionally supports TensorRT, so the configure step asks the following during installation:

$ ck install package:lib-tensorflow-1.7.0-src-cuda-xla
...
Do you wish to build TensorFlow with Apache Kafka Platform support? [y/N]: 
No Apache Kafka Platform support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: 
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/x86_64-linux-gnu]:/usr/local/TensorRT-3.0.4

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:

We could probably automate this with a special CK package that forces the 'y' answer and supplies the path to TensorRT.
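One way to do this (a sketch; the exact variable names should be double-checked against configure.py in the TensorFlow 1.7.0 sources) is to pre-answer the interactive questions via environment variables before invoking the build:

$ export TF_NEED_TENSORRT=1
$ export TENSORRT_INSTALL_PATH=/usr/local/TensorRT-3.0.4
$ export TF_NEED_KAFKA=0
$ export TF_NEED_OPENCL_SYCL=0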

psyhtest · Mar 31 '18 22:03

This is a bit tricky for a few reasons (a quick version check is sketched after this list):

  • TensorRT 3.0 needs CUDA 9.0; presumably, TensorFlow should use the same CUDA version.
  • TensorRT 3.0 needs cuDNN 7.0; presumably, TensorFlow should use the same cuDNN version.
  • TensorRT needs to be configured to support either Python 2 or Python 3 (more on this below); presumably, TensorFlow should be built for the same Python version.
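For example, before building one can check which versions CK has registered (the CUDA and cuDNN commands also appear later in this thread; the Python tags are an assumption about how CK registers the interpreter):

$ ck show env --tags=cuda
$ ck show env --tags=cudnn
$ ck show env --tags=compiler,python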

psyhtest · Mar 31 '18 23:03

To configure TensorRT 3 to support Python 2, I followed the official installation guide.

Install and detect TensorRT 3

$ cd /tmp
$ tar xvzf ~/Downloads/TensorRT-3.0.4.Ubuntu-16.04.3.x86_64.cuda-9.0.cudnn7.0.tar.gz
$ sudo mv /tmp/TensorRT-3.0.4 /usr/local
$ ck detect soft:lib.tensorrt --full_path=/usr/local/TensorRT-3.0.4/lib/libnvinfer.so

Configure TensorRT 3 for Python 2 support

$ ck show env --tags=cuda,v9.0
Env UID:         Target OS: Bits: Name:                Version: Tags:

a9181e930c5b5917   linux-64    64 Nvidia CUDA Compiler 9.0.176  64bits,compiler,cuda,host-os-linux-64,lang-c-cuda,lang-cpp-cuda,target-os-linux-64,v9,v9.0,v9.0.176

$ ck virtual env --tags=cuda,v9.0

Warning: you are in a new shell with a pre-set CK environment. Enter "exit" to return to the original one!

$ sudo -H pip2 install \
--global-option=build_ext \
--global-option="-I$CK_ENV_COMPILER_CUDA_INCLUDE" \
--global-option="-L$CK_ENV_COMPILER_CUDA_LIB"
/usr/local/TensorRT-3.0.4/python/tensorrt-3.0.4-cp27-cp27mu-linux_x86_64.whl

$ sudo -H pip2 install \
/usr/local/TensorRT-3.0.4/uff/uff-0.2.0-py2.py3-none-any.whl
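To sanity-check that the wheels installed above are importable under Python 2 (a minimal check that exercises nothing TensorRT-specific; it may need to run inside the CK virtual environment from the test step below so that the TensorRT and CUDA libraries are on LD_LIBRARY_PATH):

$ python2 -c "import tensorrt; print('tensorrt imported OK')"
$ python2 -c "import uff; print('uff imported OK')"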

Configure TensorRT paths [optional?]

$ sudo cp /usr/local/lib/python2.7/dist-packages/tensorrt/examples/custom_layers/tensorrtplugins/setup.py{,~}
$ sudo vim /usr/local/lib/python2.7/dist-packages/tensorrt/examples/custom_layers/tensorrtplugins/setup.py
$ diff /usr/local/lib/python2.7/dist-packages/tensorrt/examples/custom_layers/tensorrtplugins/setup.py{~,}
67c67
< CUDA_DIR = os.environ.get("CUDA_ROOT_DIR", '/usr/local/cuda')
---
> CUDA_DIR = os.environ.get("CUDA_ROOT_DIR", '/usr/local/cuda-9.0.176')
69,70c69,70
< TENSORRT_INC_DIR = '/usr/include/x86_64-linux-gnu'
< TENSORRT_LIB_DIR = '/usr/lib/x86_64-linux-gnu'
---
> TENSORRT_INC_DIR = '/usr/local/TensorRT-3.0.4/include'
> TENSORRT_LIB_DIR = '/usr/local/TensorRT-3.0.4/lib'
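Note that, judging from the first hunk above, the CUDA path can also be overridden without editing setup.py, since it falls back to the CUDA_ROOT_DIR environment variable (the TensorRT include/lib paths are hard-coded, so those two lines still need patching):

$ export CUDA_ROOT_DIR=/usr/local/cuda-9.0.176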

Test TensorRT 3 with Python 2 support

$ cd /usr/local/TensorRT-3.0.4/samples/sampleMNIST && make

$ ck show env --tags=tensorrt,v3.0.4
Env UID:         Target OS: Bits: Name:           Version: Tags:

bcbfcd499b8fc4c8   linux-64    64 TensorRT engine 3.0.4    64bits,host-os-linux-64,inference,lib,target-os-linux-64,tensorrt,v3,v3.0,v3.0.4

$ ck show env --tags=cudnn
Env UID:         Target OS: Bits: Name:         Version: Tags:

6a072f48be44550c   linux-64    64 cuDNN library 7.0.5    64bits,cuda,cudnn,dnn,host-os-linux-64,lib,target-os-linux-64,v7,v7.0,v7.0.5

$ ck virtual env --tags=tensorrt,v3.0.4
$ ck virtual env --tags=cudnn

$ /usr/local/TensorRT-3.0.4/bin/sample_mnist

---------------------------



@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@*.  .*@@@@@@@@@@@
@@@@@@@@@@*.     +@@@@@@@@@@
@@@@@@@@@@. :#+   %@@@@@@@@@
@@@@@@@@@@.:@@@+  +@@@@@@@@@
@@@@@@@@@@.:@@@@: +@@@@@@@@@
@@@@@@@@@@=%@@@@: +@@@@@@@@@
@@@@@@@@@@@@@@@@# +@@@@@@@@@
@@@@@@@@@@@@@@@@* +@@@@@@@@@
@@@@@@@@@@@@@@@@: +@@@@@@@@@
@@@@@@@@@@@@@@@@: +@@@@@@@@@
@@@@@@@@@@@@@@@* .@@@@@@@@@@
@@@@@@@@@@%**%@. *@@@@@@@@@@
@@@@@@@@%+.  .: .@@@@@@@@@@@
@@@@@@@@=  ..   :@@@@@@@@@@@
@@@@@@@@: *@@:  :@@@@@@@@@@@
@@@@@@@%  %@*    *@@@@@@@@@@
@@@@@@@%  ++  ++ .%@@@@@@@@@
@@@@@@@@-    +@@- +@@@@@@@@@
@@@@@@@@=  :*@@@# .%@@@@@@@@
@@@@@@@@@+*@@@@@%.  %@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@


0: 
1: 
2: **********
3: 
4: 
5: 
6: 
7: 
8: 
9:

psyhtest · Mar 31 '18 23:03

From the user's perspective, it seems that the easiest way to install TensorRT with Python support would be to download the archive from NVIDIA's website (registration required) and then launch a special CK TensorRT package that extracts the archive and configures TensorRT for the selected Python version.
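A minimal sketch of what such a package's install script could look like, assuming the user has already downloaded the archive. TENSORRT_TARBALL and INSTALL_DIR are placeholder variables here, not confirmed CK conventions, and the CK_ENV_COMPILER_CUDA_* variables are assumed to be set by a declared CUDA dependency:

#!/bin/bash
# Hypothetical install sketch for a CK TensorRT package (not the actual CK API).
# TENSORRT_TARBALL and INSTALL_DIR are placeholders.
tar xvzf "${TENSORRT_TARBALL}" -C "${INSTALL_DIR}"
TRT_DIR=$(ls -d "${INSTALL_DIR}"/TensorRT-* | head -n1)

# Pick the wheel matching the selected Python interpreter (cp27 vs cp35 is an
# assumption about the wheels shipped inside the TensorRT 3.0.4 archive).
PYVER=$(python -c "import sys; print('cp%d%d' % sys.version_info[:2])")
WHEEL=$(ls "${TRT_DIR}"/python/tensorrt-*-${PYVER}-*.whl | head -n1)

# Same build options as used manually above; CK_ENV_COMPILER_CUDA_* come from
# the package's CUDA dependency.
pip install \
  --global-option=build_ext \
  --global-option="-I${CK_ENV_COMPILER_CUDA_INCLUDE}" \
  --global-option="-L${CK_ENV_COMPILER_CUDA_LIB}" \
  "${WHEEL}"
pip install "${TRT_DIR}"/uff/uff-*.whl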

psyhtest · Mar 31 '18 23:03

Just a note that I recently added the flag "ck install package --reuse_deps", which tries to reuse all selected deps in further sub-deps. I guess that if we order all deps correctly, we can make sure that CUDA 9+, GCC 6+ and Python 2.x are used... But we need to check this further. In the longer term, it would be nice to have a proper API to customize/define such cases via Python...
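For reference, a sketch of how this would be invoked for the package above:

$ ck install package:lib-tensorflow-1.7.0-src-cuda-xla --reuse_deps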

gfursin · Apr 01 '18 08:04