
GTC 2021 Notebooks - Open In Colab added

DEKHTIARJonathan opened this issue 4 years ago · 2 comments

@tfeher for review

DEKHTIARJonathan · Mar 30 '21 18:03

Check out this pull request on  ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



Thanks @DEKHTIARJonathan for the PR! Before we merge this, the notebooks should be updated to explain how to run TF-TRT in Colab. Here is some code to get started:

## 1. Requirements

# This notebook requires at least TF 2.5 and TRT 7.1.3. Skip this section if these are already available to you.

### 1.1. Try this notebook on Colab!

# We warmly invite you to try this notebook for yourself on Google Colab and test how TF-TRT could help your daily workloads:
#
# <a href="https://colab.research.google.com/github/tensorflow/tensorrt/blob/master/tftrt/examples/presentations/GTC-April2021-Dynamic-shape-BERT.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" style="width: 200px;"/></a><br>

### 1.2 GPU
# You need an NVIDIA GPU to execute this notebook. On Colab, you can enable GPU accelerator via the menu Runtime => Change runtime type.
# 
# We check that a GPU is available:

!nvidia-smi

### 1.3 TF 2.5+
# Install a TF package with version at least 2.5

!pip install tf-nightly-gpu  # or: !pip install tensorflow-gpu==2.5.0rc0

# Confirm TF version and that the GPU device is visible

import tensorflow as tf
print('TF version', tf.__version__)
print(tf.config.list_physical_devices('GPU'))

### 1.4 TensorRT 7
#### 1.4.1 Check TensorRT version
# The TensorRT version has to match the version used while building the TF package. 

# Check TRT version needed by TF
from tensorflow.compiler.tf2tensorrt import _pywrap_py_utils as trt_py_utils
trt_py_utils.get_linked_tensorrt_version()

# Check the installed TensorRT version (empty output means that TRT is not installed).

!dpkg -l | grep libnvinfer
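Section 1.4.1 checks two things that must agree: the TRT version TF was linked against, and the libnvinfer package actually installed. The comparison itself can be sketched in plain Python; the helper names and the sample version strings below are illustrative assumptions, not output from a real system:

```python
def parse_trt_version(pkg_version):
    """Extract (major, minor, patch) from a dpkg version like '7.2.2-1+cuda11.0'."""
    core = pkg_version.split("-")[0]              # '7.2.2'
    return tuple(int(x) for x in core.split("."))

def is_compatible(linked, installed):
    """TF loads libnvinfer by its major version (e.g. libnvinfer7), so the
    majors must match; a newer minor/patch of the same major is fine."""
    return installed[0] == linked[0] and installed[1:] >= linked[1:]

linked = (7, 1, 3)  # e.g. what trt_py_utils.get_linked_tensorrt_version() might return
installed = parse_trt_version("7.2.2-1+cuda11.0")
print(is_compatible(linked, installed))  # True: same major, newer minor
```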

#### 1.4.2 Install TensorRT
- TRT version has to match the one used to compile the TF package (see above)
- The CUDA version has to match the one installed on the system.

# Check CUDA version

!nvcc --version

%%bash

wget https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb

# Note: TRT version has to match the one used to compile the TF package.
#       The CUDA version has to match the one installed on the system.
version="7.2.2-1+cuda11.0"

dpkg -i nvidia-machine-learning-repo-*.deb
apt-get update

apt-get install -y libnvinfer7=${version} libnvinfer-plugin7=${version}
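The bash cell above hard-codes `version="7.2.2-1+cuda11.0"`. As the comments note, the `+cudaXX.Y` suffix has to match the toolkit reported by `nvcc --version`; here is a small sketch of deriving the pin from that output (the sample `nvcc` text and the chosen TRT release are assumptions for illustration):

```python
import re

# Sample `nvcc --version` output; on a live system you would capture this
# with subprocess instead of hard-coding it.
sample_nvcc_output = """\
nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 11.0, V11.0.221
"""

cuda = re.search(r"release (\d+\.\d+)", sample_nvcc_output).group(1)  # '11.0'
version = f"7.2.2-1+cuda{cuda}"  # same pin format as the bash cell above
print(version)  # 7.2.2-1+cuda11.0
```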

**Problem:** For me, the step `print(tf.config.list_physical_devices('GPU'))` did not show a valid GPU on Colab after I pip-installed TF 2.5.0rc0 or tf-nightly-gpu.

An alternative option is to run the notebooks in the TF nightly Docker image. That image ships CUDA 11.2 by default, but the TRT packages for CUDA 11.2 are not yet available. One can still get it working by installing TRT for CUDA 11.1 and exporting the 11.1 library paths as a fallback: `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/nvidia/lib:/usr/local/cuda-11.1/lib64`
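The fallback just described can be written out as a short shell sketch; the package pin is an assumption (verify against what the NVIDIA repository currently provides), and the install line is left commented so the snippet is safe to paste:

```shell
# Fallback for the TF nightly Docker image (CUDA 11.2): install the CUDA 11.1
# builds of TRT, then expose the 11.1 libraries via LD_LIBRARY_PATH.
version="7.2.2-1+cuda11.1"   # assumed pin; check the repo for the current one
# apt-get install -y libnvinfer7=${version} libnvinfer-plugin7=${version}
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/nvidia/lib:/usr/local/cuda-11.1/lib64"
echo "${LD_LIBRARY_PATH}"
```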

tfeher · Apr 13 '21 14:04