Environment Setup for Deep Model Deployment: NVIDIA + Tensorflow-GPU
A record of a hobby project :)
Background
Hardware: Jetson Nano
Flash/Re-flash
After flashing with JetPack 4.5.1 using NVIDIA SDK Manager
| Sys/Lib | Version |
|---|---|
| Ubuntu | 18.04 |
| Python | 3.6.8 |
| CUDA | 10.2 |
| CuDNN | 8.0 |
| TensorRT | 7.1.1 |
| OpenCV | 4.1.2 |
| GCC | 7.5.1 |
TensorFlow-GPU C++ API
Overview
- Bazel is the build tool required for the TensorFlow C++ API. Find a suitable Bazel version in the table below.
- Java: don't forget Java (JDK 11) when installing Bazel 3.1.0: sudo apt-get install openjdk-11-jdk
- Bazel versions 0.xx.x are too old to build TensorFlow reliably, so we chose Bazel 3.1.0 to build TensorFlow-GPU 2.3.1. Its required Python, CUDA, CuDNN, and GCC versions are also very close to the ones already on the Jetson.
- Table of tested build configurations:
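Before starting, it can help to sanity-check that the tools this guide relies on are present. The tool list below is an assumption based on the steps that follow; extend it as needed.

```shell
# Report whether each required tool is on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

# Tools used later in this guide (assumed list; adjust for your setup).
for tool in java python3 gcc nvcc protoc; do
  check_tool "$tool"
done
```

Anything reported as missing should be installed before the corresponding step.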
Preparation
Bazel Installation
Download Bazel and unzip it:
wget https://github.com/bazelbuild/bazel/releases/download/3.1.0/bazel-3.1.0-dist.zip
unzip -d bazel bazel-3.1.0-dist.zip
Change directory:
cd bazel
Start the build:
env EXTRA_BAZEL_ARGS="--host_javabase=@local_jdk//:jdk" bash ./compile.sh
Copy the binaries:
sudo cp output/bazel /usr/local/bin/bazel
Clean up:
rm bazel-3.1.0-dist.zip
sudo rm -rf bazel
Protobuf Installation: build from source
Download Protobuf at the proper version (check the version pinned in path_to_tensorflow/tensorflow/workspace.bzl), then build from source:
./autogen.sh
./configure
make
make check
sudo make install
sudo ldconfig
Verify the installation
protoc --version
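To find the pinned Protobuf version without opening workspace.bzl by hand, a small sketch can grep it out. That the archive is referenced as protobuf-<version> inside workspace.bzl is an assumption about the r2.3 file layout; verify against your checkout.

```shell
# Extract the first "protobuf-X.Y.Z" occurrence from a workspace.bzl file.
# Assumes the pinned archive is named protobuf-<version> in that file.
pinned_protobuf_version() {
  grep -o 'protobuf-[0-9][0-9.]*' "$1" | head -n 1 | sed 's/^protobuf-//'
}

# Usage, from inside the TensorFlow checkout:
#   pinned_protobuf_version tensorflow/workspace.bzl
```

The reported version should match the `protoc --version` output after installation.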
Configuration
Download TensorFlow 2.3
Get libs from Github
git clone https://github.com/tensorflow/tensorflow.git
and switch to the release branch r2.3 (git checkout r2.3)
Change to the directory you just cloned:
cd tensorflow
Configure the environment
Run the command below
./configure
Configuration started:
Specify the version of python by entering: /usr/bin/python3.6
Specify the path to python libraries: (the default path is correct, no need to enter anything, just press Enter)
Do you wish to build TensorFlow with OpenCL SYCL support? N
Do you wish to build TensorFlow with ROCm support? N
Do you wish to build TensorFlow with CUDA support? Y
Do you wish to build TensorFlow with TensorRT support? Y
Check if CUDA, CuDNN, TensorRT are well found automatically
Specify the compute capabilities: 7.2 (find the proper capability for your device at https://developer.nvidia.com/cuda-gpus)
Do you want to use clang as CUDA compiler? N
Specify GCC path: (the default one is correct, no need to enter anything, just press Enter)
Specify optimization flags to use during compilation: (default options are fine, no need to enter anything, just press Enter)
Would you like to interactively configure ./WORKSPACE for Android builds? N
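The interactive answers above can also be supplied up front through environment variables that TensorFlow's configure script reads. The variable names below are the ones checked by configure.py around r2.3, and the values mirror this guide's answers; treat the exact names as an assumption to verify against your checkout.

```shell
# Pre-seed the answers for ./configure (values mirror the interactive
# answers above; variable names are those read by TensorFlow's configure.py).
export PYTHON_BIN_PATH=/usr/bin/python3.6
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
export TF_NEED_CUDA=1
export TF_NEED_TENSORRT=1
export TF_CUDA_COMPUTE_CAPABILITIES=7.2
export TF_CUDA_CLANG=0
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc
export TF_SET_ANDROID_WORKSPACE=0
# then run: ./configure
```

With these set, ./configure should skip the corresponding prompts.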
Build TensorFlow C++ API from scratch:
Run the command and wait for the build to complete:
sudo bazel build --config=cuda --config=noaws -c opt //tensorflow:libtensorflow_cc.so //tensorflow:install_headers
Copy .so files under bazel-bin/tensorflow to /usr/local/lib
Copy the folder ./tensorflow to /usr/local/include/
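The two copy steps above can be sketched as a small helper so the files can be staged into a scratch prefix before the real install. The bazel-bin/tensorflow layout and the install_headers output directory are assumptions to verify against your build tree.

```shell
# Copy the built shared libraries and headers into an install prefix.
# For the real install, run with sudo and prefix=/usr/local.
install_tf_cc() {
  src="$1"     # e.g. bazel-bin/tensorflow
  prefix="$2"  # e.g. /usr/local
  mkdir -p "$prefix/lib" "$prefix/include"
  # -a preserves the .so symlink chain bazel produces
  cp -a "$src"/libtensorflow*.so* "$prefix/lib/"
  cp -a "$src/include/." "$prefix/include/"
}
```

After installing under /usr/local, running sudo ldconfig refreshes the linker cache.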
Potential Errors
Error related to Abseil:
fatal error: absl/strings/string_view.h: no such file or directory
Solution: install Abseil and add its include path to CMakeLists.txt
Error related to Eigen3:
fatal error: unsupported/Eigen/CXX11/Tensor: no such file or directory
Solution: install Eigen (check the required version in workspace.bzl) and add its include path to CMakeLists.txt
Error related to Protobuf:
fatal error: tensorflow/core/framework/types.pb.h: no such file or directory
Solution: add include_directories(${TENSORFLOW_DIR}/bazel-bin) to CMakeLists.txt
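The three fixes above all come down to include paths and a link line in the consumer project's CMakeLists.txt. Below is a minimal sketch, written as a heredoc so it can be dropped into a project; TENSORFLOW_DIR, the /usr/local install locations, and the target name "app" are all assumptions to adapt to your setup.

```shell
# Write a minimal CMakeLists.txt that applies the Abseil / Eigen / Protobuf
# include fixes above. Paths are placeholders: TENSORFLOW_DIR should point
# at your TensorFlow source checkout after the bazel build.
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.10)
project(tf_cc_app)
set(TENSORFLOW_DIR /path/to/tensorflow)   # adjust to your checkout
include_directories(
  /usr/local/include                      # installed TF / Abseil headers
  /usr/local/include/eigen3               # unsupported/Eigen/CXX11/Tensor
  ${TENSORFLOW_DIR}/bazel-bin             # generated *.pb.h files
)
link_directories(/usr/local/lib)
add_executable(app main.cpp)
target_link_libraries(app tensorflow_cc)
EOF
```

With this in place, the three "no such file or directory" errors above should no longer appear at compile time.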
Contact
For questions regarding the deployment process, please contact Yafei Gao (yafei.gao@gmx.de)