@jorgemf
Last active November 27, 2018 07:25
Compile TensorFlow Serving with CUDA support (October 2017). Needs bazel 0.5.3, CUDA 8, cuDNN 7. # UNSUPPORTED -> https://gist.github.com/jorgemf/c791841f769bff96718fd54bbdecfd4e
#!/bin/bash
TENSORFLOW_COMMIT=9e76bf324f6bac63137a02bb6e6ec9120703ea9b # August 16, 2017
TENSORFLOW_SERVING_COMMIT=267d682bf43df1c8e87332d3712c411baf162fe9 # August 18, 2017
MODELS_COMMIT=78007443138108abf5170b296b4d703b49454487 # July 25, 2017
if [ -z "$TENSORFLOW_SERVING_REPO_PATH" ]; then
TENSORFLOW_SERVING_REPO_PATH="serving"
fi
INITIAL_PATH=$(pwd)
export TF_NEED_CUDA=1
export TF_NEED_GCP=1
export TF_NEED_JEMALLOC=1
export TF_NEED_HDFS=0
export TF_NEED_OPENCL=0
export TF_NEED_MKL=0
export TF_NEED_VERBS=0
export TF_NEED_MPI=0
export TF_NEED_GDR=0
export TF_ENABLE_XLA=0
export TF_CUDA_VERSION=8.0
export TF_CUDNN_VERSION=7
export TF_CUDA_CLANG=0
export TF_CUDA_COMPUTE_CAPABILITIES="3.5,5.2,6.1"
CUDA_PATH="/usr/local/cuda"
if [ ! -d "$CUDA_PATH" ]; then
CUDA_PATH="/opt/cuda"
fi
export CUDA_TOOLKIT_PATH=$CUDA_PATH
export CUDNN_INSTALL_PATH=$CUDA_PATH
export GCC_HOST_COMPILER_PATH=$(which gcc-5 || which gcc)
export PYTHON_BIN_PATH=$(which python)
export CC_OPT_FLAGS="-march=native"
function python_path {
"$PYTHON_BIN_PATH" - <<END
from __future__ import print_function
import site
import os
try:
    input = raw_input
except NameError:
    pass
python_paths = []
if os.getenv('PYTHONPATH') is not None:
    python_paths = os.getenv('PYTHONPATH').split(':')
try:
    library_paths = site.getsitepackages()
except AttributeError:
    from distutils.sysconfig import get_python_lib
    library_paths = [get_python_lib()]
all_paths = set(python_paths + library_paths)
paths = []
for path in all_paths:
    if os.path.isdir(path):
        paths.append(path)
if len(paths) == 1:
    print(paths[0])
else:
    ret_paths = ",".join(paths)
    print(ret_paths)
END
}
export PYTHON_LIB_PATH=$(python_path)
cd "$TENSORFLOW_SERVING_REPO_PATH"
cd tensorflow
git reset --hard
git fetch
cd ../tf_models
git reset --hard
git fetch
cd ..
git reset --hard
git fetch
git submodule update --init --recursive
git checkout $TENSORFLOW_SERVING_COMMIT
git submodule update --init --recursive
cd tf_models
git checkout $MODELS_COMMIT
cd ../tensorflow
git submodule update --init --recursive
git checkout $TENSORFLOW_COMMIT
git submodule update --init --recursive
wget -O /tmp/0002-TF-1.3-CUDA-9.0-and-cuDNN-7.0-support.patch https://github.com/tensorflow/tensorflow/files/1253794/0002-TF-1.3-CUDA-9.0-and-cuDNN-7.0-support.patch.txt
git apply /tmp/0002-TF-1.3-CUDA-9.0-and-cuDNN-7.0-support.patch
./configure
cd ..
# force to use gcc-5 to compile CUDA
if which gcc-5 > /dev/null 2>&1; then
sed -i.bak 's/"gcc"/"gcc-5"/g' tensorflow/third_party/gpus/cuda_configure.bzl
fi
## Error fix. Ref: https://github.com/tensorflow/serving/issues/327
sed -i.bak 's/external\/nccl_archive\///g' tensorflow/tensorflow/contrib/nccl/kernels/nccl_manager.h
sed -i.bak 's/external\/nccl_archive\///g' tensorflow/tensorflow/contrib/nccl/kernels/nccl_ops.cc
## Error fix. Ref: https://github.com/tensorflow/tensorflow/issues/12979
sed -i '\@https://github.com/google/protobuf/archive/0b059a3d8a8f8aa40dde7bea55edca4ec5dfea66.tar.gz@d' tensorflow/tensorflow/workspace.bzl
cd ..
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k --jobs 6 --verbose_failures //tensorflow_serving/model_servers:tensorflow_model_server
# build fails, apply eigen patch
wget -O /tmp/eigen.f3a22f35b044.cuda9.diff https://storage.googleapis.com/tf-performance/public/cuda9rc_patch/eigen.f3a22f35b044.cuda9.diff
cd -P bazel-out/../../../external/eigen_archive
patch -p1 < /tmp/eigen.f3a22f35b044.cuda9.diff
cd -
# build again
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k --jobs 6 --verbose_failures //tensorflow_serving/model_servers:tensorflow_model_server
cd $INITIAL_PATH
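If the second bazel build succeeds, the server binary lands under bazel-bin, following the bazel target //tensorflow_serving/model_servers:tensorflow_model_server used above. A sketch of launching it (the model name and path flags are illustrative, not part of the script):

```shell
# Binary path follows from the bazel target built above; the serving flags
# shown are hypothetical examples, not taken from the script.
SERVER=serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server
if [ -x "$SERVER" ]; then
  "$SERVER" --port=9000 --model_name=my_model --model_base_path=/tmp/my_model
else
  echo "server binary not found; did the bazel build succeed?"
fi
```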
@sugartom

@amirj You might try the solution as mentioned here:
tensorflow/serving#327 (comment)

@jorgemf

jorgemf commented Jul 17, 2017

Script updated. Sorry guys, I didn't see the comments earlier.

@protossw512

protossw512 commented Aug 16, 2017

I tried but still get the same error; do you have any idea?
I put this script at ~/
and my serving checkout is located at ~/

it returns

The 'build' command is only supported from within a workspace.

Then I modified your script from:

cd ..
bazel build -c opt --config=cuda --verbose_failures @tf_serving//tensorflow_serving/model_servers:tensorflow_model_server

to:

bazel build -c opt --config=cuda tensorflow_serving/...

And got the same error

ERROR: no such target '@org_tensorflow//third_party/gpus/crosstool:crosstool': target 'crosstool' not declared in package 'third_party/gpus/crosstool' defined by /home/xinyao/.cache/bazel/_bazel_xinyao/80e74adbbcccd24d2cb855aa9a507307/external/org_tensorflow/third_party/gpus/crosstool/BUILD.
INFO: Elapsed time: 0.255s
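The "only supported from within a workspace" error means bazel walked up from the current directory and never found a WORKSPACE file. A quick standalone check that mirrors that search (a sketch, not part of the build script):

```shell
# Bazel walks up from the current directory looking for a WORKSPACE file;
# if none is found before /, the build command is rejected.
dir=$(pwd)
while [ "$dir" != "/" ] && [ ! -f "$dir/WORKSPACE" ]; do
  dir=$(dirname "$dir")
done
if [ -f "$dir/WORKSPACE" ]; then
  echo "workspace root: $dir"
else
  echo "not inside a bazel workspace"
fi
```

Running it from inside the serving checkout should print the workspace root; from ~/ it will not, which matches the error above.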

@jorgemf

jorgemf commented Oct 7, 2017

Script updated. It needs cuDNN 7 and CUDA 8.
@protossw512 Sorry, I don't receive notifications when someone writes a comment here. You are setting the variable TENSORFLOW_SERVING_REPO_PATH incorrectly, or the script might be wrong; the one I test with is a bit different.
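For reference, the script falls back to "serving" relative to wherever it is run when TENSORFLOW_SERVING_REPO_PATH is unset, so the variable must point at the actual tensorflow/serving checkout. A sketch of setting it first (the path and script name below are hypothetical):

```shell
# Point the build script at the real checkout; "serving" is only the default.
# $HOME/serving and compile_tf_serving.sh are hypothetical names.
export TENSORFLOW_SERVING_REPO_PATH="$HOME/serving"
if [ -d "$TENSORFLOW_SERVING_REPO_PATH/.git" ]; then
  ./compile_tf_serving.sh
else
  echo "no git checkout at $TENSORFLOW_SERVING_REPO_PATH"
fi
```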

@riverhxz

@jorgemf do cuDNN 7 and CUDA 8 work?

@jorgemf

jorgemf commented Dec 21, 2017

I don't support this anymore. I have created a Dockerfile to compile it, so all the dependencies are there and people will probably have fewer issues compiling it: https://gist.github.com/jorgemf/c791841f769bff96718fd54bbdecfd4e
