This test is performed in a Docker container, but all steps should carry over to any environment where TensorFlow is properly installed. Please also see the additional files below this README.
mkdir test_files
cd test_files
curl -L https://gist.github.com/riga/f9a18023d9f7fb647d74daa9744bb978/download -o gist.zip
unzip -j gist.zip && rm gist.zip
docker run -ti -v $PWD:/test_files tensorflow/tensorflow:2.8.0
apt-get -y update
apt-get -y install nano cmake wget
Note: One file is missing from the bundled source files, so we fetch it manually. Otherwise, when later compiling custom code that uses our AOT-compiled model, linking would fail with undefined references to xla::CustomCallStatusGetMessage in libtf_xla_runtime.a. The missing file adds exactly this symbol to the xla_aot_runtime library.
# remember the TF install path
export TF_INSTALL_PATH="/usr/local/lib/python3.8/dist-packages/tensorflow"
cd "${TF_INSTALL_PATH}/xla_aot_runtime_src"
# download the missing file
( cd tensorflow/compiler/xla/service && wget https://raw.githubusercontent.com/tensorflow/tensorflow/v2.8.0/tensorflow/compiler/xla/service/custom_call_status.cc )
# compile and create the static library libtf_xla_runtime.a
cmake .
make -j
cd /test_files
TF_XLA_FLAGS="--tf_xla_auto_jit=2 --tf_xla_cpu_global_jit" python create_model.py
saved_model_cli aot_compile_cpu \
  --dir my_model \
  --tag_set serve \
  --signature_def_key default \
  --output_prefix my_model \
  --cpp_class MyModel
This should have created my_model.h, my_model.o, my_model_makefile.inc and my_model_metadata.o.
Note: To compile for architectures other than the default (x86_64), add an LLVM-style --target_triple.
Examples can be found here.
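The bundled test_model.cc includes the generated my_model.h and drives the compiled model through the tensorflow::XlaCompiledCpuFunction interface. Below is a rough sketch of what such a program can look like; the accessor names (set_arg0_data, result0_data), the float element type and the two-element shapes are assumptions for illustration only. The actual accessors are declared in the generated my_model.h and depend on the saved signature.

#include <iostream>

#include "my_model.h"  // generated by saved_model_cli aot_compile_cpu

int main() {
  // Instantiate the generated class; by default it allocates its own
  // buffers for arguments, results and temporaries.
  MyModel model;

  // Fill the input buffer. We assume a single argument holding two floats
  // here; check my_model.h for the actual (possibly named) accessors.
  float inputs[2] = {1.0f, 2.0f};
  model.set_arg0_data(inputs);

  // Depending on the ops in the model, an Eigen thread pool may have to be
  // registered via model.set_thread_pool(...) before running.

  // Run the compiled computation.
  if (!model.Run()) {
    std::cerr << "error: " << model.error_msg() << std::endl;
    return 1;
  }

  // Read back the result buffer, again assuming two float outputs.
  const float* result = static_cast<const float*>(model.result0_data());
  std::cout << "result: [" << result[0] << ", " << result[1] << "]" << std::endl;
  return 0;
}

Compile and link it against the generated object file and the static runtime library built above: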
g++ \
  -D_GLIBCXX_USE_CXX11_ABI=0 \
  -I${TF_INSTALL_PATH}/include \
  -L${TF_INSTALL_PATH}/xla_aot_runtime_src \
  test_model.cc my_model.o \
  -o test_model \
  -lpthread -ltf_xla_runtime
or, using the bundled Makefile:
make
./test_model
You should see result: [20, 25].