
@taylorpaul
Last active January 8, 2025 20:47

Revisions

  1. taylorpaul revised this gist Nov 18, 2016. 1 changed file with 0 additions and 2 deletions.
    2 changes: 0 additions & 2 deletions README.md
    @@ -66,8 +66,6 @@ As described in the error link above, search for ctx.action and add `env=ctx.con
    )
    ```

    **Note:** The protobuf.bzl file from `~/.cache/bazel/_bazel_YOURUSERNAME/YOURHASH(i.e. f81f1107f96c7515450fc43e0dbb6ed5)/external/protobuf/protobuf.bzl` is posted below!
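
    For orientation, the patched `ctx.action` call in `_proto_gen_impl` ends up looking like this (taken from the protobuf.bzl posted below):
    ```
    ctx.action(
        inputs=inputs,
        outputs=ctx.outputs.outs,
        arguments=args + import_flags + [s.path for s in srcs],
        executable=ctx.executable.protoc,
        mnemonic="ProtoCompile",
        env=ctx.configuration.default_shell_env,
    )
    ```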

    You will then likely hit `error trying to exec 'as': execvp: No such file or directory`. Since I am a self-confessed Linux noob, you have to use the few tricks you know as much as possible (I didn't follow **gbkedar's** 2nd comment):
    ```
    cp `which as` /opt/gcc/5.2.0/bin/as
  2. taylorpaul revised this gist Nov 7, 2016. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions README.md
    @@ -3,9 +3,9 @@

    **GCC:** locally installed 5.2.0 (Cluster default is 4.4.7)

    **Bazel:** 0.3.2-2016-11-02 (@03afc7d)
    **Bazel:** 0.4.0-2016-11-06 (@fa407e5)

    **Tensorflow:** v0.11.0rc1
    **Tensorflow:** v0.11.0rc2

    **CUDA:** 8.0

  3. taylorpaul revised this gist Nov 6, 2016. 1 changed file with 19 additions and 1 deletion.
    20 changes: 19 additions & 1 deletion buildtf.sh
    @@ -10,12 +10,14 @@
    # Ensure we can load CUDA drivers.
    module load cuda/8.0 || { echo 'Failed to load CUDA drivers. Are you not on a compute node?' ; exit 1; }

    #TODO: GCC_DIR if not standard system gcc (which gcc)
    #TODO: GCC_DIR/LOCAL_INCLUDE/LOCAL_LIBRARY if not standard system gcc (which gcc)
    STARTDIR=`pwd`/tf_tools
    GCC_DIR=/work/thpaul/gcc/5.2.0
    BAZEL_BIN_DIR=/work/thpaul/bin #/bin where to copy bazel binary
    PYTHON_INSTALL_DIR=python27
    JAVA_DIR=jdk1.8.0_102 #Directory your jdk .tar file extracts to (depends on which version you download)
    LOCAL_INCLUDE=$STARTDIR/include
    LOCAL_LIBRARY=$STARTDIR/lib

    #VERSIONS
    PYTHON_VERSION=2.7.12
    @@ -28,6 +30,7 @@ https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tgz
    wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz
    wget --no-check-certificate https://pypi.python.org/packages/source/s/setuptools/setuptools-1.4.2.tar.gz -O setuptools-1.4.2.tar.gz
    wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jdk-8u102-linux-x64.tar.gz
    wget https://sqlite.org/2016/sqlite-autoconf-3150100.tar.gz #TODO: update if newer version needed (3.8.6)

    echo "Buidling Directories"
    mkdir -p $STARTDIR
    @@ -49,9 +52,23 @@ echo "Decompressing archives"
    tar zxvf ../Python-$PYTHON_VERSION.tgz
    tar --totals -xvf ../setuptools-1.4.2.tar.gz
    tar --totals -xvf ../$JAVA_FILE.tar.gz
    tar --totals -xvf ../sqlite-autoconf-3150100.tar.gz


    cd sqlite-autoconf-3150100
    SQLITE_INSTALL_DIR=`pwd`
    echo "Installing sqlite3 libs at `pwd`!"
    ./configure --enable-shared --prefix=$SQLITE_INSTALL_DIR
    make
    make install
    cp ./include/* $LOCAL_INCLUDE
    cp ./lib/* $LOCAL_LIBRARY
    cd ..

    cd Python-$PYTHON_VERSION
    echo "Installing python at $PYTHON_INSTALL_DIR"
    #TODO: Have to change setup.py to look in local include file for sqlite3 libraries
    sed -i 's#/usr/local/include/sqlite3#'$LOCAL_INCLUDE'#g' ./setup.py
    ./configure --enable-shared --prefix=$PYTHON_INSTALL_DIR --enable-loadable-sqlite-extensions #TODO: need sqlite3 for nltk and others
    make
    make altinstall
    @@ -64,6 +81,7 @@ export LD_LIBRARY_PATH=$PYTHON_INSTALL_DIR/lib:$LD_LIBRARY_PATH
    $PYTHON_INSTALL_DIR/bin/python2.7 setup.py install
    curl https://bootstrap.pypa.io/get-pip.py | $PYTHON_INSTALL_DIR/bin/python2.7 -
    pip install --no-cache-dir numpy
    pip install -U nltk
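    #Optional sanity check (my addition, not in the original script): nltk needs sqlite3, so verify the module loads
    $PYTHON_INSTALL_DIR/bin/python2.7 -c 'import sqlite3; print(sqlite3.sqlite_version)'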
    cd ..

    cd $JAVA_DIR
  4. taylorpaul revised this gist Nov 6, 2016. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion buildtf.sh
    @@ -52,7 +52,7 @@ tar --totals -xvf ../$JAVA_FILE.tar.gz

    cd Python-$PYTHON_VERSION
    echo "Installing python at $PYTHON_INSTALL_DIR"
    ./configure --enable-shared --prefix=$PYTHON_INSTALL_DIR
    ./configure --enable-shared --prefix=$PYTHON_INSTALL_DIR --enable-loadable-sqlite-extensions #TODO: need sqlite3 for nltk and others
    make
    make altinstall
    export PATH=$PYTHON_INSTALL_DIR/bin:$PATH
  5. taylorpaul revised this gist Nov 6, 2016. 4 changed files with 6 additions and 880 deletions.
    248 changes: 0 additions & 248 deletions CROSSTOOL.tpl
    @@ -1,248 +0,0 @@
    major_version: "local"
    minor_version: ""
    default_target_cpu: "same_as_host"

    default_toolchain {
    cpu: "k8"
    toolchain_identifier: "local_linux"
    }
    default_toolchain {
    cpu: "piii"
    toolchain_identifier: "local_linux"
    }
    default_toolchain {
    cpu: "arm"
    toolchain_identifier: "local_linux"
    }
    default_toolchain {
    cpu: "darwin"
    toolchain_identifier: "local_darwin"
    }
    default_toolchain {
    cpu: "ppc"
    toolchain_identifier: "local_linux"
    }

    toolchain {
    abi_version: "local"
    abi_libc_version: "local"
    builtin_sysroot: ""
    compiler: "compiler"
    host_system_name: "local"
    needsPic: true
    supports_gold_linker: false
    supports_incremental_linker: false
    supports_fission: false
    supports_interface_shared_objects: false
    supports_normalizing_ar: false
    supports_start_end_lib: false
    supports_thin_archives: false
    target_libc: "local"
    target_cpu: "local"
    target_system_name: "local"
    toolchain_identifier: "local_linux"
    tool_path { name: "ar" path: "/work/thpaul/gcc/5.2.0/bin/ar" }
    tool_path { name: "compat-ld" path: "/work/thpaul/gcc/5.2.0/bin/ld" }
    tool_path { name: "cpp" path: "/work/thpaul/gcc/5.2.0/bin/cpp" }
    tool_path { name: "dwp" path: "/usr/bin/dwp" }
    # As part of the TensorFlow release, we place some cuda-related compilation
    # files in @local_config_cuda//crosstool/clang/bin, and this relative
    # path, combined with the rest of our Bazel configuration causes our
    # compilation to use those files.
    tool_path { name: "gcc" path: "clang/bin/crosstool_wrapper_driver_is_not_gcc" }
    # Use "-std=c++11" for nvcc. For consistency, force both the host compiler
    # and the device compiler to use "-std=c++11".
    cxx_flag: "-std=c++11"
    linker_flag: "-lstdc++"
    linker_flag: "-B/work/thpaul/gcc/5.2.0/bin" # /opt/gcc/5.2.0/bin

    %{gcc_host_compiler_includes}
    tool_path { name: "gcov" path: "/work/thpaul/gcc/5.2.0/bin/gcov" }

    # C(++) compiles invoke the compiler (as that is the one knowing where
    # to find libraries), but we provide LD so other rules can invoke the linker.
    tool_path { name: "ld" path: "/work/thpaul/gcc/5.2.0/bin/ld" }

    tool_path { name: "nm" path: "/work/thpaul/gcc/5.2.0/bin/nm" }
    tool_path { name: "objcopy" path: "/work/thpaul/gcc/5.2.0/bin/objcopy" }
    objcopy_embed_flag: "-I"
    objcopy_embed_flag: "binary"
    tool_path { name: "objdump" path: "/work/thpaul/gcc/5.2.0/bin/objdump" }
    tool_path { name: "strip" path: "/work/thpaul/gcc/5.2.0/bin/strip" }

    # Anticipated future default.
    unfiltered_cxx_flag: "-no-canonical-prefixes"

    # Make C++ compilation deterministic. Use linkstamping instead of these
    # compiler symbols.
    unfiltered_cxx_flag: "-Wno-builtin-macro-redefined"
    unfiltered_cxx_flag: "-D__DATE__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIMESTAMP__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIME__=\"redacted\""

    # Security hardening on by default.
    # Conservative choice; -D_FORTIFY_SOURCE=2 may be unsafe in some cases.
    # We need to undef it before redefining it as some distributions now have
    # it enabled by default.
    compiler_flag: "-U_FORTIFY_SOURCE"
    compiler_flag: "-D_FORTIFY_SOURCE=1"
    compiler_flag: "-fstack-protector"
    compiler_flag: "-fPIE"
    linker_flag: "-pie"
    linker_flag: "-Wl,-z,relro,-z,now"

    # Enable coloring even if there's no attached terminal. Bazel removes the
    # escape sequences if --nocolor is specified. This isn't supported by gcc
    # on Ubuntu 14.04.
    # compiler_flag: "-fcolor-diagnostics"

    # All warnings are enabled. Maybe enable -Werror as well?
    compiler_flag: "-Wall"
    # Enable a few more warnings that aren't part of -Wall.
    compiler_flag: "-Wunused-but-set-parameter"
    # But disable some that are problematic.
    compiler_flag: "-Wno-free-nonheap-object" # has false positives

    # Keep stack frames for debugging, even in opt mode.
    compiler_flag: "-fno-omit-frame-pointer"

    # Anticipated future default.
    linker_flag: "-no-canonical-prefixes"
    unfiltered_cxx_flag: "-fno-canonical-system-headers"
    # Have gcc return the exit code from ld.
    linker_flag: "-pass-exit-codes"
    # Stamp the binary with a unique identifier.
    linker_flag: "-Wl,--build-id=md5"
    linker_flag: "-Wl,--hash-style=gnu"
    # Gold linker only? Can we enable this by default?
    # linker_flag: "-Wl,--warn-execstack"
    # linker_flag: "-Wl,--detect-odr-violations"

    # Include directory for cuda headers.
    cxx_builtin_include_directory: "%{cuda_include_path}"

    compilation_mode_flags {
    mode: DBG
    # Enable debug symbols.
    compiler_flag: "-g"
    }
    compilation_mode_flags {
    mode: OPT
    # No debug symbols.
    # Maybe we should enable https://gcc.gnu.org/wiki/DebugFission for opt or
    # even generally? However, that can't happen here, as it requires special
    # handling in Bazel.
    compiler_flag: "-g0"
    # Conservative choice for -O
    # -O3 can increase binary size and even slow down the resulting binaries.
    # Profile first and / or use FDO if you need better performance than this.
    compiler_flag: "-O2"
    # Disable assertions
    compiler_flag: "-DNDEBUG"
    # Removal of unused code and data at link time (can this increase binary size in some cases?).
    compiler_flag: "-ffunction-sections"
    compiler_flag: "-fdata-sections"
    linker_flag: "-Wl,--gc-sections"
    }
    linking_mode_flags { mode: DYNAMIC }
    }
    toolchain {
    abi_version: "local"
    abi_libc_version: "local"
    builtin_sysroot: ""
    compiler: "compiler"
    host_system_name: "local"
    needsPic: true
    target_libc: "macosx"
    target_cpu: "darwin"
    target_system_name: "local"
    toolchain_identifier: "local_darwin"
    tool_path { name: "ar" path: "/usr/bin/libtool" }
    tool_path { name: "compat-ld" path: "/usr/bin/ld" }
    tool_path { name: "cpp" path: "/usr/bin/cpp" }
    tool_path { name: "dwp" path: "/usr/bin/dwp" }
    tool_path { name: "gcc" path: "clang/bin/crosstool_wrapper_driver_is_not_gcc" }
    cxx_flag: "-std=c++11"
    ar_flag: "-static"
    ar_flag: "-s"
    ar_flag: "-o"
    linker_flag: "-lc++"
    linker_flag: "-undefined"
    linker_flag: "dynamic_lookup"
    # TODO(ulfjack): This is wrong on so many levels. Figure out a way to auto-detect the proper
    # setting from the local compiler, and also how to make incremental builds correct.
    cxx_builtin_include_directory: "/"
    tool_path { name: "gcov" path: "/usr/bin/gcov" }
    tool_path { name: "ld" path: "/usr/bin/ld" }
    tool_path { name: "nm" path: "/usr/bin/nm" }
    tool_path { name: "objcopy" path: "/usr/bin/objcopy" }
    objcopy_embed_flag: "-I"
    objcopy_embed_flag: "binary"
    tool_path { name: "objdump" path: "/usr/bin/objdump" }
    tool_path { name: "strip" path: "/usr/bin/strip" }
    # Anticipated future default.
    unfiltered_cxx_flag: "-no-canonical-prefixes"
    # Make C++ compilation deterministic. Use linkstamping instead of these
    # compiler symbols.
    unfiltered_cxx_flag: "-Wno-builtin-macro-redefined"
    unfiltered_cxx_flag: "-D__DATE__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIMESTAMP__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIME__=\"redacted\""
    # Security hardening on by default.
    # Conservative choice; -D_FORTIFY_SOURCE=2 may be unsafe in some cases.
    compiler_flag: "-D_FORTIFY_SOURCE=1"
    compiler_flag: "-fstack-protector"
    # Enable coloring even if there's no attached terminal. Bazel removes the
    # escape sequences if --nocolor is specified.
    compiler_flag: "-fcolor-diagnostics"
    # All warnings are enabled. Maybe enable -Werror as well?
    compiler_flag: "-Wall"
    # Enable a few more warnings that aren't part of -Wall.
    compiler_flag: "-Wthread-safety"
    compiler_flag: "-Wself-assign"
    # Keep stack frames for debugging, even in opt mode.
    compiler_flag: "-fno-omit-frame-pointer"
    # Anticipated future default.
    linker_flag: "-no-canonical-prefixes"
    # Include directory for cuda headers.
    cxx_builtin_include_directory: "%{cuda_include_path}"
    compilation_mode_flags {
    mode: DBG
    # Enable debug symbols.
    compiler_flag: "-g"
    }
    compilation_mode_flags {
    mode: OPT
    # No debug symbols.
    # Maybe we should enable https://gcc.gnu.org/wiki/DebugFission for opt or even generally?
    # However, that can't happen here, as it requires special handling in Bazel.
    compiler_flag: "-g0"
    # Conservative choice for -O
    # -O3 can increase binary size and even slow down the resulting binaries.
    # Profile first and / or use FDO if you need better performance than this.
    compiler_flag: "-O2"
    # Disable assertions
    compiler_flag: "-DNDEBUG"
    # Removal of unused code and data at link time (can this increase binary size in some cases?).
    compiler_flag: "-ffunction-sections"
    compiler_flag: "-fdata-sections"
    }
    }
    12 changes: 6 additions & 6 deletions README.md
    @@ -12,6 +12,7 @@
    **CUDNN:** 5.1.5

    ### Steps:
    You should be able to modify the script (buildtf.sh) below to do these steps automatically, but I list out details here as well.

    #### Installing Java Locally:
    [Follow this Tutorial](http://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/#) or download your preferred version of JDK 8.0 and set the proper environment variables as described in the tutorial.
    @@ -29,11 +30,11 @@ that are stored in `/usr/bin` here is the work around I used (it isn't pretty an
    cp `which ld` /opt/gcc/5.2.0/bin/ld (repeat for any command listed in the crosstools that doesn't already reside in your gcc /bin directory)
    ```
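
    A minimal sketch of that step as a loop, assuming the system binutils are on your PATH and your writable GCC copy lives at `$GCC_DIR` (as in buildtf.sh below):
    ```
    for tool in ld as nm ar objcopy objdump strip cpp gcov; do
      [ -e "$GCC_DIR/bin/$tool" ] || cp "$(which $tool)" "$GCC_DIR/bin/$tool"
    done
    ```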

    **Note2:** I downloaded a newer release of bazel and tensorflow as noted above and there are fewer changes required in the latest versions of the crosstool than described in the tutorial, I posted my final CROSSTOOL versions from these directories below:
    **Note2:** I downloaded a newer release of bazel and tensorflow as noted above, and there are fewer changes required in the latest versions of the crosstool than described in the tutorial.

    1) /tensorflow/third_party/gpus/crosstool/
    1) modify /tensorflow/third_party/gpus/crosstool/CROSSTOOL.tpl as described in tutorial above

    2) /tensorflow/third_party/gpus/crosstool/clang/bin/
    2) modify /tensorflow/third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl as described in tutorial above. I did not change the first line: #!/usr/bin/env python (but the tutorial does!)
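
    For reference, buildtf.sh below automates both edits with sed (back up the originals first; `$GCC_DIR` is your local GCC prefix):
    ```
    sed -i 's#/usr/bin#'$GCC_DIR'/bin#g' third_party/gpus/crosstool/CROSSTOOL.tpl
    sed -i 's#/usr/bin/gcc#'$GCC_DIR'/bin/gcc#g' third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl
    ```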

    Again these steps led to the below error which took me forever to get past:

    @@ -45,7 +46,7 @@ As described in **gbkedar's** comment from Jul 12. You have to find this file:

    `$INSTALL_PATH/tensorflow/bazel-tensorflow/external/protobuf/protobuf.bzl`

    But, until the compile fails this file is harder to find. The failure creates the shortcut in the `/tensorflow` directory after failure. I was running into issues re-attempting the compile and had to run `./configure` almost everytime. Therefore, I had to find this file before the first failure of my compile attempt. The file should be located somewhere similar to this after running `./configure` from the `/tensorflow` directory:
    But, until the compile fails this file is harder to find. (The buildtf.sh script modifies the file after the first failure and then re-runs the compile.) The failure creates the shortcut in the `/tensorflow` directory. I was running into issues re-attempting the compile and had to run `./configure` almost every time. Therefore, I had to find this file before the first failure of my compile attempt. The file should be located somewhere similar to this after running `./configure` from the `/tensorflow` directory:

    `~/.cache/bazel/_bazel_YOURUSERNAME/YOURHASH(i.e. f81f1107f96c7515450fc43e0dbb6ed5)/external/protobuf/protobuf.bzl`
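
    One way to locate it without guessing the hash (a sketch; assumes the default Bazel output root under `~/.cache`):
    ```
    ls -lt $(find ~/.cache/bazel -name protobuf.bzl 2>/dev/null)
    ```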

    @@ -87,7 +88,6 @@ and then added that directory to my $PYTHONPATH variable:

    `export PYTHONPATH=/home/thpaul/python27-packages/:$PYTHONPATH`

    Re-running the command builds the proper .whl file which you can install via pip. I did this using `virtualenv` as described [here.](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#virtualenv-installation)
    But pointing pip to the created .whl file instead.
    Re-running the command builds the proper .whl file which you can install via pip.
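
    For example, in a fresh virtualenv (hypothetical paths; point pip at whatever wheel `build_pip_package` produced):
    ```
    virtualenv ~/tf-env
    source ~/tf-env/bin/activate
    pip install $TMPDIR/tensorflow_pkg/tensorflow-*.whl
    ```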

    Hope this helps anyone trying to compile tensorflow from source!
    250 changes: 0 additions & 250 deletions crosstool_wrapper_driver_is_not_gcc.tpl
    @@ -1,250 +0,0 @@
    #!/usr/bin/env python
    # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # ==============================================================================

    """Crosstool wrapper for compiling CUDA programs.
    SYNOPSIS:
    crosstool_wrapper_is_not_gcc [options passed in by cc_library()
    or cc_binary() rule]
    DESCRIPTION:
    This script is expected to be called by the cc_library() or cc_binary() bazel
    rules. When the option "-x cuda" is present in the list of arguments passed
    to this script, it invokes the nvcc CUDA compiler. Most arguments are passed
    as is as a string to --compiler-options of nvcc. When "-x cuda" is not
    present, this wrapper invokes hybrid_driver_is_not_gcc with the input
    arguments as is.
    NOTES:
    Changes to the contents of this file must be propagated from
    //third_party/gpus/crosstool/crosstool_wrapper_is_not_gcc to
    //third_party/gpus/crosstool/v*/*/clang/bin/crosstool_wrapper_is_not_gcc
    """

    from __future__ import print_function

    __author__ = 'keveman@google.com (Manjunath Kudlur)'

    from argparse import ArgumentParser
    import os
    import subprocess
    import re
    import sys
    import pipes

    # Template values set by cuda_autoconf.
    CPU_COMPILER = ('%{cpu_compiler}')
    GCC_HOST_COMPILER_PATH = ('%{gcc_host_compiler_path}')

    CURRENT_DIR = os.path.dirname(sys.argv[0])
    NVCC_PATH = CURRENT_DIR + '/../../../cuda/bin/nvcc'
    LLVM_HOST_COMPILER_PATH = ('/work/thpaul/gcc/5.2.0/bin/gcc')
    PREFIX_DIR = os.path.dirname(GCC_HOST_COMPILER_PATH)

    def Log(s):
    print('gpus/crosstool: {0}'.format(s))


    def GetOptionValue(argv, option):
    """Extract the list of values for option from the argv list.
    Args:
    argv: A list of strings, possibly the argv passed to main().
    option: The option whose value to extract, without the leading '-'.
    Returns:
    A list of values, either directly following the option,
    (eg., -opt val1 val2) or values collected from multiple occurrences of
    the option (eg., -opt val1 -opt val2).
    """

    parser = ArgumentParser()
    parser.add_argument('-' + option, nargs='*', action='append')
    args, _ = parser.parse_known_args(argv)
    if not args or not vars(args)[option]:
    return []
    else:
    return sum(vars(args)[option], [])


    def GetHostCompilerOptions(argv):
    """Collect the -isystem, -iquote, and --sysroot option values from argv.
    Args:
    argv: A list of strings, possibly the argv passed to main().
    Returns:
    The string that can be used as the --compiler-options to nvcc.
    """

    parser = ArgumentParser()
    parser.add_argument('-isystem', nargs='*', action='append')
    parser.add_argument('-iquote', nargs='*', action='append')
    parser.add_argument('--sysroot', nargs=1)
    parser.add_argument('-g', nargs='*', action='append')
    parser.add_argument('-fno-canonical-system-headers', action='store_true')

    args, _ = parser.parse_known_args(argv)

    opts = ''

    if args.isystem:
    opts += ' -isystem ' + ' -isystem '.join(sum(args.isystem, []))
    if args.iquote:
    opts += ' -iquote ' + ' -iquote '.join(sum(args.iquote, []))
    if args.g:
    opts += ' -g' + ' -g'.join(sum(args.g, []))
    if args.fno_canonical_system_headers:
    opts += ' -fno-canonical-system-headers'
    if args.sysroot:
    opts += ' --sysroot ' + args.sysroot[0]

    return opts

    def GetNvccOptions(argv):
    """Collect the -nvcc_options values from argv.
    Args:
    argv: A list of strings, possibly the argv passed to main().
    Returns:
    The string that can be passed directly to nvcc.
    """

    parser = ArgumentParser()
    parser.add_argument('-nvcc_options', nargs='*', action='append')

    args, _ = parser.parse_known_args(argv)

    if args.nvcc_options:
    return ' '.join(['--'+a for a in sum(args.nvcc_options, [])])
    return ''


    def InvokeNvcc(argv, log=False):
    """Call nvcc with arguments assembled from argv.
    Args:
    argv: A list of strings, possibly the argv passed to main().
    log: True if logging is requested.
    Returns:
    The return value of calling os.system('nvcc ' + args)
    """

    host_compiler_options = GetHostCompilerOptions(argv)
    nvcc_compiler_options = GetNvccOptions(argv)
    opt_option = GetOptionValue(argv, 'O')
    m_options = GetOptionValue(argv, 'm')
    m_options = ''.join([' -m' + m for m in m_options if m in ['32', '64']])
    include_options = GetOptionValue(argv, 'I')
    out_file = GetOptionValue(argv, 'o')
    depfiles = GetOptionValue(argv, 'MF')
    defines = GetOptionValue(argv, 'D')
    defines = ''.join([' -D' + define for define in defines])
    undefines = GetOptionValue(argv, 'U')
    undefines = ''.join([' -U' + define for define in undefines])
    std_options = GetOptionValue(argv, 'std')
    # currently only c++11 is supported by Cuda 7.0 std argument
    nvcc_allowed_std_options = ["c++11"]
    std_options = ''.join([' -std=' + define
    for define in std_options if define in nvcc_allowed_std_options])

    # The list of source files get passed after the -c option. I don't know of
    # any other reliable way to just get the list of source files to be compiled.
    src_files = GetOptionValue(argv, 'c')

    if len(src_files) == 0:
    return 1
    if len(out_file) != 1:
    return 1

    opt = (' -O2' if (len(opt_option) > 0 and int(opt_option[0]) > 0)
    else ' -g -G')

    includes = (' -I ' + ' -I '.join(include_options)
    if len(include_options) > 0
    else '')

    # Unfortunately, there are other options that have -c prefix too.
    # So allowing only those look like C/C++ files.
    src_files = [f for f in src_files if
    re.search('\.cpp$|\.cc$|\.c$|\.cxx$|\.C$', f)]
    srcs = ' '.join(src_files)
    out = ' -o ' + out_file[0]

    supported_cuda_compute_capabilities = [ %{cuda_compute_capabilities} ]
    nvccopts = '-D_FORCE_INLINES '
    for capability in supported_cuda_compute_capabilities:
    capability = capability.replace('.', '')
    nvccopts += r'-gencode=arch=compute_%s,\"code=sm_%s,compute_%s\" ' % (
    capability, capability, capability)
    nvccopts += ' ' + nvcc_compiler_options
    nvccopts += undefines
    nvccopts += defines
    nvccopts += std_options
    nvccopts += m_options

    if depfiles:
    # Generate the dependency file
    depfile = depfiles[0]
    cmd = (NVCC_PATH + ' ' + nvccopts +
    ' --compiler-options "' + host_compiler_options + '"' +
    ' --compiler-bindir=' + GCC_HOST_COMPILER_PATH +
    ' -I .' +
    ' -x cu ' + includes + ' ' + srcs + ' -M -o ' + depfile)
    if log: Log(cmd)
    exit_status = os.system(cmd)
    if exit_status != 0:
    return exit_status

    cmd = (NVCC_PATH + ' ' + nvccopts +
    ' --compiler-options "' + host_compiler_options + ' -fPIC"' +
    ' --compiler-bindir=' + GCC_HOST_COMPILER_PATH +
    ' -I .' +
    ' -x cu ' + opt + includes + ' -c ' + srcs + out)

    # TODO(zhengxq): for some reason, 'gcc' needs this help to find 'as'.
    # Need to investigate and fix.
    cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd
    if log: Log(cmd)
    return os.system(cmd)


    def main():
    parser = ArgumentParser()
    parser.add_argument('-x', nargs=1)
    parser.add_argument('--cuda_log', action='store_true')
    args, leftover = parser.parse_known_args(sys.argv[1:])

    if args.x and args.x[0] == 'cuda':
    if args.cuda_log: Log('-x cuda')
    leftover = [pipes.quote(s) for s in leftover]
    if args.cuda_log: Log('using nvcc')
    return InvokeNvcc(leftover, log=args.cuda_log)

    # Strip our flags before passing through to the CPU compiler for files which
    # are not -x cuda. We can't just pass 'leftover' because it also strips -x.
    # We not only want to pass -x to the CPU compiler, but also keep it in its
    # relative location in the argv list (the compiler is actually sensitive to
    # this).
    cpu_compiler_flags = [flag for flag in sys.argv[1:]
    if not flag.startswith(('--cuda_log'))]

    return subprocess.call([CPU_COMPILER] + cpu_compiler_flags)

    if __name__ == '__main__':
    sys.exit(main())
    376 changes: 0 additions & 376 deletions protobuf.bzl
    @@ -1,376 +0,0 @@
    # -*- mode: python; -*- PYTHON-PREPROCESSING-REQUIRED

    def _GetPath(ctx, path):
    if ctx.label.workspace_root:
    return ctx.label.workspace_root + '/' + path
    else:
    return path

    def _GenDir(ctx):
    if not ctx.attr.includes:
    return ctx.label.workspace_root
    if not ctx.attr.includes[0]:
    return _GetPath(ctx, ctx.label.package)
    if not ctx.label.package:
    return _GetPath(ctx, ctx.attr.includes[0])
    return _GetPath(ctx, ctx.label.package + '/' + ctx.attr.includes[0])

    def _CcHdrs(srcs, use_grpc_plugin=False):
    ret = [s[:-len(".proto")] + ".pb.h" for s in srcs]
    if use_grpc_plugin:
    ret += [s[:-len(".proto")] + ".grpc.pb.h" for s in srcs]
    return ret

    def _CcSrcs(srcs, use_grpc_plugin=False):
    ret = [s[:-len(".proto")] + ".pb.cc" for s in srcs]
    if use_grpc_plugin:
    ret += [s[:-len(".proto")] + ".grpc.pb.cc" for s in srcs]
    return ret

    def _CcOuts(srcs, use_grpc_plugin=False):
    return _CcHdrs(srcs, use_grpc_plugin) + _CcSrcs(srcs, use_grpc_plugin)

    def _PyOuts(srcs):
    return [s[:-len(".proto")] + "_pb2.py" for s in srcs]

    def _RelativeOutputPath(path, include, dest=""):
    if include == None:
    return path

    if not path.startswith(include):
    fail("Include path %s isn't part of the path %s." % (include, path))

    if include and include[-1] != '/':
    include = include + '/'
    if dest and dest[-1] != '/':
    dest = dest + '/'

    path = path[len(include):]
    return dest + path

    def _proto_gen_impl(ctx):
    """General implementation for generating protos"""
    srcs = ctx.files.srcs
    deps = []
    deps += ctx.files.srcs
    gen_dir = _GenDir(ctx)
    if gen_dir:
    import_flags = ["-I" + gen_dir, "-I" + ctx.var["GENDIR"] + "/" + gen_dir]
    else:
    import_flags = ["-I."]

    for dep in ctx.attr.deps:
    import_flags += dep.proto.import_flags
    deps += dep.proto.deps

    args = []
    if ctx.attr.gen_cc:
    args += ["--cpp_out=" + ctx.var["GENDIR"] + "/" + gen_dir]
    if ctx.attr.gen_py:
    args += ["--python_out=" + ctx.var["GENDIR"] + "/" + gen_dir]

    inputs = srcs + deps
    if ctx.executable.plugin:
    plugin = ctx.executable.plugin
    lang = ctx.attr.plugin_language
    if not lang and plugin.basename.startswith('protoc-gen-'):
    lang = plugin.basename[len('protoc-gen-'):]
    if not lang:
    fail("cannot infer the target language of plugin", "plugin_language")

    outdir = ctx.var["GENDIR"] + "/" + gen_dir
    if ctx.attr.plugin_options:
    outdir = ",".join(ctx.attr.plugin_options) + ":" + outdir
    args += ["--plugin=protoc-gen-%s=%s" % (lang, plugin.path)]
    args += ["--%s_out=%s" % (lang, outdir)]
    inputs += [plugin]

    if args:
    ctx.action(
    inputs=inputs,
    outputs=ctx.outputs.outs,
    arguments=args + import_flags + [s.path for s in srcs],
    executable=ctx.executable.protoc,
    mnemonic="ProtoCompile",
    env=ctx.configuration.default_shell_env,
    )

    return struct(
    proto=struct(
    srcs=srcs,
    import_flags=import_flags,
    deps=deps,
    ),
    )

    proto_gen = rule(
    attrs = {
    "srcs": attr.label_list(allow_files = True),
    "deps": attr.label_list(providers = ["proto"]),
    "includes": attr.string_list(),
    "protoc": attr.label(
    cfg = "host",
    executable = True,
    single_file = True,
    mandatory = True,
    ),
    "plugin": attr.label(
    cfg = "host",
    allow_files = True,
    executable = True,
    ),
    "plugin_language": attr.string(),
    "plugin_options": attr.string_list(),
    "gen_cc": attr.bool(),
    "gen_py": attr.bool(),
    "outs": attr.output_list(),
    },
    output_to_genfiles = True,
    implementation = _proto_gen_impl,
    )
    """Generates codes from Protocol Buffers definitions.
    This rule helps you to implement Skylark macros specific to the target
    language. You should prefer more specific `cc_proto_library `,
    `py_proto_library` and others unless you are adding such wrapper macros.
    Args:
    srcs: Protocol Buffers definition files (.proto) to run the protocol compiler
    against.
    deps: a list of dependency labels; must be other proto libraries.
    includes: a list of include paths to .proto files.
    protoc: the label of the protocol compiler to generate the sources.
    plugin: the label of the protocol compiler plugin to be passed to the protocol
    compiler.
    plugin_language: the language of the generated sources
    plugin_options: a list of options to be passed to the plugin
    gen_cc: generates C++ sources in addition to the ones from the plugin.
    gen_py: generates Python sources in addition to the ones from the plugin.
    outs: a list of labels of the expected outputs from the protocol compiler.
    """

    def cc_proto_library(
    name,
    srcs=[],
    deps=[],
    cc_libs=[],
    include=None,
    protoc="//:protoc",
    internal_bootstrap_hack=False,
    use_grpc_plugin=False,
    default_runtime="//:protobuf",
    **kargs):
    """Bazel rule to create a C++ protobuf library from proto source files
    NOTE: the rule is only an internal workaround to generate protos. The
    interface may change and the rule may be removed when bazel has introduced
    the native rule.
    Args:
    name: the name of the cc_proto_library.
    srcs: the .proto files of the cc_proto_library.
    deps: a list of dependency labels; must be cc_proto_library.
    cc_libs: a list of other cc_library targets depended by the generated
    cc_library.
    include: a string indicating the include path of the .proto files.
    protoc: the label of the protocol compiler to generate the sources.
    internal_bootstrap_hack: a flag indicate the cc_proto_library is used only
    for bootstraping. When it is set to True, no files will be generated.
    The rule will simply be a provider for .proto files, so that other
    cc_proto_library can depend on it.
    use_grpc_plugin: a flag to indicate whether to call the grpc C++ plugin
    when processing the proto files.
    default_runtime: the implicitly default runtime which will be depended on by
    the generated cc_library target.
    **kargs: other keyword arguments that are passed to cc_library.
    """

    includes = []
    if include != None:
    includes = [include]

    if internal_bootstrap_hack:
    # For pre-checked-in generated files, we add the internal_bootstrap_hack
    # which will skip the codegen action.
    proto_gen(
    name=name + "_genproto",
    srcs=srcs,
    deps=[s + "_genproto" for s in deps],
    includes=includes,
    protoc=protoc,
    visibility=["//visibility:public"],
    )
    # An empty cc_library to make rule dependency consistent.
    native.cc_library(
    name=name,
    **kargs)
    return

    grpc_cpp_plugin = None
    if use_grpc_plugin:
    grpc_cpp_plugin = "//external:grpc_cpp_plugin"

    gen_srcs = _CcSrcs(srcs, use_grpc_plugin)
    gen_hdrs = _CcHdrs(srcs, use_grpc_plugin)
    outs = gen_srcs + gen_hdrs

    proto_gen(
    name=name + "_genproto",
    srcs=srcs,
    deps=[s + "_genproto" for s in deps],
    includes=includes,
    protoc=protoc,
    plugin=grpc_cpp_plugin,
    plugin_language="grpc",
    gen_cc=1,
    outs=outs,
    visibility=["//visibility:public"],
    )

    if default_runtime and not default_runtime in cc_libs:
    cc_libs += [default_runtime]
    if use_grpc_plugin:
    cc_libs += ["//external:grpc_lib"]

    native.cc_library(
    name=name,
    srcs=gen_srcs,
    hdrs=gen_hdrs,
    deps=cc_libs + deps,
    includes=includes,
    **kargs)


    def internal_gen_well_known_protos_java(srcs):
    """Bazel rule to generate the gen_well_known_protos_java genrule
    Args:
    srcs: the well known protos
    """
    root = Label("%s//protobuf_java" % (REPOSITORY_NAME)).workspace_root
    if root == "":
    include = " -Isrc "
    else:
    include = " -I%s/src " % root
    native.genrule(
    name = "gen_well_known_protos_java",
    srcs = srcs,
    outs = [
    "wellknown.srcjar",
    ],
    cmd = "$(location :protoc) --java_out=$(@D)/wellknown.jar" +
    " %s $(SRCS) " % include +
    " && mv $(@D)/wellknown.jar $(@D)/wellknown.srcjar",
    tools = [":protoc"],
    )


    def internal_copied_filegroup(name, srcs, strip_prefix, dest, **kwargs):
    """Macro to copy files to a different directory and then create a filegroup.
    This is used by the //:protobuf_python py_proto_library target to work around
    an issue caused by Python source files that are part of the same Python
    package being in separate directories.
    Args:
    srcs: The source files to copy and add to the filegroup.
    strip_prefix: Path to the root of the files to copy.
    dest: The directory to copy the source files into.
    **kwargs: extra arguments that will be passesd to the filegroup.
    """
    outs = [_RelativeOutputPath(s, strip_prefix, dest) for s in srcs]

    native.genrule(
    name = name + "_genrule",
    srcs = srcs,
    outs = outs,
    cmd = " && ".join(
    ["cp $(location %s) $(location %s)" %
    (s, _RelativeOutputPath(s, strip_prefix, dest)) for s in srcs]),
    )

    native.filegroup(
    name = name,
    srcs = outs,
    **kwargs)


    def py_proto_library(
    name,
    srcs=[],
    deps=[],
    py_libs=[],
    py_extra_srcs=[],
    include=None,
    default_runtime="//:protobuf_python",
    protoc="//:protoc",
    **kargs):
    """Bazel rule to create a Python protobuf library from proto source files
    NOTE: the rule is only an internal workaround to generate protos. The
    interface may change and the rule may be removed when bazel has introduced
    the native rule.
    Args:
    name: the name of the py_proto_library.
    srcs: the .proto files of the py_proto_library.
    deps: a list of dependency labels; must be py_proto_library.
    py_libs: a list of other py_library targets depended by the generated
    py_library.
    py_extra_srcs: extra source files that will be added to the output
    py_library. This attribute is used for internal bootstrapping.
    include: a string indicating the include path of the .proto files.
    default_runtime: the implicitly default runtime which will be depended on by
    the generated py_library target.
    protoc: the label of the protocol compiler to generate the sources.
    **kargs: other keyword arguments that are passed to cc_library.
    """
    outs = _PyOuts(srcs)

    includes = []
    if include != None:
    includes = [include]

    proto_gen(
    name=name + "_genproto",
    srcs=srcs,
    deps=[s + "_genproto" for s in deps],
    includes=includes,
    protoc=protoc,
    gen_py=1,
    outs=outs,
    visibility=["//visibility:public"],
    )

    if default_runtime and not default_runtime in py_libs + deps:
    py_libs += [default_runtime]

    native.py_library(
    name=name,
    srcs=outs+py_extra_srcs,
    deps=py_libs+deps,
    imports=includes,
    **kargs)

    def internal_protobuf_py_tests(
    name,
    modules=[],
    **kargs):
    """Bazel rules to create batch tests for protobuf internal.
    Args:
    name: the name of the rule.
    modules: a list of modules for tests. The macro will create a py_test for
    each of the parameter with the source "google/protobuf/%s.py"
    kargs: extra parameters that will be passed into the py_test.
    """
    for m in modules:
    s = "python/google/protobuf/internal/%s.py" % m
    native.py_test(
    name="py_%s" % m,
    srcs=[s],
    main=s,
    **kargs)
  6. taylorpaul revised this gist Nov 6, 2016. 1 changed file with 141 additions and 0 deletions.
    141 changes: 141 additions & 0 deletions buildtf.sh
    @@ -0,0 +1,141 @@
    #installs TF and all required dependencies except CUDNN* without root!
    #*Requires signing up for account to download! (Pretty easy, but do this first!)
    #https://developer.nvidia.com/cudnn
    #Original Environment: CENTOS 6.8, non-standard GCC = 5.2.0
    #To note, I copied every binary (ld, as, etc..) required by BAZEL (see tensorflow CROSSTOOL.tpl)
    #into my GCC_DIR!

    #TODO: There are a couple TODO's listed that will be system specific!

    # Ensure we can load CUDA drivers.
    module load cuda/8.0 || { echo 'Failed to load CUDA drivers. Are you not on a compute node?' ; exit 1; }

    #TODO: GCC_DIR if not standard system gcc (which gcc)
    STARTDIR=`pwd`/tf_tools
    GCC_DIR=/work/thpaul/gcc/5.2.0
    BAZEL_BIN_DIR=/work/thpaul/bin #/bin where to copy bazel binary
    PYTHON_INSTALL_DIR=python27
    JAVA_DIR=jdk1.8.0_102 #Directory your jdk .tar file extracts to (depends on which version you download)

    #VERSIONS
    PYTHON_VERSION=2.7.12
    JAVA_FILE=jdk-8u102-linux-x64 #Update Java version in DOWNLOADS too...
    BAZEL_VERSION=0.4.0 #TAG from github https://github.com/bazelbuild/bazel, not needed if using the latest release
    TF_VERSION=v0.11.0rc2 #TAG from https://github.com/tensorflow/tensorflow/releases, not needed if using the latest release

    #DOWNLOADS
    #https://www.python.org/ftp/python/2.7.12/Python-2.7.12.tgz
    wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz
    wget --no-check-certificate https://pypi.python.org/packages/source/s/setuptools/setuptools-1.4.2.tar.gz -O setuptools-1.4.2.tar.gz
    wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jdk-8u102-linux-x64.tar.gz

    echo "Buidling Directories"
    mkdir -p $STARTDIR
    cd $STARTDIR
    mkdir -p $PYTHON_INSTALL_DIR

    cd $PYTHON_INSTALL_DIR
    PYTHON_INSTALL_DIR=`pwd`
    cd ..

    # Set tmp directory to userspace
    mkdir -p tmp
    cd tmp
    TMPDIR=`pwd`
    cd ..

    #Unzip archives
    echo "Decompressing archives"
    tar zxvf ../Python-$PYTHON_VERSION.tgz
    tar --totals -xvf ../setuptools-1.4.2.tar.gz
    tar --totals -xvf ../$JAVA_FILE.tar.gz

    cd Python-$PYTHON_VERSION
    echo "Installing python at $PYTHON_INSTALL_DIR"
    ./configure --enable-shared --prefix=$PYTHON_INSTALL_DIR
    make
    make altinstall
    export PATH=$PYTHON_INSTALL_DIR/bin:$PATH
    cd ..

    echo "----- Installing Pip"
    cd setuptools-1.4.2
    export LD_LIBRARY_PATH=$PYTHON_INSTALL_DIR/lib:$LD_LIBRARY_PATH
    $PYTHON_INSTALL_DIR/bin/python2.7 setup.py install
    curl https://bootstrap.pypa.io/get-pip.py | $PYTHON_INSTALL_DIR/bin/python2.7 -
    pip install --no-cache-dir numpy
    cd ..

    cd $JAVA_DIR
    echo "Installing JAVA at `pwd`"
    #Save JAVA variables:
    JAVA_INSTALL_DIR=`pwd`
    export JAVA_HOME=$JAVA_INSTALL_DIR
    export JAVA_JRE=$JAVA_INSTALL_DIR/jre
    export PATH=$PATH:$JAVA_INSTALL_DIR/bin:$JAVA_INSTALL_DIR/jre/bin
    cd ..

    echo "Compiling bazel in `pwd`/bazel"
    git clone https://github.com/bazelbuild/bazel.git
    cd bazel
    git checkout $BAZEL_VERSION #TODO: Format specific to your git version, only needed if not using the bazel latest-release

    ./compile.sh
    wait #TODO: Seems to want to compile twice???
    cp ./output/bazel $BAZEL_BIN_DIR/bazel
    cd ..

    echo "Compiling tensorflow in `pwd`/tensorflow"
    git clone https://github.com/tensorflow/tensorflow.git
    cd tensorflow
    git checkout $TF_VERSION #TODO: Format specific to your git version if not latest-release

    TF_INSTALL_DIR=`pwd`

    # TODO: Adjust the configure file only if .cache is on an NFS and clean fails:
    cp configure configure_orig #just in case
    sed -i 's/bazel clean --expunge/bazel clean --expunge_async/g' configure

    #Modify tensorflow CROSSTOOL.tpl file:
    cp $TF_INSTALL_DIR/third_party/gpus/crosstool/CROSSTOOL.tpl ./third_party/gpus/crosstool/CROSSTOOL_ORIG.tpl
    sed -i 's#/usr/bin#'$GCC_DIR'/bin#g' $TF_INSTALL_DIR/third_party/gpus/crosstool/CROSSTOOL.tpl

    #Modify tensorflow crosstool_wrapper_driver_is_not_gcc.tpl file
    cp $TF_INSTALL_DIR/third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl \
    $TF_INSTALL_DIR/third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc_ORIG.tpl
    sed -i 's#/usr/bin/gcc#'$GCC_DIR'/bin/gcc#g' \
    $TF_INSTALL_DIR/third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl

    ./configure
    wait
    #Can't just use the basic call from tensorflow.org install directions:
    bazel build -c opt --config=cuda --genrule_strategy=standalone --spawn_strategy=standalone //tensorflow/tools/pip_package:build_pip_package
    wait

    #TODO: When/If the build fails with the GLIBCXX... error, run this afterwards:
    PROTOFILE=$(readlink -f -- "$TF_INSTALL_DIR/bazel-tensorflow/external/protobuf/protobuf.bzl")
    cp $TF_INSTALL_DIR/bazel-tensorflow/external/protobuf/protobuf.bzl $TF_INSTALL_DIR/bazel-tensorflow/external/protobuf/ORIG_protobuf.bzl

    sed -i 's/mnemonic="ProtoCompile",/mnemonic="ProtoCompile", env=ctx.configuration.default_shell_env,/g' \
    $PROTOFILE
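
    #Optional sanity check (my addition, not from the tutorial): confirm the env= argument actually landed before re-building
    grep -q 'default_shell_env' $PROTOFILE || echo 'WARNING: protobuf.bzl patch did not apply!'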

    bazel build -c opt --config=cuda --genrule_strategy=standalone --spawn_strategy=standalone //tensorflow/tools/pip_package:build_pip_package
    wait

    bazel-bin/tensorflow/tools/pip_package/build_pip_package $TMPDIR/tensorflow_pkg

    #Get name of the created whl file:
    for filename in $TMPDIR/tensorflow_pkg/*;
    do
    export TF_WHEEL_FILE=$filename
    done

    #Finally install TF!
    pip install $TF_WHEEL_FILE

    echo "====================CAVEATS=============================="
    echo "Don't forget to update necessary Environment Variables for in .bash_profile!"
    echo 'echo "export PATH='$JAVA_INSTALL_DIR'/bin:'$JAVA_INSTALL_DIR'jre/bin:$PATH" >> ~/.bash_profile'
    echo 'echo "export PATH='$PYTHON_INSTALL_DIR'/bin:'$BAZEL_BIN_DIR'/bin:$PATH" >> ~/.bash_profile'
    echo 'echo "export JAVA_HOME='$JAVA_INSTALL_DIR'" >> ~/.bash_profile'
    echo 'echo "export JAVA_JRE='$JAVA_INSTALL_DIR'/jre" >> ~/.bash_profile'
  7. taylorpaul revised this gist Nov 6, 2016. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion README.md
    @@ -43,7 +43,7 @@ Again these steps led to the below error which took me forever to get past:

    As described in **gbkedar's** comment from Jul 12. You have to find this file:

    `$INSTALL_PATH/tensorflow/bazel-tensorflow/external/protobuf`
    `$INSTALL_PATH/tensorflow/bazel-tensorflow/external/protobuf/protobuf.bzl`

    But, until the compile fails this file is harder to find. The failure creates the shortcut in the `/tensorflow` directory. I was running into issues re-attempting the compile and had to run `./configure` almost every time. Therefore, I had to find this file before the first failure of my compile attempt. The file should be located somewhere similar to this after running `./configure` from the `/tensorflow` directory:

  8. taylorpaul revised this gist Nov 4, 2016. 1 changed file with 5 additions and 2 deletions.
    7 changes: 5 additions & 2 deletions README.md
    @@ -32,18 +32,21 @@ cp `which ld` /opt/gcc/5.2.0/bin/ld (repeat for any command listed in the crosst
    **Note2:** I downloaded a newer release of bazel and tensorflow as noted above and there are fewer changes required in the latest versions of the crosstool than described in the tutorial, I posted my final CROSSTOOL versions from these directories below:

    1) /tensorflow/third_party/gpus/crosstool/

    2) /tensorflow/third_party/gpus/crosstool/clang/bin/

    Again these steps led to the below error which took me forever to get past:

    [GLIBCXX_3.4.18 not found errro](https://github.com/bazelbuild/bazel/issues/1358)
    [GLIBCXX_3.4.18 not found error](https://github.com/bazelbuild/bazel/issues/1358)

    #### Getting Past GLIBCXX_3.4.18 Error:

    As described in **gbkedar's** comment from Jul 12. You have to find this file:

    `$INSTALL_PATH/tensorflow/bazel-tensorflow/external/protobuf`

    The trick is, until the compile fails this file is harder to find. The failure creates the shortcut in the `/tensorflow` directory after failure. I was running into issues re-attempting the recompile and had to run `./configure` almost everytime. Therefore, I had to find this file before the first failure of my compile attempt. The file should be located somewhere similar to this after running `./configure` from the `/tensorflow` directory:
    But, until the compile fails this file is harder to find. The failure creates the shortcut in the `/tensorflow` directory after failure. I was running into issues re-attempting the compile and had to run `./configure` almost every time. Therefore, I had to find this file before the first failure of my compile attempt. The file should be located somewhere similar to this after running `./configure` from the `/tensorflow` directory:

    `~/.cache/bazel/_bazel_YOURUSERNAME/YOURHASH(i.e. f81f1107f96c7515450fc43e0dbb6ed5)/external/protobuf/protobuf.bzl`

    If you have several hashes, check the files that were modified at the time corresponding to your `./configure` run.
  9. taylorpaul revised this gist Nov 4, 2016. 1 changed file with 6 additions and 3 deletions.
    9 changes: 6 additions & 3 deletions README.md
    @@ -1,17 +1,20 @@
    ### Environment:
    **OS:** CENTOS 6.8 (No root access)

    **GCC:** locally installed 5.2.0 (Cluster default is 4.4.7)

    **Bazel:** 0.3.2-2016-11-02 (@03afc7d)

    **Tensorflow:** v0.11.0rc1

    **CUDA:** 8.0

    **CUDNN:** 5.1.5

    ### Steps:

    #### Installing Java Locally:
    [Follow this Tutorial](http://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/#)

    or download prefered version of JDK 8.0 and set proper environment variables as described in the tutorial.
    [Follow this Tutorial](http://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/#) or download your preferred version of JDK 8.0 and set the proper environment variables as described in the tutorial.
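
    In short, the variables that need to end up in your environment are (mirroring buildtf.sh; `$JAVA_INSTALL_DIR` is wherever the JDK tarball extracted):
    ```
    export JAVA_HOME=$JAVA_INSTALL_DIR
    export JAVA_JRE=$JAVA_INSTALL_DIR/jre
    export PATH=$PATH:$JAVA_INSTALL_DIR/bin:$JAVA_INSTALL_DIR/jre/bin
    ```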

    #### Compiling Bazel, Compiling and Installing Tensorflow:
    [Great Tutorial that got me to the error below!](http://thelazylog.com/install-tensorflow-with-gpu-support-on-sandbox-redhat/)
  10. taylorpaul created this gist Nov 4, 2016.
    248 changes: 248 additions & 0 deletions CROSSTOOL.tpl
    @@ -0,0 +1,248 @@
    major_version: "local"
    minor_version: ""
    default_target_cpu: "same_as_host"

    default_toolchain {
    cpu: "k8"
    toolchain_identifier: "local_linux"
    }
    default_toolchain {
    cpu: "piii"
    toolchain_identifier: "local_linux"
    }
    default_toolchain {
    cpu: "arm"
    toolchain_identifier: "local_linux"
    }
    default_toolchain {
    cpu: "darwin"
    toolchain_identifier: "local_darwin"
    }
    default_toolchain {
    cpu: "ppc"
    toolchain_identifier: "local_linux"
    }

    toolchain {
    abi_version: "local"
    abi_libc_version: "local"
    builtin_sysroot: ""
    compiler: "compiler"
    host_system_name: "local"
    needsPic: true
    supports_gold_linker: false
    supports_incremental_linker: false
    supports_fission: false
    supports_interface_shared_objects: false
    supports_normalizing_ar: false
    supports_start_end_lib: false
    supports_thin_archives: false
    target_libc: "local"
    target_cpu: "local"
    target_system_name: "local"
    toolchain_identifier: "local_linux"
    tool_path { name: "ar" path: "/work/thpaul/gcc/5.2.0/bin/ar" }
    tool_path { name: "compat-ld" path: "/work/thpaul/gcc/5.2.0/bin/ld" }
    tool_path { name: "cpp" path: "/work/thpaul/gcc/5.2.0/bin/cpp" }
    tool_path { name: "dwp" path: "/usr/bin/dwp" }
    # As part of the TensorFlow release, we place some cuda-related compilation
    # files in @local_config_cuda//crosstool/clang/bin, and this relative
    # path, combined with the rest of our Bazel configuration causes our
    # compilation to use those files.
    tool_path { name: "gcc" path: "clang/bin/crosstool_wrapper_driver_is_not_gcc" }
    # Use "-std=c++11" for nvcc. For consistency, force both the host compiler
    # and the device compiler to use "-std=c++11".
    cxx_flag: "-std=c++11"
    linker_flag: "-lstdc++"
    linker_flag: "-B/work/thpaul/gcc/5.2.0/bin" # /opt/gcc/5.2.0/bin

    %{gcc_host_compiler_includes}
    tool_path { name: "gcov" path: "/work/thpaul/gcc/5.2.0/bin/gcov" }

    # C(++) compiles invoke the compiler (as that is the one knowing where
    # to find libraries), but we provide LD so other rules can invoke the linker.
    tool_path { name: "ld" path: "/work/thpaul/gcc/5.2.0/bin/ld" }

    tool_path { name: "nm" path: "/work/thpaul/gcc/5.2.0/bin/nm" }
    tool_path { name: "objcopy" path: "/work/thpaul/gcc/5.2.0/bin/objcopy" }
    objcopy_embed_flag: "-I"
    objcopy_embed_flag: "binary"
    tool_path { name: "objdump" path: "/work/thpaul/gcc/5.2.0/bin/objdump" }
    tool_path { name: "strip" path: "/work/thpaul/gcc/5.2.0/bin/strip" }

    # Anticipated future default.
    unfiltered_cxx_flag: "-no-canonical-prefixes"

    # Make C++ compilation deterministic. Use linkstamping instead of these
    # compiler symbols.
    unfiltered_cxx_flag: "-Wno-builtin-macro-redefined"
    unfiltered_cxx_flag: "-D__DATE__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIMESTAMP__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIME__=\"redacted\""

    # Security hardening on by default.
    # Conservative choice; -D_FORTIFY_SOURCE=2 may be unsafe in some cases.
    # We need to undef it before redefining it as some distributions now have
    # it enabled by default.
    compiler_flag: "-U_FORTIFY_SOURCE"
    compiler_flag: "-D_FORTIFY_SOURCE=1"
    compiler_flag: "-fstack-protector"
    compiler_flag: "-fPIE"
    linker_flag: "-pie"
    linker_flag: "-Wl,-z,relro,-z,now"

    # Enable coloring even if there's no attached terminal. Bazel removes the
    # escape sequences if --nocolor is specified. This isn't supported by gcc
    # on Ubuntu 14.04.
    # compiler_flag: "-fcolor-diagnostics"

    # All warnings are enabled. Maybe enable -Werror as well?
    compiler_flag: "-Wall"
    # Enable a few more warnings that aren't part of -Wall.
    compiler_flag: "-Wunused-but-set-parameter"
    # But disable some that are problematic.
    compiler_flag: "-Wno-free-nonheap-object" # has false positives

    # Keep stack frames for debugging, even in opt mode.
    compiler_flag: "-fno-omit-frame-pointer"

    # Anticipated future default.
    linker_flag: "-no-canonical-prefixes"
    unfiltered_cxx_flag: "-fno-canonical-system-headers"
    # Have gcc return the exit code from ld.
    linker_flag: "-pass-exit-codes"
    # Stamp the binary with a unique identifier.
    linker_flag: "-Wl,--build-id=md5"
    linker_flag: "-Wl,--hash-style=gnu"
    # Gold linker only? Can we enable this by default?
    # linker_flag: "-Wl,--warn-execstack"
    # linker_flag: "-Wl,--detect-odr-violations"

    # Include directory for cuda headers.
    cxx_builtin_include_directory: "%{cuda_include_path}"

    compilation_mode_flags {
    mode: DBG
    # Enable debug symbols.
    compiler_flag: "-g"
    }
    compilation_mode_flags {
    mode: OPT
    # No debug symbols.
    # Maybe we should enable https://gcc.gnu.org/wiki/DebugFission for opt or
    # even generally? However, that can't happen here, as it requires special
    # handling in Bazel.
    compiler_flag: "-g0"
    # Conservative choice for -O
    # -O3 can increase binary size and even slow down the resulting binaries.
    # Profile first and / or use FDO if you need better performance than this.
    compiler_flag: "-O2"
    # Disable assertions
    compiler_flag: "-DNDEBUG"
    # Removal of unused code and data at link time (can this increase binary size in some cases?).
    compiler_flag: "-ffunction-sections"
    compiler_flag: "-fdata-sections"
    linker_flag: "-Wl,--gc-sections"
    }
    linking_mode_flags { mode: DYNAMIC }
    }
    toolchain {
    abi_version: "local"
    abi_libc_version: "local"
    builtin_sysroot: ""
    compiler: "compiler"
    host_system_name: "local"
    needsPic: true
    target_libc: "macosx"
    target_cpu: "darwin"
    target_system_name: "local"
    toolchain_identifier: "local_darwin"
    tool_path { name: "ar" path: "/usr/bin/libtool" }
    tool_path { name: "compat-ld" path: "/usr/bin/ld" }
    tool_path { name: "cpp" path: "/usr/bin/cpp" }
    tool_path { name: "dwp" path: "/usr/bin/dwp" }
    tool_path { name: "gcc" path: "clang/bin/crosstool_wrapper_driver_is_not_gcc" }
    cxx_flag: "-std=c++11"
    ar_flag: "-static"
    ar_flag: "-s"
    ar_flag: "-o"
    linker_flag: "-lc++"
    linker_flag: "-undefined"
    linker_flag: "dynamic_lookup"
    # TODO(ulfjack): This is wrong on so many levels. Figure out a way to auto-detect the proper
    # setting from the local compiler, and also how to make incremental builds correct.
    cxx_builtin_include_directory: "/"
    tool_path { name: "gcov" path: "/usr/bin/gcov" }
    tool_path { name: "ld" path: "/usr/bin/ld" }
    tool_path { name: "nm" path: "/usr/bin/nm" }
    tool_path { name: "objcopy" path: "/usr/bin/objcopy" }
    objcopy_embed_flag: "-I"
    objcopy_embed_flag: "binary"
    tool_path { name: "objdump" path: "/usr/bin/objdump" }
    tool_path { name: "strip" path: "/usr/bin/strip" }
    # Anticipated future default.
    unfiltered_cxx_flag: "-no-canonical-prefixes"
    # Make C++ compilation deterministic. Use linkstamping instead of these
    # compiler symbols.
    unfiltered_cxx_flag: "-Wno-builtin-macro-redefined"
    unfiltered_cxx_flag: "-D__DATE__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIMESTAMP__=\"redacted\""
    unfiltered_cxx_flag: "-D__TIME__=\"redacted\""
    # Security hardening on by default.
    # Conservative choice; -D_FORTIFY_SOURCE=2 may be unsafe in some cases.
    compiler_flag: "-D_FORTIFY_SOURCE=1"
    compiler_flag: "-fstack-protector"
    # Enable coloring even if there's no attached terminal. Bazel removes the
    # escape sequences if --nocolor is specified.
    compiler_flag: "-fcolor-diagnostics"
    # All warnings are enabled. Maybe enable -Werror as well?
    compiler_flag: "-Wall"
    # Enable a few more warnings that aren't part of -Wall.
    compiler_flag: "-Wthread-safety"
    compiler_flag: "-Wself-assign"
    # Keep stack frames for debugging, even in opt mode.
    compiler_flag: "-fno-omit-frame-pointer"
    # Anticipated future default.
    linker_flag: "-no-canonical-prefixes"
    # Include directory for cuda headers.
    cxx_builtin_include_directory: "%{cuda_include_path}"
    compilation_mode_flags {
    mode: DBG
    # Enable debug symbols.
    compiler_flag: "-g"
    }
    compilation_mode_flags {
    mode: OPT
    # No debug symbols.
    # Maybe we should enable https://gcc.gnu.org/wiki/DebugFission for opt or even generally?
    # However, that can't happen here, as it requires special handling in Bazel.
    compiler_flag: "-g0"
    # Conservative choice for -O
    # -O3 can increase binary size and even slow down the resulting binaries.
    # Profile first and / or use FDO if you need better performance than this.
    compiler_flag: "-O2"
    # Disable assertions
    compiler_flag: "-DNDEBUG"
    # Removal of unused code and data at link time (can this increase binary size in some cases?).
    compiler_flag: "-ffunction-sections"
    compiler_flag: "-fdata-sections"
    }
    }
### README.md
### Environment:

**OS:** CENTOS 6.8 (No root access)

**GCC:** locally installed 5.2.0 (Cluster default is 4.4.7)

**Bazel:** 0.3.2-2016-11-02 (@03afc7d)

**Tensorflow:** v0.11.0rc1

**CUDA:** 8.0

**CUDNN:** 5.1.5

    ### Steps:

    #### Installing Java Locally:
    [Follow this Tutorial](http://tecadmin.net/install-java-8-on-centos-rhel-and-fedora/#)

or download your preferred version of JDK 8 and set the proper environment variables as described in the tutorial.
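
For reference, the environment variables end up looking something like this (a sketch; the extraction path is just an example, point `JAVA_HOME` at wherever you unpacked the JDK):

```
export JAVA_HOME=$HOME/jdk1.8.0_102
export PATH=$JAVA_HOME/bin:$PATH
```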

    #### Compiling Bazel, Compiling and Installing Tensorflow:
    [Great Tutorial that got me to the error below!](http://thelazylog.com/install-tensorflow-with-gpu-support-on-sandbox-redhat/)

**Note:** After changing the linker line to your local or module GCC, if you get errors about finding `ld` or other executables that are stored in `/usr/bin`, here is the workaround I used (it isn't pretty and you might not need it, but just in case):

    1) Copy your compiler directory (`/opt/gcc/5.2.0`) to a local directory that you have permissions to modify.

    2) Then run:
```
cp `which ld` /opt/gcc/5.2.0/bin/ld
```
(Repeat for any command listed in the crosstool that doesn't already reside in your GCC `/bin` directory; a loop that automates this is sketched below.)
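
A minimal loop version of the same trick (a sketch: the tool list is just an example of binutils commands a crosstool commonly calls, and `/opt/gcc/5.2.0` is assumed to be your writable compiler copy; adjust both to match your setup):

```
for tool in ld as ar nm objcopy objdump strip; do
  test -e /opt/gcc/5.2.0/bin/$tool || cp `which $tool` /opt/gcc/5.2.0/bin/$tool
done
```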

**Note2:** I downloaded a newer release of bazel and tensorflow as noted above, and the latest versions of the crosstool require fewer changes than described in the tutorial. I posted my final CROSSTOOL versions from these directories below:

    1) /tensorflow/third_party/gpus/crosstool/
    2) /tensorflow/third_party/gpus/crosstool/clang/bin/

Again, these steps led to the error below, which took me forever to get past:

[GLIBCXX_3.4.18 not found error](https://github.com/bazelbuild/bazel/issues/1358)

#### Getting Past the GLIBCXX_3.4.18 Error:

As described in **gbkedar's** comment from Jul 12, you have to find this file under:
`$INSTALL_PATH/tensorflow/bazel-tensorflow/external/protobuf`

The trick is that this file is harder to find until the compile fails, because the failure is what creates the shortcut in the `/tensorflow` directory. I was running into issues re-attempting the recompile and had to run `./configure` almost every time, so I had to find this file before the first failure of my compile attempt. The file should be located somewhere similar to this after running `./configure` from the `/tensorflow` directory:
    `~/.cache/bazel/_bazel_YOURUSERNAME/YOURHASH(i.e. f81f1107f96c7515450fc43e0dbb6ed5)/external/protobuf/protobuf.bzl`

If you have several hashes, check the files that were modified at the time corresponding to your `./configure` run; a quick way to list them is shown below.
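
A one-liner to list the candidates, newest first (a sketch, assuming Bazel's default output root under `~/.cache` and that the hash directories belong to your user):

```
ls -lt ~/.cache/bazel/_bazel_$USER/*/external/protobuf/protobuf.bzl
```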

As described in the error link above, search for `ctx.action` and add `env=ctx.configuration.default_shell_env,` at the bottom of the call like so:

    ```
if args:
  ctx.action(
      inputs=inputs,
      outputs=ctx.outputs.outs,
      arguments=args + import_flags + [s.path for s in srcs],
      executable=ctx.executable.protoc,
      mnemonic="ProtoCompile",
      env=ctx.configuration.default_shell_env,
  )
    ```

**Note:** The `protobuf.bzl` file from `~/.cache/bazel/_bazel_YOURUSERNAME/YOURHASH(i.e. f81f1107f96c7515450fc43e0dbb6ed5)/external/protobuf/protobuf.bzl` is posted below!

You will then likely hit `error trying to exec 'as': execvp: No such file or directory`. Since I am a self-confessed Linux noob, you have to use the few tricks you know as much as possible (I didn't follow **gbkedar's** 2nd comment):
    ```
    cp `which as` /opt/gcc/5.2.0/bin/as
    ```
    After this change, tensorflow finally compiled successfully for me!

    #### Building .whl file:

    Going back to our [tutorial](http://thelazylog.com/install-tensorflow-with-gpu-support-on-sandbox-redhat/) I ran this command:

    `bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg`

and received the `bdist_wheel` not found error... I solved this by using pip to install a new version of wheel locally:

`pip install --target=/home/thpaul/python27-packages wheel`

    and then added that directory to my $PYTHONPATH variable:

    `export PYTHONPATH=/home/thpaul/python27-packages/:$PYTHONPATH`
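
A quick sanity check before re-running the build step (assuming `python` is the same interpreter the build uses):

```
python -c "import wheel; print(wheel.__version__)"
```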

Re-running the command builds the proper .whl file, which you can install via pip. I did this using `virtualenv` as described [here](https://www.tensorflow.org/versions/r0.11/get_started/os_setup.html#virtualenv-installation), but pointing pip to the created .whl file instead.
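
A minimal sketch of that last step (the env path is just an example, and the exact wheel filename depends on your Python and platform tags, hence the glob):

```
virtualenv ~/tf-env
source ~/tf-env/bin/activate
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```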

    Hope this helps anyone trying to compile tensorflow from source!
### crosstool_wrapper_driver_is_not_gcc.tpl
    #!/usr/bin/env python
    # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # ==============================================================================

    """Crosstool wrapper for compiling CUDA programs.
    SYNOPSIS:
    crosstool_wrapper_is_not_gcc [options passed in by cc_library()
    or cc_binary() rule]
    DESCRIPTION:
    This script is expected to be called by the cc_library() or cc_binary() bazel
    rules. When the option "-x cuda" is present in the list of arguments passed
    to this script, it invokes the nvcc CUDA compiler. Most arguments are passed
    as is as a string to --compiler-options of nvcc. When "-x cuda" is not
    present, this wrapper invokes hybrid_driver_is_not_gcc with the input
    arguments as is.
    NOTES:
    Changes to the contents of this file must be propagated from
    //third_party/gpus/crosstool/crosstool_wrapper_is_not_gcc to
    //third_party/gpus/crosstool/v*/*/clang/bin/crosstool_wrapper_is_not_gcc
    """

    from __future__ import print_function

    __author__ = 'keveman@google.com (Manjunath Kudlur)'

    from argparse import ArgumentParser
    import os
    import subprocess
    import re
    import sys
    import pipes

    # Template values set by cuda_autoconf.
    CPU_COMPILER = ('%{cpu_compiler}')
    GCC_HOST_COMPILER_PATH = ('%{gcc_host_compiler_path}')

    CURRENT_DIR = os.path.dirname(sys.argv[0])
    NVCC_PATH = CURRENT_DIR + '/../../../cuda/bin/nvcc'
    LLVM_HOST_COMPILER_PATH = ('/work/thpaul/gcc/5.2.0/bin/gcc')
    PREFIX_DIR = os.path.dirname(GCC_HOST_COMPILER_PATH)

def Log(s):
  print('gpus/crosstool: {0}'.format(s))


def GetOptionValue(argv, option):
  """Extract the list of values for option from the argv list.

  Args:
    argv: A list of strings, possibly the argv passed to main().
    option: The option whose value to extract, without the leading '-'.

  Returns:
    A list of values, either directly following the option,
    (eg., -opt val1 val2) or values collected from multiple occurrences of
    the option (eg., -opt val1 -opt val2).
  """

  parser = ArgumentParser()
  parser.add_argument('-' + option, nargs='*', action='append')
  args, _ = parser.parse_known_args(argv)
  if not args or not vars(args)[option]:
    return []
  else:
    return sum(vars(args)[option], [])


def GetHostCompilerOptions(argv):
  """Collect the -isystem, -iquote, and --sysroot option values from argv.

  Args:
    argv: A list of strings, possibly the argv passed to main().

  Returns:
    The string that can be used as the --compiler-options to nvcc.
  """

  parser = ArgumentParser()
  parser.add_argument('-isystem', nargs='*', action='append')
  parser.add_argument('-iquote', nargs='*', action='append')
  parser.add_argument('--sysroot', nargs=1)
  parser.add_argument('-g', nargs='*', action='append')
  parser.add_argument('-fno-canonical-system-headers', action='store_true')

  args, _ = parser.parse_known_args(argv)

  opts = ''

  if args.isystem:
    opts += ' -isystem ' + ' -isystem '.join(sum(args.isystem, []))
  if args.iquote:
    opts += ' -iquote ' + ' -iquote '.join(sum(args.iquote, []))
  if args.g:
    opts += ' -g' + ' -g'.join(sum(args.g, []))
  if args.fno_canonical_system_headers:
    opts += ' -fno-canonical-system-headers'
  if args.sysroot:
    opts += ' --sysroot ' + args.sysroot[0]

  return opts

def GetNvccOptions(argv):
  """Collect the -nvcc_options values from argv.

  Args:
    argv: A list of strings, possibly the argv passed to main().

  Returns:
    The string that can be passed directly to nvcc.
  """

  parser = ArgumentParser()
  parser.add_argument('-nvcc_options', nargs='*', action='append')

  args, _ = parser.parse_known_args(argv)

  if args.nvcc_options:
    return ' '.join(['--'+a for a in sum(args.nvcc_options, [])])
  return ''


def InvokeNvcc(argv, log=False):
  """Call nvcc with arguments assembled from argv.

  Args:
    argv: A list of strings, possibly the argv passed to main().
    log: True if logging is requested.

  Returns:
    The return value of calling os.system('nvcc ' + args)
  """

  host_compiler_options = GetHostCompilerOptions(argv)
  nvcc_compiler_options = GetNvccOptions(argv)
  opt_option = GetOptionValue(argv, 'O')
  m_options = GetOptionValue(argv, 'm')
  m_options = ''.join([' -m' + m for m in m_options if m in ['32', '64']])
  include_options = GetOptionValue(argv, 'I')
  out_file = GetOptionValue(argv, 'o')
  depfiles = GetOptionValue(argv, 'MF')
  defines = GetOptionValue(argv, 'D')
  defines = ''.join([' -D' + define for define in defines])
  undefines = GetOptionValue(argv, 'U')
  undefines = ''.join([' -U' + define for define in undefines])
  std_options = GetOptionValue(argv, 'std')
  # currently only c++11 is supported by Cuda 7.0 std argument
  nvcc_allowed_std_options = ["c++11"]
  std_options = ''.join([' -std=' + define
      for define in std_options if define in nvcc_allowed_std_options])

  # The list of source files get passed after the -c option. I don't know of
  # any other reliable way to just get the list of source files to be compiled.
  src_files = GetOptionValue(argv, 'c')

  if len(src_files) == 0:
    return 1
  if len(out_file) != 1:
    return 1

  opt = (' -O2' if (len(opt_option) > 0 and int(opt_option[0]) > 0)
         else ' -g -G')

  includes = (' -I ' + ' -I '.join(include_options)
              if len(include_options) > 0
              else '')

  # Unfortunately, there are other options that have -c prefix too.
  # So allowing only those look like C/C++ files.
  src_files = [f for f in src_files if
               re.search('\.cpp$|\.cc$|\.c$|\.cxx$|\.C$', f)]
  srcs = ' '.join(src_files)
  out = ' -o ' + out_file[0]

  supported_cuda_compute_capabilities = [ %{cuda_compute_capabilities} ]
  nvccopts = '-D_FORCE_INLINES '
  for capability in supported_cuda_compute_capabilities:
    capability = capability.replace('.', '')
    nvccopts += r'-gencode=arch=compute_%s,\"code=sm_%s,compute_%s\" ' % (
        capability, capability, capability)
  nvccopts += ' ' + nvcc_compiler_options
  nvccopts += undefines
  nvccopts += defines
  nvccopts += std_options
  nvccopts += m_options

  if depfiles:
    # Generate the dependency file
    depfile = depfiles[0]
    cmd = (NVCC_PATH + ' ' + nvccopts +
           ' --compiler-options "' + host_compiler_options + '"' +
           ' --compiler-bindir=' + GCC_HOST_COMPILER_PATH +
           ' -I .' +
           ' -x cu ' + includes + ' ' + srcs + ' -M -o ' + depfile)
    if log: Log(cmd)
    exit_status = os.system(cmd)
    if exit_status != 0:
      return exit_status

  cmd = (NVCC_PATH + ' ' + nvccopts +
         ' --compiler-options "' + host_compiler_options + ' -fPIC"' +
         ' --compiler-bindir=' + GCC_HOST_COMPILER_PATH +
         ' -I .' +
         ' -x cu ' + opt + includes + ' -c ' + srcs + out)

  # TODO(zhengxq): for some reason, 'gcc' needs this help to find 'as'.
  # Need to investigate and fix.
  cmd = 'PATH=' + PREFIX_DIR + ' ' + cmd
  if log: Log(cmd)
  return os.system(cmd)


def main():
  parser = ArgumentParser()
  parser.add_argument('-x', nargs=1)
  parser.add_argument('--cuda_log', action='store_true')
  args, leftover = parser.parse_known_args(sys.argv[1:])

  if args.x and args.x[0] == 'cuda':
    if args.cuda_log: Log('-x cuda')
    leftover = [pipes.quote(s) for s in leftover]
    if args.cuda_log: Log('using nvcc')
    return InvokeNvcc(leftover, log=args.cuda_log)

  # Strip our flags before passing through to the CPU compiler for files which
  # are not -x cuda. We can't just pass 'leftover' because it also strips -x.
  # We not only want to pass -x to the CPU compiler, but also keep it in its
  # relative location in the argv list (the compiler is actually sensitive to
  # this).
  cpu_compiler_flags = [flag for flag in sys.argv[1:]
                        if not flag.startswith(('--cuda_log'))]

  return subprocess.call([CPU_COMPILER] + cpu_compiler_flags)

if __name__ == '__main__':
  sys.exit(main())
### protobuf.bzl
    # -*- mode: python; -*- PYTHON-PREPROCESSING-REQUIRED

def _GetPath(ctx, path):
  if ctx.label.workspace_root:
    return ctx.label.workspace_root + '/' + path
  else:
    return path

def _GenDir(ctx):
  if not ctx.attr.includes:
    return ctx.label.workspace_root
  if not ctx.attr.includes[0]:
    return _GetPath(ctx, ctx.label.package)
  if not ctx.label.package:
    return _GetPath(ctx, ctx.attr.includes[0])
  return _GetPath(ctx, ctx.label.package + '/' + ctx.attr.includes[0])

def _CcHdrs(srcs, use_grpc_plugin=False):
  ret = [s[:-len(".proto")] + ".pb.h" for s in srcs]
  if use_grpc_plugin:
    ret += [s[:-len(".proto")] + ".grpc.pb.h" for s in srcs]
  return ret

def _CcSrcs(srcs, use_grpc_plugin=False):
  ret = [s[:-len(".proto")] + ".pb.cc" for s in srcs]
  if use_grpc_plugin:
    ret += [s[:-len(".proto")] + ".grpc.pb.cc" for s in srcs]
  return ret

def _CcOuts(srcs, use_grpc_plugin=False):
  return _CcHdrs(srcs, use_grpc_plugin) + _CcSrcs(srcs, use_grpc_plugin)

def _PyOuts(srcs):
  return [s[:-len(".proto")] + "_pb2.py" for s in srcs]

def _RelativeOutputPath(path, include, dest=""):
  if include == None:
    return path

  if not path.startswith(include):
    fail("Include path %s isn't part of the path %s." % (include, path))

  if include and include[-1] != '/':
    include = include + '/'
  if dest and dest[-1] != '/':
    dest = dest + '/'

  path = path[len(include):]
  return dest + path

def _proto_gen_impl(ctx):
  """General implementation for generating protos"""
  srcs = ctx.files.srcs
  deps = []
  deps += ctx.files.srcs
  gen_dir = _GenDir(ctx)
  if gen_dir:
    import_flags = ["-I" + gen_dir, "-I" + ctx.var["GENDIR"] + "/" + gen_dir]
  else:
    import_flags = ["-I."]

  for dep in ctx.attr.deps:
    import_flags += dep.proto.import_flags
    deps += dep.proto.deps

  args = []
  if ctx.attr.gen_cc:
    args += ["--cpp_out=" + ctx.var["GENDIR"] + "/" + gen_dir]
  if ctx.attr.gen_py:
    args += ["--python_out=" + ctx.var["GENDIR"] + "/" + gen_dir]

  inputs = srcs + deps
  if ctx.executable.plugin:
    plugin = ctx.executable.plugin
    lang = ctx.attr.plugin_language
    if not lang and plugin.basename.startswith('protoc-gen-'):
      lang = plugin.basename[len('protoc-gen-'):]
    if not lang:
      fail("cannot infer the target language of plugin", "plugin_language")

    outdir = ctx.var["GENDIR"] + "/" + gen_dir
    if ctx.attr.plugin_options:
      outdir = ",".join(ctx.attr.plugin_options) + ":" + outdir
    args += ["--plugin=protoc-gen-%s=%s" % (lang, plugin.path)]
    args += ["--%s_out=%s" % (lang, outdir)]
    inputs += [plugin]

  if args:
    ctx.action(
        inputs=inputs,
        outputs=ctx.outputs.outs,
        arguments=args + import_flags + [s.path for s in srcs],
        executable=ctx.executable.protoc,
        mnemonic="ProtoCompile",
        env=ctx.configuration.default_shell_env,
    )

  return struct(
      proto=struct(
          srcs=srcs,
          import_flags=import_flags,
          deps=deps,
      ),
  )

proto_gen = rule(
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "deps": attr.label_list(providers = ["proto"]),
        "includes": attr.string_list(),
        "protoc": attr.label(
            cfg = "host",
            executable = True,
            single_file = True,
            mandatory = True,
        ),
        "plugin": attr.label(
            cfg = "host",
            allow_files = True,
            executable = True,
        ),
        "plugin_language": attr.string(),
        "plugin_options": attr.string_list(),
        "gen_cc": attr.bool(),
        "gen_py": attr.bool(),
        "outs": attr.output_list(),
    },
    output_to_genfiles = True,
    implementation = _proto_gen_impl,
)
"""Generates codes from Protocol Buffers definitions.

This rule helps you to implement Skylark macros specific to the target
language. You should prefer more specific `cc_proto_library `,
`py_proto_library` and others unless you are adding such wrapper macros.

Args:
  srcs: Protocol Buffers definition files (.proto) to run the protocol compiler
    against.
  deps: a list of dependency labels; must be other proto libraries.
  includes: a list of include paths to .proto files.
  protoc: the label of the protocol compiler to generate the sources.
  plugin: the label of the protocol compiler plugin to be passed to the protocol
    compiler.
  plugin_language: the language of the generated sources
  plugin_options: a list of options to be passed to the plugin
  gen_cc: generates C++ sources in addition to the ones from the plugin.
  gen_py: generates Python sources in addition to the ones from the plugin.
  outs: a list of labels of the expected outputs from the protocol compiler.
"""

def cc_proto_library(
    name,
    srcs=[],
    deps=[],
    cc_libs=[],
    include=None,
    protoc="//:protoc",
    internal_bootstrap_hack=False,
    use_grpc_plugin=False,
    default_runtime="//:protobuf",
    **kargs):
  """Bazel rule to create a C++ protobuf library from proto source files

  NOTE: the rule is only an internal workaround to generate protos. The
  interface may change and the rule may be removed when bazel has introduced
  the native rule.

  Args:
    name: the name of the cc_proto_library.
    srcs: the .proto files of the cc_proto_library.
    deps: a list of dependency labels; must be cc_proto_library.
    cc_libs: a list of other cc_library targets depended by the generated
        cc_library.
    include: a string indicating the include path of the .proto files.
    protoc: the label of the protocol compiler to generate the sources.
    internal_bootstrap_hack: a flag indicate the cc_proto_library is used only
        for bootstraping. When it is set to True, no files will be generated.
        The rule will simply be a provider for .proto files, so that other
        cc_proto_library can depend on it.
    use_grpc_plugin: a flag to indicate whether to call the grpc C++ plugin
        when processing the proto files.
    default_runtime: the implicitly default runtime which will be depended on by
        the generated cc_library target.
    **kargs: other keyword arguments that are passed to cc_library.
  """

  includes = []
  if include != None:
    includes = [include]

  if internal_bootstrap_hack:
    # For pre-checked-in generated files, we add the internal_bootstrap_hack
    # which will skip the codegen action.
    proto_gen(
        name=name + "_genproto",
        srcs=srcs,
        deps=[s + "_genproto" for s in deps],
        includes=includes,
        protoc=protoc,
        visibility=["//visibility:public"],
    )
    # An empty cc_library to make rule dependency consistent.
    native.cc_library(
        name=name,
        **kargs)
    return

  grpc_cpp_plugin = None
  if use_grpc_plugin:
    grpc_cpp_plugin = "//external:grpc_cpp_plugin"

  gen_srcs = _CcSrcs(srcs, use_grpc_plugin)
  gen_hdrs = _CcHdrs(srcs, use_grpc_plugin)
  outs = gen_srcs + gen_hdrs

  proto_gen(
      name=name + "_genproto",
      srcs=srcs,
      deps=[s + "_genproto" for s in deps],
      includes=includes,
      protoc=protoc,
      plugin=grpc_cpp_plugin,
      plugin_language="grpc",
      gen_cc=1,
      outs=outs,
      visibility=["//visibility:public"],
  )

  if default_runtime and not default_runtime in cc_libs:
    cc_libs += [default_runtime]
  if use_grpc_plugin:
    cc_libs += ["//external:grpc_lib"]

  native.cc_library(
      name=name,
      srcs=gen_srcs,
      hdrs=gen_hdrs,
      deps=cc_libs + deps,
      includes=includes,
      **kargs)


def internal_gen_well_known_protos_java(srcs):
  """Bazel rule to generate the gen_well_known_protos_java genrule

  Args:
    srcs: the well known protos
  """
  root = Label("%s//protobuf_java" % (REPOSITORY_NAME)).workspace_root
  if root == "":
    include = " -Isrc "
  else:
    include = " -I%s/src " % root
  native.genrule(
      name = "gen_well_known_protos_java",
      srcs = srcs,
      outs = [
          "wellknown.srcjar",
      ],
      cmd = "$(location :protoc) --java_out=$(@D)/wellknown.jar" +
            " %s $(SRCS) " % include +
            " && mv $(@D)/wellknown.jar $(@D)/wellknown.srcjar",
      tools = [":protoc"],
  )


def internal_copied_filegroup(name, srcs, strip_prefix, dest, **kwargs):
  """Macro to copy files to a different directory and then create a filegroup.

  This is used by the //:protobuf_python py_proto_library target to work around
  an issue caused by Python source files that are part of the same Python
  package being in separate directories.

  Args:
    srcs: The source files to copy and add to the filegroup.
    strip_prefix: Path to the root of the files to copy.
    dest: The directory to copy the source files into.
    **kwargs: extra arguments that will be passed to the filegroup.
  """
  outs = [_RelativeOutputPath(s, strip_prefix, dest) for s in srcs]

  native.genrule(
      name = name + "_genrule",
      srcs = srcs,
      outs = outs,
      cmd = " && ".join(
          ["cp $(location %s) $(location %s)" %
           (s, _RelativeOutputPath(s, strip_prefix, dest)) for s in srcs]),
  )

  native.filegroup(
      name = name,
      srcs = outs,
      **kwargs)


def py_proto_library(
    name,
    srcs=[],
    deps=[],
    py_libs=[],
    py_extra_srcs=[],
    include=None,
    default_runtime="//:protobuf_python",
    protoc="//:protoc",
    **kargs):
  """Bazel rule to create a Python protobuf library from proto source files

  NOTE: the rule is only an internal workaround to generate protos. The
  interface may change and the rule may be removed when bazel has introduced
  the native rule.

  Args:
    name: the name of the py_proto_library.
    srcs: the .proto files of the py_proto_library.
    deps: a list of dependency labels; must be py_proto_library.
    py_libs: a list of other py_library targets depended by the generated
        py_library.
    py_extra_srcs: extra source files that will be added to the output
        py_library. This attribute is used for internal bootstrapping.
    include: a string indicating the include path of the .proto files.
    default_runtime: the implicitly default runtime which will be depended on by
        the generated py_library target.
    protoc: the label of the protocol compiler to generate the sources.
    **kargs: other keyword arguments that are passed to cc_library.
  """
  outs = _PyOuts(srcs)

  includes = []
  if include != None:
    includes = [include]

  proto_gen(
      name=name + "_genproto",
      srcs=srcs,
      deps=[s + "_genproto" for s in deps],
      includes=includes,
      protoc=protoc,
      gen_py=1,
      outs=outs,
      visibility=["//visibility:public"],
  )

  if default_runtime and not default_runtime in py_libs + deps:
    py_libs += [default_runtime]

  native.py_library(
      name=name,
      srcs=outs+py_extra_srcs,
      deps=py_libs+deps,
      imports=includes,
      **kargs)

def internal_protobuf_py_tests(
    name,
    modules=[],
    **kargs):
  """Bazel rules to create batch tests for protobuf internal.

  Args:
    name: the name of the rule.
    modules: a list of modules for tests. The macro will create a py_test for
        each of the parameter with the source "google/protobuf/%s.py"
    kargs: extra parameters that will be passed into the py_test.
  """
  for m in modules:
    s = "python/google/protobuf/internal/%s.py" % m
    native.py_test(
        name="py_%s" % m,
        srcs=[s],
        main=s,
        **kargs)