
@briansp2020
Created July 31, 2019 23:28
This file has been truncated, but you can view the full file.
-- The CXX compiler identification is Clang 7.1.0
-- The C compiler identification is Clang 7.1.0
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Not forcing any particular BLAS to be found
-- Performing Test COMPILER_WORKS
-- Performing Test COMPILER_WORKS - Success
-- Performing Test SUPPORT_GLIBCXX_USE_C99
-- Performing Test SUPPORT_GLIBCXX_USE_C99 - Success
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED
-- Performing Test CAFFE2_EXCEPTION_PTR_SUPPORTED - Success
-- std::exception_ptr is supported.
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE
-- Performing Test CAFFE2_IS_NUMA_AVAILABLE - Success
-- NUMA is available
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING
-- Performing Test CAFFE2_NEED_TO_TURN_OFF_DEPRECATION_WARNING - Success
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX2_EXTENSIONS - Success
-- Current compiler supports avx2 extension. Will build perfkernels.
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS
-- Performing Test CAFFE2_COMPILER_SUPPORTS_AVX512_EXTENSIONS - Success
-- Current compiler supports avx512f extension. Will build fbgemm.
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_SUPPORTS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC
-- Performing Test COMPILER_SUPPORTS_RDYNAMIC - Success
-- Building using own protobuf under third_party per request.
-- Use custom protobuf build.
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
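
The "Looking for pthread_create" sequence above is CMake's standard threads probe: on glibc, `pthread_create` lives in libpthread rather than libc, so the first check fails and CMake falls back to probing candidate libraries until one links. A minimal sketch of the same pattern (CMake's FindThreads module does the equivalent internally; the result variable names here are illustrative):

```cmake
include(CheckSymbolExists)
include(CheckLibraryExists)

# First try plain libc -- this is the "Looking for pthread_create" step
# that reports "not found" above.
check_symbol_exists(pthread_create "pthread.h" HAVE_PTHREAD_CREATE_LIBC)
if(NOT HAVE_PTHREAD_CREATE_LIBC)
  # Fall back to probing libpthread directly, which succeeds here.
  check_library_exists(pthread pthread_create "" HAVE_LIBPTHREAD)
endif()
```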
-- Caffe2 protobuf include directory: $<BUILD_INTERFACE:/root/pytorch/third_party/protobuf/src>$<INSTALL_INTERFACE:include>
-- Trying to find preferred BLAS backend of choice: MKL
-- MKL_THREADING = OMP
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Looking for cblas_sgemm
-- Looking for cblas_sgemm - found
-- MKL libraries: /usr/local/lib/libmkl_intel_lp64.so;/usr/local/lib/libmkl_intel_thread.so;/usr/local/lib/libmkl_core.so;/usr/local/lib/libiomp5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libm.so;/usr/lib/x86_64-linux-gnu/libdl.so
-- MKL include directory: /usr/local/include
-- MKL OpenMP type: Intel
-- MKL OpenMP library: /usr/local/lib/libiomp5.so
-- The ASM compiler identification is Clang
-- Found assembler: /usr/bin/cc
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Brace yourself, we are building NNPACK
-- Performing Test NNPACK_ARCH_IS_X86_32
-- Performing Test NNPACK_ARCH_IS_X86_32 - Failed
-- Found PythonInterp: /usr/bin/python3 (found version "3.6.8")
-- NNPACK backend is x86-64
-- Failed to find LLVM FileCheck
-- Found Git: /usr/bin/git (found version "2.7.4")
-- git Version: v1.4.0-505be96a
-- Version: 1.4.0
-- Performing Test HAVE_CXX_FLAG_STD_CXX11
-- Performing Test HAVE_CXX_FLAG_STD_CXX11 - Success
-- Performing Test HAVE_CXX_FLAG_WALL
-- Performing Test HAVE_CXX_FLAG_WALL - Success
-- Performing Test HAVE_CXX_FLAG_WEXTRA
-- Performing Test HAVE_CXX_FLAG_WEXTRA - Success
-- Performing Test HAVE_CXX_FLAG_WSHADOW
-- Performing Test HAVE_CXX_FLAG_WSHADOW - Success
-- Performing Test HAVE_CXX_FLAG_WERROR
-- Performing Test HAVE_CXX_FLAG_WERROR - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC
-- Performing Test HAVE_CXX_FLAG_PEDANTIC - Success
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS
-- Performing Test HAVE_CXX_FLAG_PEDANTIC_ERRORS - Success
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32
-- Performing Test HAVE_CXX_FLAG_WSHORTEN_64_TO_32 - Success
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL
-- Performing Test HAVE_CXX_FLAG_WFLOAT_EQUAL - Success
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_FSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS
-- Performing Test HAVE_CXX_FLAG_WNO_DEPRECATED_DECLARATIONS - Success
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING
-- Performing Test HAVE_CXX_FLAG_WSTRICT_ALIASING - Success
-- Performing Test HAVE_CXX_FLAG_WD654
-- Performing Test HAVE_CXX_FLAG_WD654 - Failed
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY
-- Performing Test HAVE_CXX_FLAG_WTHREAD_SAFETY - Success
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES
-- Performing Test HAVE_THREAD_SAFETY_ATTRIBUTES -- failed to compile
-- Performing Test HAVE_CXX_FLAG_COVERAGE
-- Performing Test HAVE_CXX_FLAG_COVERAGE - Success
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK
-- Performing Test HAVE_STEADY_CLOCK -- success
-- Performing Test COMPILER_SUPPORTS_AVX512
-- Performing Test COMPILER_SUPPORTS_AVX512 - Success
-- Found OpenMP_C: -fopenmp=libomp (found version "3.1")
-- Found OpenMP_CXX: -fopenmp=libomp (found version "3.1")
-- Found OpenMP: TRUE (found version "3.1")
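
The "Found OpenMP" lines show CMake resolving OpenMP to Clang's `-fopenmp=libomp`. A sketch of the standard consumer pattern behind such output, assuming a hypothetical target name `mytarget`:

```cmake
find_package(OpenMP)
if(OpenMP_CXX_FOUND)
  # The imported target carries both the compile flag (-fopenmp=libomp
  # here) and the runtime library to link against.
  target_link_libraries(mytarget PUBLIC OpenMP::OpenMP_CXX)
endif()
```

Note that PyTorch's build overrides the runtime library choice later in this log, linking against Intel's libiomp5 instead of the compiler-default libomp.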
CMake Warning (dev) at third_party/fbgemm/third_party/asmjit/CMakeLists.txt:34 (set):
implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/fbgemm/third_party/asmjit/CMakeLists.txt:35 (set):
implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/fbgemm/third_party/asmjit/CMakeLists.txt:36 (set):
implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/fbgemm/third_party/asmjit/CMakeLists.txt:37 (set):
implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/fbgemm/third_party/asmjit/CMakeLists.txt:38 (set):
implicitly converting 'BOOLEAN' to 'STRING' type.
This warning is for project developers. Use -Wno-dev to suppress it.
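
The five asmjit warnings above come from cache entries declared with the type name `BOOLEAN`, which is not a valid cache type. CMake accepts only `BOOL`, `FILEPATH`, `PATH`, `STRING`, and `INTERNAL`, so it coerces `BOOLEAN` to `STRING` and emits a dev warning. A sketch of the mistake and its fix (the variable name is illustrative, not taken from asmjit's CMakeLists.txt):

```cmake
set(MY_OPTION FALSE CACHE BOOLEAN "example")  # triggers the dev warning
set(MY_OPTION FALSE CACHE BOOL "example")     # correct type, no warning
```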
-- Performing Test __CxxFlag__msse
-- Performing Test __CxxFlag__msse - Success
-- Performing Test __CxxFlag__msse2
-- Performing Test __CxxFlag__msse2 - Success
-- Performing Test __CxxFlag__msse3
-- Performing Test __CxxFlag__msse3 - Success
-- Performing Test __CxxFlag__mssse3
-- Performing Test __CxxFlag__mssse3 - Success
-- Performing Test __CxxFlag__msse4_1
-- Performing Test __CxxFlag__msse4_1 - Success
-- Performing Test __CxxFlag__msse4_2
-- Performing Test __CxxFlag__msse4_2 - Success
-- Performing Test __CxxFlag__mavx
-- Performing Test __CxxFlag__mavx - Success
-- Performing Test __CxxFlag__mavx2
-- Performing Test __CxxFlag__mavx2 - Success
-- Performing Test __CxxFlag__std_c__17
-- Performing Test __CxxFlag__std_c__17 - Success
-- Performing Test __CxxFlag__std_c__14
-- Performing Test __CxxFlag__std_c__14 - Success
-- Performing Test __CxxFlag__std_c__11
-- Performing Test __CxxFlag__std_c__11 - Success
-- Performing Test __CxxFlag__std_c__0x
-- Performing Test __CxxFlag__std_c__0x - Success
-- Performing Test __CxxFlag__fno_tree_vectorize
-- Performing Test __CxxFlag__fno_tree_vectorize - Success
-- Performing Test __CxxFlag__fvisibility_hidden
-- Performing Test __CxxFlag__fvisibility_hidden - Success
-- Performing Test __CxxFlag__Winconsistent_missing_override
-- Performing Test __CxxFlag__Winconsistent_missing_override - Success
-- Performing Test __CxxFlag__O2
-- Performing Test __CxxFlag__O2 - Success
-- Performing Test __CxxFlag__fno_keep_static_consts
-- Performing Test __CxxFlag__fno_keep_static_consts - Failed
-- Performing Test __CxxFlag__fmerge_all_constants
-- Performing Test __CxxFlag__fmerge_all_constants - Success
-- [asmjit]
BuildMode=Static
BuildTest=Off
ASMJIT_DIR=/root/pytorch/third_party/fbgemm/third_party/asmjit
ASMJIT_DEPS=pthread;rt
ASMJIT_LIBS=asmjit;pthread;rt
ASMJIT_CFLAGS=-DASMJIT_STATIC
ASMJIT_SOURCE_DIR=/root/pytorch/third_party/fbgemm/third_party/asmjit/src
ASMJIT_INCLUDE_DIR=/root/pytorch/third_party/fbgemm/third_party/asmjit/src
ASMJIT_PRIVATE_CFLAGS=
-DASMJIT_STATIC
-std=c++17
-fno-tree-vectorize
-fvisibility=hidden
-Winconsistent-missing-override
-O2 [RELEASE]
-fmerge-all-constants [RELEASE]
-- Found LMDB: /usr/include
-- Found lmdb (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
-- Found Numa: /usr/include
-- Found Numa (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnuma.so)
-- OpenCV found (/usr/share/OpenCV)
-- Using third party subdirectory Eigen.
Python 3.6.8
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.6.8", minimum required is "2.7")
-- Found PythonLibs: /usr/lib/libpython3.6m.so.1.0 (found suitable version "3.6.8", minimum required is "2.7")
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- Could NOT find pybind11 (missing: pybind11_INCLUDE_DIR)
-- Using third_party/pybind11.
-- Could NOT find MPI_C (missing: MPI_C_LIB_NAMES MPI_C_HEADER_DIR MPI_C_WORKS)
-- Could NOT find MPI_CXX (missing: MPI_CXX_LIB_NAMES MPI_CXX_HEADER_DIR MPI_CXX_WORKS)
-- Could NOT find MPI (missing: MPI_C_FOUND MPI_CXX_FOUND)
CMake Warning at cmake/Dependencies.cmake:741 (message):
Not compiling with MPI. Suppress this warning with -DUSE_MPI=OFF
Call Stack (most recent call first):
CMakeLists.txt:295 (include)
-- Adding OpenMP CXX_FLAGS: -fopenmp=libomp
-- Will link against OpenMP libraries: /usr/local/lib/libiomp5.so
-- Found HIP: /opt/rocm/hip (found suitable version "1.5.19255", minimum required is "1.0")
HIP VERSION: 1.5.19255
***** Library versions from dpkg *****
rocm-dev VERSION: 2.6.22
rocm-device-libs VERSION: 0.0.1
rocm-libs VERSION: 2.6.22
hsakmt-roct VERSION: 1.0.9-171-g4be439e
hsakmt-roct-dev VERSION: 1.0.9-171-g4be439e
hsa-ext-rocr-dev VERSION: 1.1.9-87-g1566fdd
hsa-rocr-dev VERSION: 1.1.9-87-g1566fdd
hcc VERSION: 1.3.19242
hip_base VERSION: 1.5.19255
hip_hcc VERSION: 1.5.19255
***** Library versions from cmake find_package *****
hiprand VERSION: 2.6.0
rocblas VERSION: 2.2.11.0
miopen VERSION: 2.0.0-7a8f787
rocfft VERSION: 0.9.4.0
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.1")
hipsparse VERSION: 1.0.8.0
INFO: Compiling with HIP for AMD.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:21 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'BUILD_BENCHMARK'.
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at third_party/gloo/CMakeLists.txt:31 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'USE_NCCL'.
This warning is for project developers. Use -Wno-dev to suppress it.
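
The CMP0077 warnings above mean gloo's `option()` calls are clobbering normal variables (`BUILD_BENCHMARK`, `USE_NCCL`) that the parent project already set. Two standard ways a project or builder can silence them, sketched:

```cmake
# In the subproject, before the offending option() calls:
cmake_policy(SET CMP0077 NEW)

# Or from the configure command line, making NEW the default
# for all subprojects:
#   cmake -DCMAKE_POLICY_DEFAULT_CMP0077=NEW ..
```

Under the NEW behavior, `option()` becomes a no-op when a normal variable of the same name already exists, which is usually what a superbuild like this one wants.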
Successfully preprocessed all matching files.
CMake Warning at cmake/Dependencies.cmake:1010 (message):
Metal is only used in ios builds.
Call Stack (most recent call first):
CMakeLists.txt:295 (include)
--
-- ******** Summary ********
-- CMake version : 3.14.4
-- CMake command : /usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 7.1.0
-- CXX flags : -fvisibility-inlines-hidden -fopenmp=libomp -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;NDEBUG;ONNX_ML=1
-- CMAKE_PREFIX_PATH : /usr/lib/python3/dist-packages
-- CMAKE_INSTALL_PREFIX : /root/pytorch/torch
-- CMAKE_MODULE_PATH : /opt/rocm/hip/cmake;/root/pytorch/cmake/Modules
--
-- ONNX version : 1.5.0
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
--
-- ******** Summary ********
-- CMake version : 3.14.4
-- CMake command : /usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 7.1.0
-- CXX flags : -fvisibility-inlines-hidden -fopenmp=libomp -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;NDEBUG;ONNX_ML=1
-- CMAKE_PREFIX_PATH : /usr/lib/python3/dist-packages
-- CMAKE_INSTALL_PREFIX : /root/pytorch/torch
-- CMAKE_MODULE_PATH : /opt/rocm/hip/cmake;/root/pytorch/cmake/Modules
--
-- ONNX version : 1.4.1
-- ONNX NAMESPACE : onnx_torch
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
-- Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor
-- Removing -DNDEBUG from compile flags
-- MAGMA not found. Compiling without MAGMA support
-- Could not find hardware support for NEON on this machine.
-- No OMAP3 processor on this machine.
-- No OMAP4 processor on this machine.
-- Looking for cpuid.h
-- Looking for cpuid.h - found
-- Performing Test HAVE_GCC_GET_CPUID
-- Performing Test HAVE_GCC_GET_CPUID - Success
-- Performing Test NO_GCC_EBX_FPIC_BUG
-- Performing Test NO_GCC_EBX_FPIC_BUG - Success
-- Performing Test C_HAS_AVX_1
-- Performing Test C_HAS_AVX_1 - Failed
-- Performing Test C_HAS_AVX_2
-- Performing Test C_HAS_AVX_2 - Success
-- Performing Test C_HAS_AVX2_1
-- Performing Test C_HAS_AVX2_1 - Failed
-- Performing Test C_HAS_AVX2_2
-- Performing Test C_HAS_AVX2_2 - Success
-- Performing Test CXX_HAS_AVX_1
-- Performing Test CXX_HAS_AVX_1 - Failed
-- Performing Test CXX_HAS_AVX_2
-- Performing Test CXX_HAS_AVX_2 - Success
-- Performing Test CXX_HAS_AVX2_1
-- Performing Test CXX_HAS_AVX2_1 - Failed
-- Performing Test CXX_HAS_AVX2_2
-- Performing Test CXX_HAS_AVX2_2 - Success
-- AVX compiler support found
-- AVX2 compiler support found
-- Performing Test BLAS_F2C_DOUBLE_WORKS
-- Performing Test BLAS_F2C_DOUBLE_WORKS - Failed
-- Performing Test BLAS_F2C_FLOAT_WORKS
-- Performing Test BLAS_F2C_FLOAT_WORKS - Success
-- Performing Test BLAS_USE_CBLAS_DOT
-- Performing Test BLAS_USE_CBLAS_DOT - Success
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API (mkl).
disabling CUDA because NOT USE_CUDA is set
-- CuDNN not found. Compiling without CuDNN support
-- MKLDNN_THREADING = OMP:COMP
CMake Warning (dev) at third_party/ideep/mkl-dnn/cmake/options.cmake:33 (option):
Policy CMP0077 is not set: option() honors normal variables. Run "cmake
--help-policy CMP0077" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
For compatibility with older versions of CMake, option is clearing the
normal variable 'MKLDNN_ENABLE_CONCURRENT_EXEC'.
Call Stack (most recent call first):
third_party/ideep/mkl-dnn/cmake/utils.cmake:24 (include)
third_party/ideep/mkl-dnn/CMakeLists.txt:74 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- This is a product build
-- Found OpenMP_C: -fopenmp=libomp
-- Found OpenMP_CXX: -fopenmp=libomp
-- Found OpenMP: TRUE
-- OpenMP lib: provided by compiler
-- Could NOT find Doxygen (missing: DOXYGEN_EXECUTABLE)
-- VTune profiling environment is unset
-- Found MKL-DNN: TRUE
-- Looking for clock_gettime in rt
-- Looking for clock_gettime in rt - found
-- Looking for mmap
-- Looking for mmap - found
-- Looking for shm_open
-- Looking for shm_open - found
-- Looking for shm_unlink
-- Looking for shm_unlink - found
-- Looking for malloc_usable_size
-- Looking for malloc_usable_size - found
-- Performing Test C_HAS_THREAD
-- Performing Test C_HAS_THREAD - Success
-- NUMA paths:
-- /usr/include
-- /usr/lib/x86_64-linux-gnu/libnuma.so
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT
-- Performing Test COMPILER_SUPPORTS_NO_AVX256_SPLIT - Failed
HIP VERSION: 1.5.19255
***** Library versions from dpkg *****
rocm-dev VERSION: 2.6.22
rocm-device-libs VERSION: 0.0.1
rocm-libs VERSION: 2.6.22
hsakmt-roct VERSION: 1.0.9-171-g4be439e
hsakmt-roct-dev VERSION: 1.0.9-171-g4be439e
hsa-ext-rocr-dev VERSION: 1.1.9-87-g1566fdd
hsa-rocr-dev VERSION: 1.1.9-87-g1566fdd
hcc VERSION: 1.3.19242
hip_base VERSION: 1.5.19255
hip_hcc VERSION: 1.5.19255
***** Library versions from cmake find_package *****
hiprand VERSION: 2.6.0
rocblas VERSION: 2.2.11.0
miopen VERSION: 2.0.0-7a8f787
rocfft VERSION: 0.9.4.0
hipsparse VERSION: 1.0.8.0
ROCm is enabled.
-- Check size of long double
-- Check size of long double - done
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE
-- Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success
-- Performing Test COMPILER_SUPPORTS_FLOAT128
-- Performing Test COMPILER_SUPPORTS_FLOAT128 - Success
-- Performing Test COMPILER_SUPPORTS_SSE2
-- Performing Test COMPILER_SUPPORTS_SSE2 - Success
-- Performing Test COMPILER_SUPPORTS_SSE4
-- Performing Test COMPILER_SUPPORTS_SSE4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX
-- Performing Test COMPILER_SUPPORTS_AVX - Success
-- Performing Test COMPILER_SUPPORTS_FMA4
-- Performing Test COMPILER_SUPPORTS_FMA4 - Success
-- Performing Test COMPILER_SUPPORTS_AVX2
-- Performing Test COMPILER_SUPPORTS_AVX2 - Success
-- Performing Test COMPILER_SUPPORTS_SVE
-- Performing Test COMPILER_SUPPORTS_SVE - Failed
-- Performing Test COMPILER_SUPPORTS_AVX512F
-- Performing Test COMPILER_SUPPORTS_AVX512F - Success
-- Performing Test COMPILER_SUPPORTS_OPENMP
-- Performing Test COMPILER_SUPPORTS_OPENMP - Success
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES
-- Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Success
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH
-- Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Failed
-- Configuring build for SLEEF-v3.2
Target system: Linux-4.18.0-25-generic
Target processor: x86_64
Host system: Linux-4.18.0-25-generic
Host processor: x86_64
Detected C compiler: Clang @ /usr/bin/cc
-- Using option `-Wall -Wno-unused -Wno-attributes -Wno-unused-result -ffp-contract=off -fno-math-errno -fno-trapping-math` to compile libsleef
-- Building shared libs : OFF
-- MPFR : LIB_MPFR-NOTFOUND
-- GMP : LIBGMP-NOTFOUND
-- RUNNING_ON_TRAVIS : 0
-- COMPILER_SUPPORTS_OPENMP : 1
-- NCCL operators skipped due to no CUDA support
-- Including IDEEP operators
-- Including image processing operators
-- Excluding video processing operators due to no opencv
-- MPI operators skipped due to no MPI support
-- Include Observer library
-- pytorch is compiling with OpenMP.
OpenMP CXX_FLAGS: -fopenmp=libomp.
OpenMP libraries: /usr/local/lib/libiomp5.so.
-- Caffe2 is compiling with OpenMP.
OpenMP CXX_FLAGS: -fopenmp=libomp.
OpenMP libraries: /usr/local/lib/libiomp5.so.
-- Using ATen parallel backend: OMP
-- Using lib/python3/dist-packages as python relative installation path
CMake Warning at CMakeLists.txt:512 (message):
Generated cmake files are only fully tested if one builds with system glog,
gflags, and protobuf. Other settings may generate files that are not well
tested.
--
-- ******** Summary ********
-- General:
-- CMake version : 3.14.4
-- CMake command : /usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler id : Clang
-- C++ compiler version : 7.1.0
-- BLAS : MKL
-- CXX flags : -fvisibility-inlines-hidden -fopenmp=libomp -DUSE_FBGEMM -DUSE_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math
-- Build type : Release
-- Compile definitions : TH_BLAS_MKL;NDEBUG;ONNX_ML=1;ONNX_NAMESPACE=onnx_torch;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /usr/lib/python3/dist-packages
-- CMAKE_INSTALL_PREFIX : /root/pytorch/torch
--
-- TORCH_VERSION : 1.2.0
-- CAFFE2_VERSION : 1.2.0
-- BUILD_CAFFE2_MOBILE : ON
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : True
-- Python version : 3.6.8
-- Python executable : /usr/bin/python3
-- Pythonlibs version : 3.6.8
-- Python library : /usr/lib/libpython3.6m.so.1.0
-- Python includes : /usr/include/python3.6m
-- Python site-packages: lib/python3/dist-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : True
-- INTERN_BUILD_MOBILE :
-- USE_ASAN : OFF
-- USE_CUDA : False
-- USE_ROCM : ON
-- USE_EIGEN_FOR_BLAS :
-- USE_FBGEMM : ON
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : 1
-- LMDB version : 0.9.17
-- USE_METAL : OFF
-- USE_MKL : ON
-- USE_MKLDNN : ON
-- USE_MKLDNN_CBLAS : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : ON
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : 1
-- OpenCV version : 2.4.9.1
-- USE_OPENMP : ON
-- USE_TBB : OFF
-- USE_PROF : OFF
-- USE_QNNPACK : ON
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : True
-- USE_MPI : OFF
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- BUILD_NAMEDTENSOR : OFF
-- Public Dependencies : Threads::Threads;caffe2::mkl;caffe2::mkldnn
-- Private Dependencies : qnnpack;nnpack;cpuinfo;fbgemm;/usr/lib/x86_64-linux-gnu/liblmdb.so;/usr/lib/x86_64-linux-gnu/libnuma.so;opencv_core;opencv_highgui;opencv_imgproc;fp16;gloo;aten_op_header_gen;foxi_loader;rt;dl
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
BUILD_ENVIRONMENT
-- Build files have been written to: /root/pytorch/build
Scanning dependencies of target clog
Scanning dependencies of target pthreadpool
Scanning dependencies of target fbgemm_avx512
[ 0%] Building C object confu-deps/clog/CMakeFiles/clog.dir/src/clog.c.o
[ 0%] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/threadpool-pthreads.c.o
[ 0%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_avx512.dir/src/UtilsAvx512.cc.o
Scanning dependencies of target fbgemm_avx2
Scanning dependencies of target gtest
[ 0%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmFP16UKernelsAvx2.cc.o
Scanning dependencies of target python_copy_files
[ 0%] Building CXX object third_party/googletest/googlemock/gtest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
Scanning dependencies of target benchmark
Scanning dependencies of target libprotobuf-lite
[ 0%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/arena.cc.o
[ 0%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark.cc.o
[ 0%] Generating __init__.py
Scanning dependencies of target asmjit
[ 0%] Generating contrib/__init__.py
[ 0%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/arch.cpp.o
[ 0%] Generating contrib/aten/__init__.py
Scanning dependencies of target fbgemm_generic
[ 0%] Generating contrib/aten/aten_test.py
Scanning dependencies of target gloo
[ 0%] Generating contrib/aten/docs/__init__.py
[ 0%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/algorithm.cc.o
[ 0%] Generating contrib/aten/docs/sample.py
[ 0%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/ExecuteKernel.cc.o
[ 0%] Generating contrib/aten/gen_op.py
[ 0%] Generating contrib/gloo/__init__.py
[ 0%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/FbgemmI8DepthwiseAvx2.cc.o
[ 1%] Generating contrib/gloo/gloo_test.py
[ 1%] Generating contrib/nccl/__init__.py
Scanning dependencies of target libprotobuf
Scanning dependencies of target c10
[ 1%] Generating contrib/nccl/nccl_ops_test.py
[ 1%] Building CXX object c10/CMakeFiles/c10.dir/core/Allocator.cpp.o
[ 1%] Generating contrib/nnpack/__init__.py
[ 1%] Generating contrib/nnpack/nnpack_ops_test.py
[ 1%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/arena.cc.o
[ 1%] Generating contrib/playground/AnyExp.py
[ 1%] Linking C static library ../../lib/libclog.a
[ 1%] Generating contrib/playground/AnyExpOnTerm.py
[ 1%] Generating contrib/playground/ModuleRegister.py
[ 1%] Built target clog
[ 1%] Generating contrib/playground/__init__.py
[ 1%] Generating contrib/playground/checkpoint.py
[ 1%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/benchmark_register.cc.o
[ 1%] Generating contrib/playground/compute_loss.py
[ 1%] Generating contrib/playground/compute_topk_accuracy.py
[ 1%] Generating contrib/playground/meter.py
[ 1%] Generating contrib/playground/module_map.py
[ 1%] Generating contrib/playground/output_generator.py
[ 1%] Generating contrib/playground/resnetdemo/IN1k_resnet.py
[ 1%] Generating contrib/playground/resnetdemo/IN1k_resnet_no_test_model.py
[ 1%] Generating contrib/playground/resnetdemo/__init__.py
[ 1%] Generating contrib/playground/resnetdemo/caffe2_resnet50_default_forward.py
[ 1%] Generating contrib/playground/resnetdemo/caffe2_resnet50_default_param_update.py
[ 1%] Generating contrib/playground/resnetdemo/explicit_resnet_forward.py
[ 1%] Generating contrib/playground/resnetdemo/explicit_resnet_param_update.py
[ 1%] Generating contrib/playground/resnetdemo/gfs_IN1k.py
[ 1%] Generating contrib/playground/resnetdemo/override_no_test_model_no_checkpoint.py
[ 1%] Generating contrib/playground/resnetdemo/rendezvous_filestore.py
[ 1%] Generating contrib/prof/__init__.py
[ 1%] Generating contrib/prof/cuda_profile_ops_test.py
[ 1%] Generating contrib/script/__init__.py
[ 1%] Generating contrib/script/examples/__init__.py
[ 1%] Linking C static library ../../lib/libpthreadpool.a
[ 1%] Generating contrib/tensorboard/__init__.py
[ 1%] Generating contrib/tensorboard/tensorboard.py
[ 1%] Built target pthreadpool
[ 2%] Generating contrib/tensorboard/tensorboard_exporter_test.py
[ 2%] Generating contrib/tensorboard/tensorboard_exporter.py
[ 2%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/ExecuteKernelU8S8.cc.o
[ 2%] Generating contrib/tensorboard/tensorboard_test.py
[ 2%] Generating contrib/warpctc/__init__.py
[ 2%] Generating contrib/warpctc/ctc_ops_test.py
[ 2%] Generating core/__init__.py
[ 2%] Generating core/nomnigraph/__init__.py
[ 2%] Generating core/nomnigraph/op_gen.py
[ 2%] Generating distributed/__init__.py
[ 2%] Generating distributed/file_store_handler_op_test.py
[ 2%] Generating distributed/redis_store_handler_op_test.py
[ 2%] Generating distributed/store_ops_test_util.py
[ 2%] Generating experiments/__init__.py
[ 2%] Generating experiments/python/SparseTransformer.py
[ 2%] Generating experiments/python/__init__.py
[ 2%] Generating experiments/python/convnet_benchmarks.py
[ 2%] Generating experiments/python/device_reduce_sum_bench.py
[ 2%] Generating experiments/python/funhash_op_test.py
[ 2%] Generating experiments/python/net_construct_bench.py
[ 2%] Generating experiments/python/sparse_funhash_op_test.py
[ 2%] Generating experiments/python/sparse_reshape_op_test.py
[ 2%] Generating experiments/python/tt_contraction_op_test.py
[ 2%] Generating experiments/python/tt_pad_op_test.py
[ 2%] Generating perfkernels/__init__.py
[ 2%] Generating perfkernels/hp_emblookup_codegen.py
[ 2%] Generating proto/__init__.py
[ 2%] Generating python/__init__.py
[ 2%] Generating python/_import_c_extension.py
[ 2%] Generating python/allcompare_test.py
[ 2%] Generating python/attention.py
[ 2%] Generating python/benchmark_generator.py
[ 2%] Generating python/binarysize.py
[ 3%] Generating python/brew.py
[ 3%] Generating python/brew_test.py
[ 3%] Generating python/build.py
[ 3%] Generating python/cached_reader.py
[ 3%] Generating python/caffe_translator.py
Scanning dependencies of target mkldnn
[ 3%] Generating python/caffe_translator_test.py
[ 3%] Generating python/checkpoint.py
[ 3%] Generating python/checkpoint_test.py
[ 3%] Generating python/cnn.py
[ 3%] Generating python/compatibility.py
[ 3%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/batch_normalization.cpp.o
[ 3%] Generating python/context.py
[ 3%] Generating python/context_test.py
[ 3%] Generating python/control.py
[ 3%] Generating python/control_ops_grad.py
[ 3%] Generating python/control_ops_grad_test.py
[ 3%] Generating python/control_ops_util.py
[ 3%] Generating python/control_test.py
[ 3%] Generating python/convert.py
[ 3%] Built target fbgemm_avx512
[ 3%] Generating python/convert_test.py
[ 3%] Generating python/convnet_benchmarks.py
[ 3%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution.cpp.o
[ 3%] Generating python/convnet_benchmarks_test.py
[ 3%] Generating python/core.py
[ 3%] Generating python/core_gradients_test.py
[ 3%] Generating python/core_test.py
[ 3%] Generating python/crf.py
[ 3%] Generating python/crf_predict.py
[ 3%] Generating python/crf_viterbi_test.py
[ 3%] Generating python/data_parallel_model.py
[ 3%] Generating python/data_parallel_model_test.py
[ 3%] Generating python/data_workers.py
[ 3%] Generating python/data_workers_test.py
[ 4%] Generating python/dataio.py
[ 4%] Generating python/dataio_test.py
[ 4%] Generating python/dataset.py
[ 4%] Generating python/db_file_reader.py
[ 4%] Generating python/db_test.py
[ 4%] Generating python/device_checker.py
[ 4%] Generating python/docs/__init__.py
[ 4%] Generating python/docs/formatter.py
[ 4%] Generating python/docs/generator.py
[ 4%] Generating python/docs/github.py
[ 4%] Generating python/docs/parser.py
[ 4%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/allgather.cc.o
[ 4%] Generating python/dyndep.py
[ 4%] Generating python/embedding_generation_benchmark.py
[ 4%] Generating python/examples/__init__.py
[ 4%] Generating python/examples/char_rnn.py
[ 4%] Building CXX object c10/CMakeFiles/c10.dir/core/CPUAllocator.cpp.o
[ 4%] Generating python/examples/imagenet_trainer.py
[ 4%] Generating python/examples/lmdb_create_example.py
[ 4%] Generating python/examples/resnet50_trainer.py
[ 4%] Generating python/experiment_util.py
[ 4%] Generating python/extension_loader.py
[ 4%] Generating python/filler_test.py
[ 4%] Generating python/functional.py
[ 4%] Generating python/functional_test.py
[ 4%] Generating python/fused_8bit_rowwise_conversion_ops_test.py
[ 4%] Generating python/gradient_check_test.py
[ 4%] Generating python/gradient_checker.py
[ 4%] Generating python/gru_cell.py
[ 4%] Generating python/helpers/__init__.py
[ 4%] Generating python/helpers/algebra.py
[ 4%] Generating python/helpers/arg_scope.py
[ 4%] Generating python/helpers/array_helpers.py
[ 4%] Generating python/helpers/control_ops.py
[ 5%] Generating python/helpers/conv.py
[ 5%] Generating python/helpers/db_input.py
[ 5%] Generating python/helpers/dropout.py
[ 5%] Generating python/helpers/elementwise_linear.py
[ 5%] Generating python/helpers/fc.py
[ 5%] Generating python/helpers/nonlinearity.py
[ 5%] Generating python/helpers/normalization.py
[ 5%] Generating python/helpers/pooling.py
[ 5%] Generating python/helpers/tools.py
[ 5%] Generating python/helpers/train.py
[ 5%] Generating python/hip_test_util.py
[ 5%] Generating python/hsm_util.py
[ 5%] Generating python/hypothesis_test.py
[ 5%] Generating python/hypothesis_test_util.py
[ 5%] Generating python/ideep/LRN_op_test.py
[ 5%] Generating python/ideep/__init__.py
[ 5%] Generating python/ideep/adam_op_test.py
[ 5%] Generating python/ideep/blobs_queue_db_test.py
[ 5%] Generating python/ideep/channel_shuffle_op_test.py
[ 5%] Generating python/ideep/concat_split_op_test.py
[ 5%] Generating python/ideep/conv_op_test.py
[ 5%] Generating python/ideep/conv_transpose_test.py
[ 5%] Generating python/ideep/convfusion_op_test.py
[ 5%] Generating python/ideep/copy_op_test.py
[ 5%] Generating python/ideep/dropout_op_test.py
[ 5%] Generating python/ideep/elementwise_sum_op_test.py
[ 5%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/convolution_pd.cpp.o
[ 5%] Generating python/ideep/expanddims_squeeze_op_test.py
[ 5%] Generating python/ideep/fc_op_test.py
[ 5%] Generating python/ideep/leaky_relu_op_test.py
[ 5%] Generating python/ideep/moment_sgd_op_test.py
[ 5%] Generating python/ideep/operator_fallback_op_test.py
[ 6%] Generating python/ideep/order_switch_op_test.py
[ 6%] Generating python/ideep/pool_op_test.py
[ 6%] Generating python/ideep/pre_convert_test.py
[ 6%] Generating python/ideep/relu_op_test.py
[ 6%] Generating python/ideep/reshape_op_test.py
[ 6%] Generating python/ideep/shape_op_test.py
[ 6%] Generating python/ideep/sigmoid_op_test.py
[ 6%] Generating python/ideep/softmax_op_test.py
[ 6%] Generating python/ideep/spatial_bn_op_test.py
[ 6%] Generating python/ideep/test_ideep_net.py
[ 6%] Generating python/ideep/transform_ideep_net.py
[ 6%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/allgatherv.cc.o
[ 6%] Generating python/ideep/transpose_op_test.py
[ 6%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/assembler.cpp.o
[ 6%] Generating python/ideep/weightedsum_op_test.py
[ 6%] Generating python/ideep_test_util.py
[ 6%] Generating python/layer_model_helper.py
[ 6%] Generating python/layer_model_instantiator.py
[ 6%] Generating python/layer_parameter_sharing_test.py
[ 6%] Generating python/layer_test_util.py
[ 6%] Generating python/layers/__init__.py
[ 6%] Generating python/layers/adaptive_weight.py
[ 6%] Generating python/layers/add_bias.py
[ 6%] Generating python/layers/arc_cosine_feature_map.py
[ 6%] Generating python/layers/batch_huber_loss.py
[ 6%] Generating python/layers/batch_lr_loss.py
[ 6%] Generating python/layers/batch_mse_loss.py
[ 6%] Generating python/layers/batch_normalization.py
[ 6%] Generating python/layers/batch_sigmoid_cross_entropy_loss.py
[ 6%] Generating python/layers/batch_softmax_loss.py
[ 6%] Generating python/layers/blob_weighted_sum.py
[ 6%] Generating python/layers/bucket_weighted.py
[ 6%] Generating python/layers/build_index.py
[ 6%] Generating python/layers/concat.py
[ 7%] Generating python/layers/constant_weight.py
[ 7%] Generating python/layers/conv.py
[ 7%] Generating python/layers/dropout.py
[ 7%] Generating python/layers/fc.py
[ 7%] Generating python/layers/fc_without_bias.py
[ 7%] Generating python/layers/feature_sparse_to_dense.py
[ 7%] Generating python/layers/functional.py
[ 7%] Generating python/layers/gather_record.py
[ 7%] Generating python/layers/homotopy_weight.py
[ 7%] Generating python/layers/label_smooth.py
[ 7%] Generating python/layers/last_n_window_collector.py
[ 7%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/arenastring.cc.o
[ 7%] Generating python/layers/layer_normalization.py
[ 7%] Generating python/layers/layers.py
[ 7%] Generating python/layers/margin_rank_loss.py
[ 7%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/arenastring.cc.o
[ 7%] Generating python/layers/merge_id_lists.py
[ 7%] Generating python/layers/pairwise_similarity.py
[ 7%] Generating python/layers/position_weighted.py
[ 7%] Generating python/layers/random_fourier_features.py
[ 7%] Generating python/layers/reservoir_sampling.py
[ 7%] Generating python/layers/sampling_train.py
[ 7%] Generating python/layers/sampling_trainable_mixin.py
[ 7%] Generating python/layers/select_record_by_context.py
[ 7%] Generating python/layers/semi_random_features.py
[ 7%] Generating python/layers/sparse_dropout_with_replacement.py
[ 7%] Generating python/layers/sparse_feature_hash.py
[ 7%] Generating python/layers/sparse_lookup.py
[ 7%] Generating python/layers/split.py
[ 7%] Generating python/layers/tags.py
[ 7%] Generating python/layers/uniform_sampling.py
[ 7%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/deconvolution.cpp.o
[ 7%] Generating python/layers_test.py
[ 7%] Generating python/lengths_reducer_fused_8bit_rowwise_ops_test.py
[ 8%] Generating python/lengths_reducer_rowwise_8bit_ops_test.py
[ 8%] Generating python/lstm_benchmark.py
[ 8%] Generating python/memonger.py
[ 8%] Generating python/memonger_test.py
[ 8%] Generating python/mint/__init__.py
[ 8%] Generating python/mint/app.py
[ 8%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/colorprint.cc.o
[ 8%] Generating python/mkl/__init__.py
[ 8%] Generating python/mkl/mkl_LRN_op_test.py
[ 8%] Generating python/mkl/mkl_LRN_speed_test.py
[ 8%] Generating python/mkl/mkl_concat_op_test.py
[ 8%] Generating python/mkl/mkl_conv_op_test.py
[ 8%] Generating python/mkl/mkl_copy_op_test.py
[ 8%] Generating python/mkl/mkl_elementwise_add_op_test.py
[ 8%] Generating python/mkl/mkl_elementwise_sum_op_test.py
[ 8%] Generating python/mkl/mkl_fc_op_test.py
[ 8%] Generating python/mkl/mkl_fc_speed_test.py
[ 8%] Generating python/mkl/mkl_fill_op_test.py
[ 8%] Generating python/mkl/mkl_pool_op_test.py
[ 8%] Generating python/mkl/mkl_pool_speed_test.py
[ 8%] Generating python/mkl/mkl_relu_op_test.py
[ 8%] Generating python/mkl/mkl_sbn_op_test.py
[ 8%] Generating python/mkl/mkl_sbn_speed_test.py
[ 8%] Generating python/mkl/mkl_sigmoid_op_test.py
[ 8%] Generating python/mkl/mkl_speed_test.py
[ 8%] Generating python/mkl/mkl_squeeze_op_test.py
[ 8%] Generating python/mkl/rewrite_graph.py
[ 8%] Generating python/mkl/rewrite_graph_test.py
[ 8%] Generating python/mkl_test_util.py
[ 8%] Generating python/model_device_test.py
[ 8%] Generating python/model_helper.py
[ 8%] Generating python/model_helper_test.py
[ 9%] Generating python/modeling/__init__.py
[ 9%] Generating python/modeling/compute_histogram_for_blobs.py
[ 9%] Generating python/modeling/compute_histogram_for_blobs_test.py
[ 9%] Generating python/modeling/compute_norm_for_blobs.py
[ 9%] Generating python/modeling/compute_norm_for_blobs_test.py
[ 9%] Generating python/modeling/compute_statistics_for_blobs.py
[ 9%] Generating python/modeling/compute_statistics_for_blobs_test.py
[ 9%] Generating python/modeling/get_entry_from_blobs.py
[ 9%] Generating python/modeling/get_entry_from_blobs_test.py
[ 9%] Generating python/modeling/gradient_clipping.py
[ 9%] Generating python/modeling/gradient_clipping_test.py
[ 9%] Generating python/modeling/initializers.py
[ 9%] Generating python/modeling/initializers_test.py
[ 9%] Generating python/modeling/net_modifier.py
[ 9%] Generating python/modeling/parameter_info.py
[ 9%] Generating python/modeling/parameter_sharing.py
[ 9%] Generating python/modeling/parameter_sharing_test.py
[ 9%] Generating python/models/__init__.py
[ 9%] Generating python/models/__sym_init__.py
[ 9%] Generating python/models/download.py
[ 9%] Generating python/models/imagenet_trainer_test_utils.py
[ 9%] Generating python/models/resnet.py
[ 9%] Generating python/models/resnet_test.py
[ 9%] Generating python/models/seq2seq/__init__.py
[ 9%] Generating python/models/seq2seq/beam_search.py
[ 9%] Generating python/models/seq2seq/seq2seq_beam_search_test.py
[ 9%] Generating python/models/seq2seq/seq2seq_model_helper.py
[ 9%] Generating python/models/seq2seq/seq2seq_model_helper_test.py
[ 9%] Generating python/models/seq2seq/seq2seq_util.py
[ 9%] Generating python/models/seq2seq/train.py
[ 9%] Generating python/models/seq2seq/translate.py
[ 9%] Generating python/models/shufflenet.py
[ 10%] Generating python/models/shufflenet_test.py
[ 10%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/allreduce.cc.o
[ 10%] Generating python/modifier_context.py
[ 10%] Generating python/muji.py
[ 10%] Generating python/muji_test.py
[ 10%] Generating python/net_builder.py
[ 10%] Generating python/net_builder_test.py
[ 10%] Generating python/net_drawer.py
[ 10%] Generating python/net_printer.py
[ 10%] Generating python/net_printer_test.py
[ 10%] Generating python/nomnigraph.py
[ 10%] Generating python/nomnigraph_test.py
[ 10%] Generating python/nomnigraph_transformations.py
[ 10%] Generating python/nomnigraph_transformations_test.py
[ 10%] Generating python/normalizer.py
[ 10%] Generating python/normalizer_context.py
[ 10%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/eltwise.cpp.o
[ 10%] Generating python/normalizer_test.py
[ 10%] Generating python/numa_benchmark.py
[ 10%] Generating python/numa_test.py
[ 10%] Generating python/observer_test.py
[ 10%] Generating python/onnx/__init__.py
[ 10%] Generating python/onnx/backend.py
[ 10%] Generating python/onnx/backend_cpp_rep.py
[ 10%] Generating python/onnx/backend_rep.py
[ 10%] Generating python/onnx/bin/__init__.py
[ 10%] Generating python/onnx/bin/conversion.py
[ 10%] Generating python/onnx/error.py
[ 10%] Generating python/onnx/frontend.py
[ 10%] Generating python/onnx/helper.py
[ 10%] Generating python/onnx/onnxifi.py
[ 10%] Generating python/onnx/test_onnxifi.py
[ 10%] Generating python/onnx/tests/__init__.py
[ 11%] Generating python/onnx/tests/c2_ref_test.py
[ 11%] Generating python/onnx/tests/conversion_test.py
[ 11%] Generating python/onnx/tests/helper_test.py
[ 11%] Generating python/onnx/tests/onnx_backend_test.py
[ 11%] Generating python/onnx/tests/ssa_test.py
[ 11%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/extension_set.cc.o
[ 11%] Generating python/onnx/tests/test_utils.py
[ 11%] Generating python/onnx/workspace.py
[ 11%] Generating python/operator_fp_exceptions_test.py
[ 11%] Generating python/operator_test/__init__.py
[ 11%] Generating python/operator_test/activation_ops_test.py
[ 11%] Generating python/operator_test/adadelta_test.py
[ 11%] Generating python/operator_test/adagrad_test.py
[ 11%] Generating python/operator_test/adagrad_test_helper.py
[ 11%] Generating python/operator_test/adam_test.py
[ 11%] Generating python/operator_test/affine_channel_op_test.py
[ 11%] Generating python/operator_test/apmeter_test.py
[ 11%] Generating python/operator_test/arg_ops_test.py
[ 11%] Generating python/operator_test/assert_test.py
[ 11%] Generating python/operator_test/atomic_ops_test.py
[ 11%] Generating python/operator_test/basic_rnn_test.py
[ 11%] Generating python/operator_test/batch_box_cox_test.py
[ 11%] Generating python/operator_test/batch_bucketize_op_test.py
[ 11%] Generating python/operator_test/batch_moments_op_test.py
[ 11%] Generating python/operator_test/batch_sparse_to_dense_op_test.py
[ 11%] Generating python/operator_test/bbox_transform_test.py
[ 11%] Generating python/operator_test/bisect_percentile_op_test.py
[ 11%] Generating python/operator_test/blobs_queue_db_test.py
[ 11%] Generating python/operator_test/boolean_mask_test.py
[ 11%] Generating python/operator_test/boolean_unmask_test.py
[ 11%] Generating python/operator_test/box_with_nms_limit_op_test.py
[ 11%] Generating python/operator_test/bucketize_op_test.py
[ 12%] Generating python/operator_test/cast_op_test.py
[ 12%] Generating python/operator_test/ceil_op_test.py
[ 12%] Generating python/operator_test/channel_backprop_stats_op_test.py
[ 12%] Generating python/operator_test/channel_shuffle_test.py
[ 12%] Generating python/operator_test/channel_stats_op_test.py
[ 12%] Generating python/operator_test/checkpoint_test.py
[ 12%] Generating python/operator_test/clip_op_test.py
[ 12%] Generating python/operator_test/clip_tensor_op_test.py
[ 12%] Generating python/operator_test/collect_and_distribute_fpn_rpn_proposals_op_test.py
[ 12%] Generating python/operator_test/concat_split_op_test.py
[ 12%] Generating python/operator_test/conditional_test.py
[ 12%] Generating python/operator_test/conftest.py
[ 12%] Generating python/operator_test/conv_test.py
[ 12%] Generating python/operator_test/conv_transpose_test.py
[ 12%] Generating python/operator_test/copy_ops_test.py
[ 12%] Generating python/operator_test/copy_rows_to_tensor_op_test.py
[ 12%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/codebuilder.cpp.o
[ 12%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/extension_set.cc.o
[ 12%] Generating python/operator_test/cosine_embedding_criterion_op_test.py
[ 12%] Generating python/operator_test/counter_ops_test.py
[ 12%] Generating python/operator_test/crf_test.py
[ 12%] Generating python/operator_test/cross_entropy_ops_test.py
[ 12%] Generating python/operator_test/ctc_beam_search_decoder_op_test.py
[ 12%] Generating python/operator_test/ctc_greedy_decoder_op_test.py
[ 12%] Generating python/operator_test/cudnn_recurrent_test.py
[ 12%] Generating python/operator_test/data_couple_op_test.py
[ 12%] Generating python/operator_test/dataset_ops_test.py
[ 12%] Generating python/operator_test/deform_conv_test.py
[ 12%] Generating python/operator_test/dense_vector_to_id_list_op_test.py
[ 12%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/engine.cpp.o
[ 12%] Generating python/operator_test/depthwise_3x3_conv_test.py
[ 12%] Generating python/operator_test/detectron_keypoints.py
[ 12%] Generating python/operator_test/distance_op_test.py
[ 12%] Generating python/operator_test/dropout_op_test.py
[ 12%] Generating python/operator_test/duplicate_operands_test.py
[ 13%] Generating python/operator_test/elementwise_linear_op_test.py
[ 13%] Generating python/operator_test/elementwise_logical_ops_test.py
[ 13%] Generating python/operator_test/elementwise_op_broadcast_test.py
[ 13%] Generating python/operator_test/elementwise_ops_test.py
[ 13%] Generating python/operator_test/emptysample_ops_test.py
[ 13%] Generating python/operator_test/enforce_finite_op_test.py
[ 13%] Generating python/operator_test/ensure_clipped_test.py
[ 13%] Generating python/operator_test/ensure_cpu_output_op_test.py
[ 13%] Generating python/operator_test/erf_op_test.py
[ 13%] Generating python/operator_test/expand_op_test.py
[ 13%] Generating python/operator_test/fc_operator_test.py
[ 13%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/OptimizedKernelsAvx2.cc.o
[ 13%] Generating python/operator_test/feature_maps_ops_test.py
[ 13%] Generating python/operator_test/filler_ops_test.py
[ 13%] Generating python/operator_test/find_op_test.py
[ 13%] Generating python/operator_test/flatten_op_test.py
[ 13%] Generating python/operator_test/flexible_top_k_test.py
[ 13%] Generating python/operator_test/floor_op_test.py
[ 13%] Generating python/operator_test/gather_ops_test.py
[ 13%] Generating python/operator_test/gather_ranges_op_test.py
[ 13%] Generating python/operator_test/given_tensor_byte_string_to_uint8_fill_op_test.py
[ 13%] Generating python/operator_test/given_tensor_fill_op_test.py
[ 13%] Generating python/operator_test/glu_op_test.py
[ 13%] Generating python/operator_test/group_conv_test.py
[ 13%] Generating python/operator_test/group_norm_op_test.py
[ 13%] Generating python/operator_test/gru_test.py
[ 13%] Generating python/operator_test/heatmap_max_keypoint_op_test.py
[ 13%] Generating python/operator_test/hsm_test.py
[ 13%] Generating python/operator_test/hyperbolic_ops_test.py
[ 13%] Generating python/operator_test/im2col_col2im_test.py
[ 13%] Generating python/operator_test/image_input_op_test.py
[ 13%] Generating python/operator_test/index_hash_ops_test.py
[ 14%] Generating python/operator_test/index_ops_test.py
[ 14%] Generating python/operator_test/instance_norm_test.py
[ 14%] Generating python/operator_test/integral_image_ops_test.py
[ 14%] Generating python/operator_test/jsd_ops_test.py
[ 14%] Generating python/operator_test/key_split_ops_test.py
[ 14%] Generating python/operator_test/lars_test.py
[ 14%] Generating python/operator_test/layer_norm_op_test.py
[ 14%] Generating python/operator_test/leaky_relu_test.py
[ 14%] Generating python/operator_test/learning_rate_adaption_op_test.py
[ 14%] Generating python/operator_test/learning_rate_op_test.py
[ 14%] Generating python/operator_test/length_split_op_test.py
[ 14%] Generating python/operator_test/lengths_pad_op_test.py
[ 14%] Generating python/operator_test/lengths_tile_op_test.py
[ 14%] Generating python/operator_test/lengths_top_k_ops_test.py
[ 14%] Generating python/operator_test/listwise_l2r_operator_test.py
[ 14%] Generating python/operator_test/load_save_test.py
[ 14%] Generating python/operator_test/locally_connected_op_test.py
[ 14%] Generating python/operator_test/loss_ops_test.py
[ 14%] Generating python/operator_test/lpnorm_op_test.py
[ 14%] Generating python/operator_test/map_ops_test.py
[ 14%] Generating python/operator_test/margin_ranking_criterion_op_test.py
[ 14%] Generating python/operator_test/math_ops_test.py
[ 14%] Generating python/operator_test/matmul_op_test.py
[ 14%] Generating python/operator_test/mean_op_test.py
[ 14%] Generating python/operator_test/merge_id_lists_op_test.py
[ 14%] Generating python/operator_test/mkl_conv_op_test.py
[ 14%] Generating python/operator_test/mkl_packed_fc_op_test.py
[ 14%] Generating python/operator_test/mkl_speed_test.py
[ 14%] Generating python/operator_test/mod_op_test.py
[ 14%] Generating python/operator_test/moments_op_test.py
[ 14%] Generating python/operator_test/momentum_sgd_test.py
[ 14%] Generating python/operator_test/mpi_test.py
[ 15%] Generating python/operator_test/negate_gradient_op_test.py
[ 15%] Generating python/operator_test/ngram_ops_test.py
[ 15%] Generating python/operator_test/normalize_op_test.py
[ 15%] Generating python/operator_test/numpy_tile_op_test.py
Scanning dependencies of target foxi_loader
[ 15%] Generating python/operator_test/one_hot_ops_test.py
[ 15%] Building C object third_party/foxi/CMakeFiles/foxi_loader.dir/foxi/onnxifi_loader.c.o
[ 15%] Generating python/operator_test/onnx_while_test.py
[ 15%] Generating python/operator_test/order_switch_test.py
[ 15%] Generating python/operator_test/pack_ops_test.py
[ 15%] Generating python/operator_test/pack_rnn_sequence_op_test.py
[ 15%] Generating python/operator_test/pad_test.py
[ 15%] Generating python/operator_test/partition_ops_test.py
[ 15%] Generating python/operator_test/percentile_op_test.py
[ 15%] Generating python/operator_test/piecewise_linear_transform_test.py
[ 15%] Generating python/operator_test/pooling_test.py
[ 15%] Generating python/operator_test/prepend_dim_test.py
[ 15%] Generating python/operator_test/python_op_test.py
[ 15%] Linking C static library ../../lib/libfoxi_loader.a
[ 15%] Generating python/operator_test/rand_quantization_op_speed_test.py
[ 15%] Generating python/operator_test/rand_quantization_op_test.py
[ 15%] Built target foxi_loader
[ 15%] Generating python/operator_test/rank_loss_operator_test.py
[ 15%] Generating python/operator_test/rebatching_queue_test.py
[ 15%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/inner_product.cpp.o
[ 15%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/lrn.cpp.o
[ 15%] Generating python/operator_test/record_queue_test.py
[ 15%] Generating python/operator_test/recurrent_net_executor_test.py
[ 15%] Generating python/operator_test/recurrent_network_test.py
[ 15%] Generating python/operator_test/reduce_ops_test.py
[ 15%] Generating python/operator_test/reduction_ops_test.py
[ 15%] Generating python/operator_test/reshape_ops_test.py
[ 15%] Generating python/operator_test/resize_op_test.py
[ 15%] Generating python/operator_test/rmac_regions_op_test.py
[ 15%] Generating python/operator_test/rnn_cell_test.py
[ 15%] Generating python/operator_test/roi_align_rotated_op_test.py
[ 15%] Generating python/operator_test/scale_op_test.py
[ 16%] Generating python/operator_test/segment_ops_test.py
[ 16%] Generating python/operator_test/selu_op_test.py
[ 16%] Generating python/operator_test/sequence_ops_test.py
[ 16%] Generating python/operator_test/shape_inference_test.py
[ 16%] Building CXX object c10/CMakeFiles/c10.dir/core/CopyBytes.cpp.o
[ 16%] Generating python/operator_test/sinusoid_position_encoding_op_test.py
[ 16%] Generating python/operator_test/softmax_ops_test.py
[ 16%] Generating python/operator_test/softplus_op_test.py
[ 16%] Generating python/operator_test/sparse_dropout_with_replacement_op_test.py
[ 16%] Generating python/operator_test/sparse_gradient_checker_test.py
[ 16%] Generating python/operator_test/sparse_lengths_sum_benchmark.py
[ 16%] Generating python/operator_test/sparse_normalize_test.py
[ 16%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_message_table_driven_lite.cc.o
[ 16%] Generating python/operator_test/sparse_ops_test.py
[ 16%] Generating python/operator_test/sparse_to_dense_mask_op_test.py
[ 16%] Generating python/operator_test/spatial_bn_op_test.py
[ 16%] Generating python/operator_test/specialized_segment_ops_test.py
[ 16%] Generating python/operator_test/square_root_divide_op_test.py
[ 16%] Generating python/operator_test/stats_ops_test.py
[ 16%] Generating python/operator_test/stats_put_ops_test.py
[ 16%] Generating python/operator_test/string_ops_test.py
[ 16%] Generating python/operator_test/text_file_reader_test.py
[ 16%] Generating python/operator_test/thresholded_relu_op_test.py
[ 16%] Generating python/operator_test/tile_op_test.py
[ 16%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/allreduce_local.cc.o
[ 16%] Generating python/operator_test/top_k_test.py
[ 16%] Generating python/operator_test/torch_integration_test.py
[ 16%] Generating python/operator_test/transpose_op_test.py
[ 16%] Generating python/operator_test/trigonometric_op_test.py
[ 16%] Generating python/operator_test/unique_ops_test.py
[ 16%] Generating python/operator_test/unique_uniform_fill_op_test.py
[ 16%] Generating python/operator_test/upsample_op_test.py
[ 16%] Generating python/operator_test/utility_ops_test.py
[ 16%] Generating python/operator_test/video_input_op_test.py
[ 17%] Generating python/operator_test/weighted_multi_sample_test.py
[ 17%] Generating python/operator_test/weighted_sample_test.py
[ 17%] Generating python/operator_test/weighted_sum_test.py
[ 17%] Generating python/operator_test/wngrad_test.py
[ 17%] Generating python/optimizer.py
[ 17%] Generating python/optimizer_context.py
[ 17%] Generating python/optimizer_test.py
[ 17%] Generating python/optimizer_test_util.py
[ 17%] Generating python/parallel_workers.py
[ 17%] Generating python/parallel_workers_test.py
[ 17%] Generating python/parallelize_bmuf_distributed_test.py
[ 17%] Generating python/pipeline.py
[ 17%] Generating python/pipeline_test.py
[ 17%] Generating python/predictor/__init__.py
[ 17%] Generating python/predictor/mobile_exporter.py
[ 17%] Generating python/predictor/mobile_exporter_test.py
[ 17%] Generating python/predictor/predictor_exporter.py
[ 17%] Generating python/predictor/predictor_exporter_test.py
[ 17%] Generating python/predictor/predictor_py_utils.py
[ 17%] Generating python/predictor/predictor_test.py
[ 17%] Generating python/predictor/serde.py
[ 17%] Generating python/predictor_constants.py
[ 17%] Generating python/python_op_test.py
[ 17%] Generating python/queue_util.py
[ 17%] Generating python/record_queue.py
[ 17%] Generating python/recurrent.py
[ 17%] Generating python/regularizer.py
[ 17%] Generating python/regularizer_context.py
[ 17%] Generating python/regularizer_test.py
[ 17%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/memory.cpp.o
[ 17%] Generating python/rnn/__init__.py
[ 17%] Generating python/rnn/lstm_comparison.py
[ 17%] Generating python/rnn/rnn_cell_test_util.py
[ 18%] Generating python/rnn_cell.py
[ 18%] Generating python/schema.py
[ 18%] Generating python/schema_test.py
[ 18%] Building CXX object c10/CMakeFiles/c10.dir/core/DefaultDtype.cpp.o
[ 18%] Generating python/scope.py
[ 18%] Generating python/scope_test.py
[ 18%] Generating python/serialized_test/__init__.py
[ 18%] Generating python/serialized_test/coverage.py
[ 18%] Generating python/serialized_test/serialized_test_util.py
[ 18%] Generating python/session.py
[ 18%] Generating python/session_test.py
[ 18%] Generating python/sparse_to_dense_mask_test.py
[ 18%] Generating python/sparse_to_dense_test.py
[ 18%] Generating python/task.py
[ 18%] Generating python/task_test.py
[ 18%] Generating python/test/__init__.py
[ 18%] Generating python/test/blob_deallocation_test.py
[ 18%] Generating python/test/do_op_test.py
[ 18%] Generating python/test/executor_test.py
[ 18%] Generating python/test/executor_test_util.py
[ 18%] Generating python/test/inference_lstm_op_test.py
[ 18%] Generating python/test/python_protobuf_test.py
[ 18%] Generating python/test_util.py
[ 18%] Generating python/text_file_reader.py
[ 18%] Generating python/timeout_guard.py
[ 18%] Generating python/toy_regression_test.py
[ 18%] Generating python/transformations.py
[ 18%] Generating python/transformations_test.py
[ 18%] Generating python/trt/__init__.py
[ 18%] Generating python/trt/test_trt.py
[ 18%] Generating python/trt/transform.py
[ 18%] Generating python/tt_core.py
[ 19%] Generating python/tt_core_test.py
[ 19%] Generating python/utils.py
[ 19%] Generating python/utils_test.py
[ 19%] Generating python/visualize.py
[ 19%] Generating python/workspace.py
[ 19%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/codecompiler.cpp.o
[ 19%] Generating python/workspace_test.py
[ 19%] Generating quantization/__init__.py
[ 19%] Generating quantization/server/__init__.py
[ 19%] Generating quantization/server/batch_matmul_dnnlowp_op_test.py
[ 19%] Generating quantization/server/batch_permutation_dnnlowp_op_test.py
[ 19%] Generating quantization/server/channel_shuffle_dnnlowp_op_test.py
[ 19%] Generating quantization/server/concat_dnnlowp_op_test.py
[ 19%] Generating quantization/server/conv_depthwise_dnnlowp_op_test.py
[ 19%] Generating quantization/server/conv_dnnlowp_acc16_op_test.py
[ 19%] Generating quantization/server/conv_dnnlowp_op_test.py
[ 19%] Generating quantization/server/conv_groupwise_dnnlowp_acc16_op_test.py
[ 19%] Generating quantization/server/conv_groupwise_dnnlowp_op_test.py
[ 19%] Generating quantization/server/dequantize_dnnlowp_op_test.py
[ 19%] Generating quantization/server/dnnlowp_test_utils.py
[ 19%] Generating quantization/server/elementwise_add_dnnlowp_op_test.py
[ 19%] Generating quantization/server/elementwise_linear_dnnlowp_op_test.py
[ 19%] Generating quantization/server/elementwise_mul_dnnlowp_op_test.py
[ 19%] Generating quantization/server/elementwise_sum_dnnlowp_op_test.py
[ 19%] Generating quantization/server/fully_connected_dnnlowp_acc16_op_test.py
[ 19%] Generating quantization/server/fully_connected_dnnlowp_op_test.py
[ 19%] Generating quantization/server/fully_connected_fp16_test.py
[ 19%] Generating quantization/server/fully_connected_rowwise_dnnlowp_op_test.py
[ 19%] Generating quantization/server/gather_dnnlowp_op_test.py
[ 19%] Generating quantization/server/group_norm_dnnlowp_op_test.py
[ 19%] Generating quantization/server/lstm_unit_dnnlowp_op_test.py
[ 19%] Generating quantization/server/observer_test.py
[ 20%] Generating quantization/server/pool_dnnlowp_op_test.py
[ 20%] Generating quantization/server/quantize_dnnlowp_op_test.py
[ 20%] Generating quantization/server/relu_dnnlowp_op_test.py
[ 20%] Generating quantization/server/resize_nearest_dnnlowp_op_test.py
[ 20%] Generating quantization/server/sigmoid_dnnlowp_op_test.py
[ 20%] Generating quantization/server/spatial_batch_norm_dnnlowp_op_test.py
[ 20%] Generating quantization/server/tanh_dnnlowp_op_test.py
[ 20%] Generating quantization/server/utils.py
[ 20%] Built target python_copy_files
[ 20%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/codeemitter.cpp.o
[ 20%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/codeholder.cpp.o
[ 20%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/constpool.cpp.o
[ 20%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/barrier.cc.o
[ 20%] Building CXX object c10/CMakeFiles/c10.dir/core/Device.cpp.o
[ 20%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/memory_desc_wrapper.cpp.o
[ 20%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/broadcast.cc.o
[ 20%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/context.cc.o
[ 20%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/gather.cc.o
[ 20%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/reduce.cc.o
[ 20%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/cpuinfo.cpp.o
[ 20%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/mkldnn_debug.cpp.o
[ 20%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/pooling.cpp.o
[ 21%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/primitive.cpp.o
[ 21%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/primitive_attr.cpp.o
[ 21%] Building CXX object c10/CMakeFiles/c10.dir/core/DeviceType.cpp.o
[ 21%] Building CXX object c10/CMakeFiles/c10.dir/core/Scalar.cpp.o
[ 21%] Building CXX object c10/CMakeFiles/c10.dir/core/Storage.cpp.o
[ 21%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_message_util.cc.o
[ 21%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/implicit_weak_message.cc.o
[ 21%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/coded_stream.cc.o
[ 21%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/primitive_desc.cpp.o
[ 21%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/primitive_iterator.cpp.o
[ 21%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/generated_message_table_driven_lite.cc.o
[ 21%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/func.cpp.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/scatter.cc.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/types.cc.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/query.cpp.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/reorder.cpp.o
[ 22%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/commandlineflags.cc.o
[ 22%] Building CXX object c10/CMakeFiles/c10.dir/core/StorageImpl.cpp.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/common/linux.cc.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/common/logging.cc.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/rendezvous/context.cc.o
[ 22%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Fbgemm.cc.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/rnn.cpp.o
[ 22%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/complexity.cc.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/scratchpad.cpp.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/shuffle.cpp.o
[ 22%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmFP16.cc.o
[ 22%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/globals.cpp.o
[ 22%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/inst.cpp.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/softmax.cpp.o
[ 22%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/stream.cpp.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/rendezvous/file_store.cc.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/rendezvous/hash_store.cc.o
[ 22%] Building CXX object c10/CMakeFiles/c10.dir/core/Stream.cpp.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/rendezvous/prefix_store.cc.o
[ 22%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/rendezvous/store.cc.o
[ 22%] Building CXX object c10/CMakeFiles/c10.dir/core/TensorImpl.cpp.o
[ 22%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/generated_message_util.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/implicit_weak_message.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/logging.cpp.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/zero_copy_stream.cc.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/console_reporter.cc.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/counter.cc.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/utils.cpp.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/address.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/zero_copy_stream_impl_lite.cc.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/common/verbose.cpp.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_barrier.cpp.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/buffer.cc.o
[ 23%] Linking CXX static library ../../../../lib/libgtest.a
[ 23%] Built target gtest
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/context.cc.o
Scanning dependencies of target ATEN_CPU_FILES_GEN_TARGET
[ 23%] Generating ../aten/src/ATen/CPUType.cpp, ../aten/src/ATen/CPUType.h, ../aten/src/ATen/Declarations.yaml, ../aten/src/ATen/Functions.h, ../aten/src/ATen/LegacyTHFunctionsCPU.cpp, ../aten/src/ATen/LegacyTHFunctionsCPU.h, ../aten/src/ATen/MkldnnCPUType.cpp, ../aten/src/ATen/MkldnnCPUType.h, ../aten/src/ATen/NativeFunctions.h, ../aten/src/ATen/QuantizedCPUType.cpp, ../aten/src/ATen/QuantizedCPUType.h, ../aten/src/ATen/RegistrationDeclarations.h, ../aten/src/ATen/SparseCPUType.cpp, ../aten/src/ATen/SparseCPUType.h, ../aten/src/ATen/TypeDefault.cpp, ../aten/src/ATen/TypeDefault.h, ../aten/src/ATen/CUDAType.cpp, ../aten/src/ATen/CUDAType.h, ../aten/src/ATen/LegacyTHFunctionsCUDA.cpp, ../aten/src/ATen/LegacyTHFunctionsCUDA.h, ../aten/src/ATen/SparseCUDAType.cpp, ../aten/src/ATen/SparseCUDAType.h
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/device.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/operand.cpp.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/message_lite.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/repeated_field.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/bytestream.cc.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/pair.cc.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/csv_reporter.cc.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/json_reporter.cc.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/unbound_buffer.cc.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_batch_normalization_utils.cpp.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/address.cc.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/buffer.cc.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/context.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/osutils.cpp.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/device.cc.o
[ 23%] Building CXX object c10/CMakeFiles/c10.dir/core/TensorOptions.cpp.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/reporter.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/common.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/int128.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/coded_stream.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/zero_copy_stream.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/regalloc.cpp.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/runtime.cpp.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_concat.cpp.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/sleep.cc.o
[ 23%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/pair.cc.o
[ 23%] Building CXX object c10/CMakeFiles/c10.dir/core/TensorTypeId.cpp.o
[ 23%] Building CXX object c10/CMakeFiles/c10.dir/core/TensorTypeIdRegistration.cpp.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/io_win32.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/status.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/statusor.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/stringpiece.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/stringprintf.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/string.cpp.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/io/zero_copy_stream_impl_lite.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/utils.cpp.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/statistics.cc.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/string_util.cc.o
[ 23%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/sysinfo.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/message_lite.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/vmem.cpp.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/base/zone.cpp.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_engine.cpp.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/structurally_valid.cc.o
[ 23%] Building CXX object c10/CMakeFiles/c10.dir/core/UndefinedTensorImpl.cpp.o
[ 23%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_memory.cpp.o
[ 23%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmConv.cc.o
[ 23%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/FbgemmI8Spmdm.cc.o
[ 23%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC16.cc.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86assembler.cpp.o
[ 23%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86builder.cpp.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/repeated_field.cc.o
[ 23%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/strutil.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/time.cc.o
[ 24%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark.dir/timers.cc.o
[ 24%] Building CXX object third_party/gloo/gloo/CMakeFiles/gloo.dir/transport/tcp/unbound_buffer.cc.o
[ 24%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC16Avx512.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/bytestream.cc.o
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/core/impl/DeviceGuardImplInterface.cpp.o
[ 24%] Linking CXX static library ../../../lib/libgloo.a
[ 24%] Built target gloo
Scanning dependencies of target ATEN_CUDA_FILES_GEN_TARGET
[ 24%] Generating ../aten/src/ATen/CPUType.cpp, ../aten/src/ATen/CPUType.h, ../aten/src/ATen/Declarations.yaml, ../aten/src/ATen/Functions.h, ../aten/src/ATen/LegacyTHFunctionsCPU.cpp, ../aten/src/ATen/LegacyTHFunctionsCPU.h, ../aten/src/ATen/MkldnnCPUType.cpp, ../aten/src/ATen/MkldnnCPUType.h, ../aten/src/ATen/NativeFunctions.h, ../aten/src/ATen/QuantizedCPUType.cpp, ../aten/src/ATen/QuantizedCPUType.h, ../aten/src/ATen/RegistrationDeclarations.h, ../aten/src/ATen/SparseCPUType.cpp, ../aten/src/ATen/SparseCPUType.h, ../aten/src/ATen/TypeDefault.cpp, ../aten/src/ATen/TypeDefault.h, ../aten/src/ATen/CUDAType.cpp, ../aten/src/ATen/CUDAType.h, ../aten/src/ATen/LegacyTHFunctionsCUDA.cpp, ../aten/src/ATen/LegacyTHFunctionsCUDA.h, ../aten/src/ATen/SparseCUDAType.cpp, ../aten/src/ATen/SparseCUDAType.h
Scanning dependencies of target common
[ 24%] Building C object sleef/src/common/CMakeFiles/common.dir/common.c.o
Scanning dependencies of target mkrename
[ 24%] Building C object sleef/src/libm/CMakeFiles/mkrename.dir/mkrename.c.o
[ 24%] Linking CXX static library ../../../lib/libbenchmark.a
[ 24%] Built target benchmark
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86compiler.cpp.o
[ 24%] Built target common
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/core/thread_pool.cpp.o
[ 24%] Linking C executable ../../bin/mkrename
[ 24%] Built target mkrename
Scanning dependencies of target mkdisp
[ 24%] Building C object sleef/src/libm/CMakeFiles/mkdisp.dir/mkdisp.c.o
[ 24%] Built target ATEN_CPU_FILES_GEN_TARGET
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86internal.cpp.o
[ 24%] Linking C executable ../../bin/mkdisp
[ 24%] Built target mkdisp
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/common.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/int128.cc.o
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86inst.cpp.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wire_format_lite.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/any.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/any.pb.cc.o
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC16Avx512.cc:170:7: warning: instantiation of variable 'fbgemm::CodeGenBase<unsigned char, signed char, int, short>::codeCache_' required here, but no definition is available [-Wundefined-var-template]
if (codeCache_.find(kernelSig) != codeCache_.end()) {
^
/root/pytorch/third_party/fbgemm/src/GenerateKernel.h:196:7: note: forward declaration of template entity is here
codeCache_; ///< JIT Code Cache for reuse.
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC16Avx512.cc:170:7: note: add an explicit instantiation declaration to suppress this warning if 'fbgemm::CodeGenBase<unsigned char, signed char, int, short>::codeCache_' is explicitly instantiated in another translation unit
if (codeCache_.find(kernelSig) != codeCache_.end()) {
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC16Avx512.cc:174:3: warning: instantiation of variable 'fbgemm::CodeGenBase<unsigned char, signed char, int, short>::code_' required here, but no definition is available [-Wundefined-var-template]
code_.reset(false);
^
/root/pytorch/third_party/fbgemm/src/GenerateKernel.h:191:42: note: forward declaration of template entity is here
static thread_local asmjit::CodeHolder code_; ///< JIT Code Holder for asmjit.
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC16Avx512.cc:174:3: note: add an explicit instantiation declaration to suppress this warning if 'fbgemm::CodeGenBase<unsigned char, signed char, int, short>::code_' is explicitly instantiated in another translation unit
code_.reset(false);
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC16Avx512.cc:175:14: warning: instantiation of variable 'fbgemm::CodeGenBase<unsigned char, signed char, int, short>::rt_' required here, but no definition is available [-Wundefined-var-template]
code_.init(rt_.getCodeInfo());
^
/root/pytorch/third_party/fbgemm/src/GenerateKernel.h:190:42: note: forward declaration of template entity is here
static thread_local asmjit::JitRuntime rt_; ///< JIT Runtime for asmjit.
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC16Avx512.cc:175:14: note: add an explicit instantiation declaration to suppress this warning if 'fbgemm::CodeGenBase<unsigned char, signed char, int, short>::rt_' is explicitly instantiated in another translation unit
code_.init(rt_.getCodeInfo());
^
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/api.pb.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/io_win32.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/status.cc.o
3 warnings generated.
[ 24%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC32.cc.o
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/util/Array.cpp.o
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/util/Backtrace.cpp.o
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86instimpl.cpp.o
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86logging.cpp.o
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86operand.cpp.o
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/util/C++17.cpp.o
[ 24%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/QuantUtilsAvx2.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/statusor.cc.o
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/util/Exception.cpp.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/compiler/importer.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/compiler/parser.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/descriptor.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/descriptor.pb.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/descriptor_database.cc.o
[ 24%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/stringpiece.cc.o
[ 24%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GenerateKernelU8S8S32ACC32Avx512.cc.o
[ 24%] Building CXX object c10/CMakeFiles/c10.dir/util/Half.cpp.o
[ 24%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86operand_regs.cpp.o
[ 25%] Building CXX object c10/CMakeFiles/c10.dir/util/LeftRight.cpp.o
[ 26%] Building CXX object third_party/fbgemm/asmjit/CMakeFiles/asmjit.dir/src/asmjit/x86/x86regalloc.cpp.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/stringprintf.cc.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/Logging.cpp.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/duration.pb.cc.o
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC32Avx512.cc:167:7: warning: instantiation of variable 'fbgemm::CodeGenBase<unsigned char, signed char, int, int>::codeCache_' required here, but no definition is available [-Wundefined-var-template]
if (codeCache_.find(kernelSig) != codeCache_.end()) {
^
/root/pytorch/third_party/fbgemm/src/GenerateKernel.h:196:7: note: forward declaration of template entity is here
codeCache_; ///< JIT Code Cache for reuse.
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC32Avx512.cc:167:7: note: add an explicit instantiation declaration to suppress this warning if 'fbgemm::CodeGenBase<unsigned char, signed char, int, int>::codeCache_' is explicitly instantiated in another translation unit
if (codeCache_.find(kernelSig) != codeCache_.end()) {
^
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/structurally_valid.cc.o
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC32Avx512.cc:170:3: warning: instantiation of variable 'fbgemm::CodeGenBase<unsigned char, signed char, int, int>::code_' required here, but no definition is available [-Wundefined-var-template]
code_.reset(false);
^
/root/pytorch/third_party/fbgemm/src/GenerateKernel.h:191:42: note: forward declaration of template entity is here
static thread_local asmjit::CodeHolder code_; ///< JIT Code Holder for asmjit.
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC32Avx512.cc:170:3: note: add an explicit instantiation declaration to suppress this warning if 'fbgemm::CodeGenBase<unsigned char, signed char, int, int>::code_' is explicitly instantiated in another translation unit
code_.reset(false);
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC32Avx512.cc:171:14: warning: instantiation of variable 'fbgemm::CodeGenBase<unsigned char, signed char, int, int>::rt_' required here, but no definition is available [-Wundefined-var-template]
code_.init(rt_.getCodeInfo());
^
/root/pytorch/third_party/fbgemm/src/GenerateKernel.h:190:42: note: forward declaration of template entity is here
static thread_local asmjit::JitRuntime rt_; ///< JIT Runtime for asmjit.
^
/root/pytorch/third_party/fbgemm/src/GenerateKernelU8S8S32ACC32Avx512.cc:171:14: note: add an explicit instantiation declaration to suppress this warning if 'fbgemm::CodeGenBase<unsigned char, signed char, int, int>::rt_' is explicitly instantiated in another translation unit
code_.init(rt_.getCodeInfo());
^
3 warnings generated.
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/GroupwiseConvAcc32Avx2.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAMatrix.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/dynamic_message.cc.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/Metaprogramming.cpp.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/strutil.cc.o
[ 26%] Built target ATEN_CUDA_FILES_GEN_TARGET
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/stubs/time.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf-lite.dir/__/src/google/protobuf/wire_format_lite.cc.o
Scanning dependencies of target renamedsp256.h_generated
[ 26%] Generating renamedsp256.h
[ 26%] Built target renamedsp256.h_generated
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_avx2.dir/src/UtilsAvx2.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAWithIm2Col.cc.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/Optional.cpp.o
[ 26%] Linking CXX static library ../../../lib/libasmjit.a
[ 26%] Built target asmjit
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/SmallVector.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_primitive.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_reducer.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_reorder.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/cpu_sum.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/f32/gemm_utils_f32.cpp.o
[ 26%] Linking CXX static library ../../../lib/libprotobuf-lite.a
[ 26%] Built target libprotobuf-lite
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/f32/jit_avx512_common_gemm_f32.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/StringUtil.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/Type.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/TypeList.cpp.o
Scanning dependencies of target renameSSE4.h_generated
[ 26%] Generating include/renamesse4.h
Generating renamesse4.h: mkrename 2 4 sse4
[ 26%] Built target renameSSE4.h_generated
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/f32/jit_avx_gemm_f32.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/f32/ref_gemm_f32.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/TypeTraits.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/UniqueVoidPtr.cpp.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackBMatrix.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackMatrix.cc.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/flags_use_gflags.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/flags_use_no_gflags.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/intrusive_ptr.cpp.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAWithQuantRowOffset.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/empty.pb.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/extension_set_heavy.cc.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/numa.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/gemm.cpp.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackAWithRowOffset.cc.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/thread_name.cpp.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/field_mask.pb.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackWeightMatrixForGConv.cc.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_gemm_s8s8s32.cpp.o
[ 26%] Building CXX object c10/CMakeFiles/c10.dir/util/typeid.cpp.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/PackWeightsForConv.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/QuantUtils.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/RefImplementations.cc.o
[ 26%] Building CXX object third_party/fbgemm/CMakeFiles/fbgemm_generic.dir/src/Utils.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_message_reflection.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/generated_message_table_driven.cc.o
[ 26%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/gzip_stream.cc.o
Scanning dependencies of target renameAVX.h_generated
[ 26%] Generating include/renameavx.h
Generating renameavx.h: mkrename 4 8 avx
[ 26%] Built target renameAVX.h_generated
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_gemm_s8u8s32.cpp.o
[ 26%] Linking CXX shared library ../lib/libc10.so
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_gemm_s8u8s32_kern.cpp.o
[ 26%] Built target c10
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_gemv_s8u8s32.cpp.o
[ 26%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_kernel_gemv_s8u8s32_kern.cpp.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_an_kern.cpp.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/printer.cc.o
[ 27%] Built target fbgemm_generic
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/strtod.cc.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_at_kern.cpp.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_bn_kern.cpp.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/tokenizer.cc.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_bt_kern.cpp.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_sum_an_kern.cpp.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/io/zero_copy_stream_impl.cc.o
Scanning dependencies of target renameFMA4.h_generated
[ 27%] Generating include/renamefma4.h
Generating renamefma4.h: mkrename 4 8 fma4
[ 27%] Built target renameFMA4.h_generated
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/map_field.cc.o
Scanning dependencies of target renameAVX2128.h_generated
[ 27%] Generating include/renameavx2128.h
Generating renameavx2128.h: mkrename 2 4 avx2128
[ 27%] Built target renameAVX2128.h_generated
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/message.cc.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/reflection_ops.cc.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/service.cc.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_sum_at_kern.cpp.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/source_context.pb.cc.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/struct.pb.cc.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_sum_bn_kern.cpp.o
Scanning dependencies of target renameAVX2.h_generated
[ 27%] Generating include/renameavx2.h
Generating renameavx2.h: mkrename 4 8 avx2
[ 27%] Built target renameAVX2.h_generated
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/mathlimits.cc.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/stubs/substitute.cc.o
[ 27%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/text_format.cc.o
Scanning dependencies of target mkalias
[ 27%] Building C object sleef/src/libm/CMakeFiles/mkalias.dir/mkalias.c.o
Scanning dependencies of target renameAVX512F.h_generated
[ 27%] Generating include/renameavx512f.h
Generating renameavx512f.h: mkrename 8 16 avx512f
[ 27%] Built target renameAVX512F.h_generated
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/jit_avx512_core_u8_copy_sum_bt_kern.cpp.o
[ 27%] Linking C executable ../../bin/mkalias
[ 27%] Built target mkalias
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm/s8x8s32/ref_gemm_s8x8s32.cpp.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm_convolution.cpp.o
[ 27%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm_convolution_utils.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/timestamp.pb.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/type.pb.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/unknown_field_set.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/delimited_message_util.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/field_comparator.cc.o
Scanning dependencies of target renamedsp128.h_generated
[ 28%] Generating renamedsp128.h
[ 28%] Built target renamedsp128.h_generated
Scanning dependencies of target dispsse.c_generated
[ 28%] Generating dispsse.c
[ 28%] Built target dispsse.c_generated
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/field_mask_util.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/datapiece.cc.o
Scanning dependencies of target renameSSE2.h_generated
[ 28%] Generating include/renamesse2.h
Generating renamesse2.h: mkrename 2 4 sse2
[ 28%] Built target renameSSE2.h_generated
Scanning dependencies of target caffe2_nvrtc
[ 28%] Building CXX object caffe2/CMakeFiles/caffe2_nvrtc.dir/__/aten/src/ATen/hip/nvrtc_stub/ATenNVRTC.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/default_value_objectwriter.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm_inner_product.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/error_listener.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/field_mask_utility.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm_x8s8s32x_convolution.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/gemm_x8s8s32x_inner_product.cpp.o
In file included from /root/pytorch/aten/src/ATen/hip/nvrtc_stub/ATenNVRTC.cpp:1:
/root/pytorch/aten/src/ATen/hip/nvrtc_stub/ATenNVRTC.h:83:3: warning: 'hipCtxGetCurrent' is deprecated: This API is marked as deprecated and may not be supported in future releases.For more details please refer https://github.com/ROCm-Developer-Tools/HIP/tree/master/docs/markdown/hip_deprecated_api_list [-Wdeprecated-declarations]
AT_FORALL_NVRTC(CREATE_MEMBER)
 ^
/root/pytorch/aten/src/ATen/hip/nvrtc_stub/ATenNVRTC.h:75:5: note: expanded from macro 'AT_FORALL_NVRTC'
_(hipCtxGetCurrent) \
 ^
/opt/rocm/hip/include/hip/hcc_detail/hip_runtime_api.h:2224:1: note: 'hipCtxGetCurrent' has been explicitly marked deprecated here
DEPRECATED(DEPRECATED_MSG)
^
/opt/rocm/hip/include/hip/hcc_detail/hip_runtime_api.h:56:41: note: expanded from macro 'DEPRECATED'
#define DEPRECATED(msg) __attribute__ ((deprecated(msg)))
 ^
/root/pytorch/aten/src/ATen/hip/nvrtc_stub/ATenNVRTC.cpp:9:3: warning: 'hipCtxGetCurrent' is deprecated: This API is marked as deprecated and may not be supported in future releases.For more details please refer https://github.com/ROCm-Developer-Tools/HIP/tree/master/docs/markdown/hip_deprecated_api_list [-Wdeprecated-declarations]
AT_FORALL_NVRTC(CREATE_ASSIGN)
 ^
/root/pytorch/aten/src/ATen/hip/nvrtc_stub/ATenNVRTC.h:75:5: note: expanded from macro 'AT_FORALL_NVRTC'
_(hipCtxGetCurrent) \
 ^
/opt/rocm/hip/include/hip/hcc_detail/hip_runtime_api.h:2224:1: note: 'hipCtxGetCurrent' has been explicitly marked deprecated here
DEPRECATED(DEPRECATED_MSG)
^
/opt/rocm/hip/include/hip/hcc_detail/hip_runtime_api.h:56:41: note: expanded from macro 'DEPRECATED'
#define DEPRECATED(msg) __attribute__ ((deprecated(msg)))
 ^
2 warnings generated.
[ 28%] Linking CXX shared library ../lib/libcaffe2_nvrtc.so
[ 28%] Built target caffe2_nvrtc
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx2_1x1_conv_kernel_f32.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/json_escaping.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx2_1x1_convolution.cpp.o
[ 28%] Built target fbgemm_avx2
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx2_conv_kernel_f32.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/json_objectwriter.cc.o
Scanning dependencies of target mkrename_gnuabi
[ 28%] Building C object sleef/src/libm/CMakeFiles/mkrename_gnuabi.dir/mkrename_gnuabi.c.o
[ 28%] Linking C executable ../../bin/mkrename_gnuabi
[ 28%] Built target mkrename_gnuabi
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx2_convolution.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_1x1_conv_kernel.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_1x1_convolution.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_conv_kernel.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/json_stream_parser.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/object_writer.cc.o
Scanning dependencies of target mkmasked_gnuabi
[ 28%] Building C object sleef/src/libm/CMakeFiles/mkmasked_gnuabi.dir/mkmasked_gnuabi.c.o
[ 28%] Linking C executable ../../bin/mkmasked_gnuabi
Scanning dependencies of target arraymap
[ 28%] Building C object sleef/src/common/CMakeFiles/arraymap.dir/arraymap.c.o
[ 28%] Built target mkmasked_gnuabi
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/proto_writer.cc.o
[ 28%] Built target arraymap
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/protostream_objectsource.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/protostream_objectwriter.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_conv_winograd_kernel_f32.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/type_info.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/type_info_test_helper.cc.o
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/proto_writer.cc:31:
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/proto_writer.h:113:24: warning: 'RenderBytes' overrides a member function but is not marked 'override' [-Winconsistent-missing-override]
virtual ProtoWriter* RenderBytes(StringPiece name, StringPiece value) {
^
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/object_writer.h:99:25: note: overridden virtual function is here
virtual ObjectWriter* RenderBytes(StringPiece name, StringPiece value) = 0;
^
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/internal/utility.cc.o
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/protostream_objectwriter.cc:31:
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/protostream_objectwriter.h:45:
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/proto_writer.h:113:24: warning: 'RenderBytes' overrides a member function but is not marked 'override' [-Winconsistent-missing-override]
virtual ProtoWriter* RenderBytes(StringPiece name, StringPiece value) {
^
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/object_writer.h:99:25: note: overridden virtual function is here
virtual ObjectWriter* RenderBytes(StringPiece name, StringPiece value) = 0;
^
1 warning generated.
Scanning dependencies of target generate-torch-sources
[ 28%] Generating ../../torch/csrc/autograd/generated/Functions.cpp, ../../torch/csrc/autograd/generated/VariableType_0.cpp, ../../torch/csrc/autograd/generated/VariableType_1.cpp, ../../torch/csrc/autograd/generated/VariableType_2.cpp, ../../torch/csrc/autograd/generated/VariableType_3.cpp, ../../torch/csrc/autograd/generated/VariableType_4.cpp, ../../torch/csrc/jit/generated/register_aten_ops_0.cpp, ../../torch/csrc/jit/generated/register_aten_ops_1.cpp, ../../torch/csrc/jit/generated/register_aten_ops_2.cpp, ../../torch/csrc/nn/THNN.cpp, ../../torch/csrc/nn/THCUNN.cpp, ../../torch/csrc/autograd/generated/VariableType.h, ../../torch/csrc/autograd/generated/Functions.h, ../../torch/csrc/autograd/generated/variable_factories.h, ../../torch/csrc/autograd/generated/python_functions.cpp, ../../torch/csrc/autograd/generated/python_variable_methods.cpp, ../../torch/csrc/autograd/generated/python_torch_functions.cpp, ../../torch/csrc/autograd/generated/python_nn_functions.cpp, ../../torch/csrc/autograd/generated/python_functions.h, ../../torch/csrc/autograd/generated/python_variable_methods_dispatch.h, ../../torch/csrc/autograd/generated/python_torch_functions_dispatch.h, ../../torch/csrc/autograd/generated/python_nn_functions.h, ../../torch/csrc/autograd/generated/python_nn_functions_dispatch.h
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/type_info_test_helper.cc:31:
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/type_info_test_helper.h:42:
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/protostream_objectwriter.h:45:
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/proto_writer.h:113:24: warning: 'RenderBytes' overrides a member function but is not marked 'override' [-Winconsistent-missing-override]
virtual ProtoWriter* RenderBytes(StringPiece name, StringPiece value) {
^
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/object_writer.h:99:25: note: overridden virtual function is here
virtual ObjectWriter* RenderBytes(StringPiece name, StringPiece value) = 0;
^
Scanning dependencies of target torch_python_stubs
[ 28%] Generating ../../../torch/__init__.pyi, ../../../torch/nn/functional.pyi
1 warning generated.
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_convolution.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_convolution_winograd.cpp.o
1 warning generated.
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_common_lrn.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/json_util.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_fp32_wino_conv_2x3.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/message_differencer.cc.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/time_util.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_fp32_wino_conv_4x3.cpp.o
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/json_util.cc:41:
In file included from /root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/protostream_objectwriter.h:45:
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/proto_writer.h:113:24: warning: 'RenderBytes' overrides a member function but is not marked 'override' [-Winconsistent-missing-override]
virtual ProtoWriter* RenderBytes(StringPiece name, StringPiece value) {
^
/root/pytorch/third_party/protobuf/src/google/protobuf/util/internal/object_writer.h:99:25: note: overridden virtual function is here
virtual ObjectWriter* RenderBytes(StringPiece name, StringPiece value) = 0;
^
1 warning generated.
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/util/type_resolver_util.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_fp32_wino_conv_4x3_kernel.cpp.o
[ 28%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wire_format.cc.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_u8s8s32x_wino_convolution.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_x8s8s32x_1x1_conv_kernel.cpp.o
[ 28%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_x8s8s32x_1x1_convolution.cpp.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_x8s8s32x_conv_kernel.cpp.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_x8s8s32x_convolution.cpp.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_avx512_core_x8s8s32x_deconvolution.cpp.o
[ 29%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotobuf.dir/__/src/google/protobuf/wrappers.pb.cc.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_sse42_1x1_conv_kernel_f32.cpp.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_sse42_1x1_convolution.cpp.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_sse42_conv_kernel_f32.cpp.o
Writing ./torch/__init__.pyi
Writing ./torch/nn/functional.pyi
[ 29%] Built target torch_python_stubs
Scanning dependencies of target cpuinfo
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/init.c.o
[ 29%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_sse42_convolution.cpp.o
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/api.c.o
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/init.c.o
[ 29%] Linking CXX static library ../../../lib/libprotobuf.a
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/info.c.o
[ 29%] Built target libprotobuf
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/vendor.c.o
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/uarch.c.o
Scanning dependencies of target cpuinfo_internals
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/init.c.o
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/name.c.o
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/api.c.o
Scanning dependencies of target nnpack_reference_layers
[ 29%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/convolution-output.c.o
[ 29%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/init.c.o
[ 29%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/convolution-input-gradient.c.o
[ 30%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/topology.c.o
[ 30%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/info.c.o
[ 30%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/convolution-kernel.c.o
[ 30%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/vendor.c.o
[ 30%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/fully-connected-output.c.o
[ 30%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/isa.c.o
[ 30%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/cache/init.c.o
[ 30%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/uarch.c.o
[ 31%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/max-pooling-output.c.o
[ 31%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/softmax-output.c.o
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/name.c.o
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/cache/descriptor.c.o
[ 31%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/relu-output.c.o
[ 31%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_transpose_src_utils.cpp.o
[ 31%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack_reference_layers.dir/src/ref/relu-input-gradient.c.o
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/cache/deterministic.c.o
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/linux/init.c.o
[ 31%] Linking C static library ../../lib/libnnpack_reference_layers.a
[ 31%] Built target nnpack_reference_layers
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/x86/linux/cpuinfo.c.o
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/topology.c.o
[ 31%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/isa.c.o
Scanning dependencies of target gmock
Scanning dependencies of target gtest_main
[ 31%] Building CXX object third_party/googletest/googlemock/gtest/CMakeFiles/gtest_main.dir/src/gtest_main.cc.o
[ 32%] Building CXX object third_party/googletest/googlemock/CMakeFiles/gmock.dir/src/gmock-all.cc.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/cache/init.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/cache/descriptor.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/smallfile.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/cache/deterministic.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/multiline.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/linux/init.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/current.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/cpulist.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/linux/processors.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/x86/linux/cpuinfo.c.o
Scanning dependencies of target benchmark_main
[ 32%] Building CXX object third_party/benchmark/src/CMakeFiles/benchmark_main.dir/benchmark_main.cc.o
[ 32%] Linking C static library ../../lib/libcpuinfo.a
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/smallfile.c.o
[ 32%] Built target cpuinfo
[ 32%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_private.hip.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/multiline.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/current.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/cpulist.c.o
[ 32%] Building C object confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/linux/processors.c.o
[ 33%] Linking C static library ../../lib/libcpuinfo_internals.a
[ 33%] Built target cpuinfo_internals
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip.hip.o
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_allreduce_bcube.cc.o
[ 33%] Linking CXX static library ../../../../lib/libgtest_main.a
[ 33%] Built target gtest_main
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_allreduce_halving_doubling.cc.o
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_batch_normalization.cpp.o
[ 33%] Linking CXX static library ../../../lib/libbenchmark_main.a
[ 33%] Built target benchmark_main
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_dw_conv_kernel_f32.cpp.o
Scanning dependencies of target c10_hip
[ 33%] Building CXX object c10/hip/CMakeFiles/c10_hip.dir/HIPCachingAllocator.cpp.o
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_dw_convolution.cpp.o
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_allreduce_local.cc.o
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_allreduce_ring.cc.o
[ 33%] Linking CXX static library ../../../lib/libgmock.a
[ 33%] Built target gmock
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_allreduce_ring_chunked.cc.o
[ 33%] Building CXX object c10/hip/CMakeFiles/c10_hip.dir/HIPStream.cpp.o
[ 33%] Building CXX object c10/hip/CMakeFiles/c10_hip.dir/impl/HIPGuardImpl.cpp.o
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_eltwise.cpp.o
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_i8i8_pooling.cpp.o
[ 33%] Building HIPCC object third_party/gloo/gloo/CMakeFiles/gloo_hip.dir/__/__/__/build/third_party/gloo/hip/gloo/gloo_hip_generated_hip_broadcast_one_to_all.cc.o
[ 33%] Building CXX object c10/hip/CMakeFiles/c10_hip.dir/impl/HIPTest.cpp.o
[ 33%] Linking CXX shared library ../../lib/libc10_hip.so
[ 33%] Built target c10_hip
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_lrn.cpp.o
Scanning dependencies of target __aten_op_header_gen
[ 33%] Generating contrib/aten/aten_op.h
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_lrn_kernel_f32.cpp.o
Scanning dependencies of target headers
[ 33%] Generating ../../../include/sleef.h
Generating sleef.h: mkrename 2 4 __m128d __m128 __m128i __m128i __SSE2__
Generating sleef.h: mkrename 2 4 __m128d __m128 __m128i __m128i __SSE2__ sse2
Generating sleef.h: mkrename 2 4 __m128d __m128 __m128i __m128i __SSE2__ sse4
Generating sleef.h: mkrename 4 8 __m256d __m256 __m128i struct\ {\ __m128i\ x,\ y;\ } __AVX__
Generating sleef.h: mkrename 4 8 __m256d __m256 __m128i struct\ {\ __m128i\ x,\ y;\ } __AVX__ avx
Generating sleef.h: mkrename 4 8 __m256d __m256 __m128i struct\ {\ __m128i\ x,\ y;\ } __AVX__ fma4
Generating sleef.h: mkrename 4 8 __m256d __m256 __m128i __m256i __AVX__ avx2
Generating sleef.h: mkrename 2 4 __m128d __m128 __m128i __m128i __SSE2__ avx2128
Generating sleef.h: mkrename 8 16 __m512d __m512 __m256i __m512i __AVX512F__
Generating sleef.h: mkrename 8 16 __m512d __m512 __m256i __m512i __AVX512F__ avx512f
[ 33%] Built target headers
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_pool_kernel_f32.cpp.o
Scanning dependencies of target dispavx.c_generated
[ 33%] Generating dispavx.c
[ 33%] Built target dispavx.c_generated
[ 33%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_pooling.cpp.o
Scanning dependencies of target sleefsse4
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefsse4.dir/sleefsimdsp.c.o
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefsse4.dir/sleefsimddp.c.o
[ 34%] Built target sleefsse4
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_reorder.cpp.o
Scanning dependencies of target sleefavx
Scanning dependencies of target sleeffma4
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleeffma4.dir/sleefsimdsp.c.o
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefavx.dir/sleefsimdsp.c.o
Scanning dependencies of target sleefavx2128
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefavx2128.dir/sleefsimdsp.c.o
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleeffma4.dir/sleefsimddp.c.o
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefavx2128.dir/sleefsimddp.c.o
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefavx.dir/sleefsimddp.c.o
Writing torch/csrc/nn/THNN.cpp
Writing torch/csrc/nn/THCUNN.cpp
Writing torch/csrc/autograd/generated/python_functions.h
Writing torch/csrc/autograd/generated/python_functions.cpp
Writing torch/csrc/autograd/generated/python_variable_methods.cpp
Writing torch/csrc/autograd/generated/python_variable_methods_dispatch.h
Writing torch/csrc/autograd/generated/python_torch_functions.cpp
Writing torch/csrc/autograd/generated/python_torch_functions_dispatch.h
Writing torch/csrc/autograd/generated/python_nn_functions.cpp
Writing torch/csrc/autograd/generated/python_nn_functions.h
Writing torch/csrc/autograd/generated/python_nn_functions_dispatch.h
Writing torch/csrc/autograd/generated/VariableType.h
Writing torch/csrc/autograd/generated/VariableType_0.cpp
Writing torch/csrc/autograd/generated/VariableType_1.cpp
Writing torch/csrc/autograd/generated/VariableType_2.cpp
Writing torch/csrc/autograd/generated/VariableType_3.cpp
Writing torch/csrc/autograd/generated/VariableType_4.cpp
Writing torch/csrc/autograd/generated/VariableTypeEverything.cpp
Writing torch/csrc/autograd/generated/Functions.h
Writing torch/csrc/autograd/generated/Functions.cpp
Writing torch/csrc/autograd/generated/variable_factories.h
Writing torch/csrc/jit/generated/register_aten_ops_0.cpp
Writing torch/csrc/jit/generated/register_aten_ops_1.cpp
Writing torch/csrc/jit/generated/register_aten_ops_2.cpp
[ 34%] Built target sleefavx2128
Scanning dependencies of target sleefavx2
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefavx2.dir/sleefsimdsp.c.o
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefavx2.dir/sleefsimddp.c.o
[ 34%] Built target sleeffma4
Scanning dependencies of target alias_avx512f.h_generated
[ 34%] Generating alias_avx512f.h
[ 34%] Built target generate-torch-sources
Scanning dependencies of target dispsse_obj
[ 34%] Built target alias_avx512f.h_generated
[ 34%] Building C object sleef/src/libm/CMakeFiles/dispsse_obj.dir/dispsse.c.o
Scanning dependencies of target sleefsse2
[ 34%] Built target sleefavx
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefsse2.dir/sleefsimdsp.c.o
Scanning dependencies of target libprotoc
[ 34%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/code_generator.cc.o
Scanning dependencies of target qnnpack
[ 34%] Built target sleefavx2
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/init.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/add.c.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/jit_uni_reorder_utils.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/average-pooling.c.o
Skipping backward Because of Ret: void (void)
Skipping backward Because of Ret: void (void)
Skipping backward Because of Ret: void (void)
Skipping backward Because of Ret: void (void)
Skipping set_data Because of Ret: void (void)
Skipping _cudnn_rnn_backward Because of Arg: std::array<bool,4> (std::array<bool,4>)
Skipping _cudnn_init_dropout_state because it is a factory method
Skipping _fused_dropout Because of Arg: Generator * (Generator *)
Skipping _sobol_engine_draw Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping arange because it is a factory method
Skipping argmax Because of Arg: c10::optional<int64_t> (int64_t)
Skipping argmax Because of Arg: c10::optional<int64_t> (int64_t)
Skipping argmin Because of Arg: c10::optional<int64_t> (int64_t)
Skipping argmin Because of Arg: c10::optional<int64_t> (int64_t)
Skipping as_strided Because of Arg: c10::optional<int64_t> (int64_t)
Skipping bartlett_window because it is a factory method
Skipping bernoulli Because of Arg: Generator * (Generator *)
Skipping bernoulli Because of Arg: Generator * (Generator *)
Skipping blackman_window because it is a factory method
Skipping clamp Because of Arg: c10::optional<Scalar> (Scalar)
Skipping clamp Because of Arg: c10::optional<Scalar> (Scalar)
Skipping contiguous Because of Arg: MemoryFormat (MemoryFormat)
Skipping cumsum Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping cumprod Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping einsum Because of Arg: std::string (std::string)
Skipping empty because it is a factory method
Skipping _empty_affine_quantized because it is a factory method
Skipping empty_like because it is a factory method
Skipping empty_strided because it is a factory method
Skipping eye because it is a factory method
Skipping full because it is a factory method
Skipping full_like because it is a factory method
Skipping from_file because it is a factory method
Skipping hann_window because it is a factory method
Skipping hamming_window because it is a factory method
Skipping _cufft_set_plan_cache_max_size Because of Ret: void (void)
Skipping _cufft_clear_plan_cache Because of Ret: void (void)
Skipping fbgemm_linear_quantize_weight Because of Ret: double (double)
Skipping linspace because it is a factory method
Skipping logspace because it is a factory method
Skipping log_softmax Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping mean Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping mean Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping miopen_rnn_backward Because of Arg: std::array<bool,4> (std::array<bool,4>)
Skipping ones because it is a factory method
Skipping ones_like because it is a factory method
Skipping scalar_tensor because it is a factory method
Skipping rand because it is a factory method
Skipping rand_like because it is a factory method
Skipping randint because it is a factory method
Skipping randint_like because it is a factory method
Skipping randn because it is a factory method
Skipping randn_like because it is a factory method
Skipping randperm because it is a factory method
Skipping range because it is a factory method
Skipping repeat_interleave Because of Arg: c10::optional<int64_t> (int64_t)
Skipping repeat_interleave Because of Arg: c10::optional<int64_t> (int64_t)
Skipping rrelu Because of Arg: Generator * (Generator *)
Skipping softmax Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping stft Because of Arg: c10::optional<int64_t> (int64_t)
Skipping stft Because of Arg: c10::optional<int64_t> (int64_t)
Skipping stft Because of Arg: c10::optional<int64_t> (int64_t)
Skipping stft Because of Arg: c10::optional<int64_t> (int64_t)
Skipping stft Because of Arg: c10::optional<int64_t> (int64_t)
Skipping sum Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping sum Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping prod Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping prod Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping unique_consecutive Because of Arg: c10::optional<int64_t> (int64_t)
Skipping zeros because it is a factory method
Skipping zeros_like because it is a factory method
Skipping _standard_gamma Because of Arg: Generator * (Generator *)
Skipping _sample_dirichlet Because of Arg: Generator * (Generator *)
Skipping poisson Because of Arg: Generator * (Generator *)
Skipping _sparse_sum Because of Arg: ScalarType (ScalarType)
Skipping _sparse_sum Because of Arg: ScalarType (ScalarType)
Skipping norm Because of Arg: c10::optional<Scalar> (Scalar)
Skipping norm Because of Arg: c10::optional<Scalar> (Scalar)
Skipping norm Because of Arg: c10::optional<Scalar> (Scalar)
Skipping norm Because of Arg: c10::optional<Scalar> (Scalar)
Skipping sparse_coo_tensor because it is a factory method
Skipping _sparse_coo_tensor_unsafe because it is a factory method
Skipping _sparse_coo_tensor_with_dims because it is a factory method
Skipping _sparse_coo_tensor_with_dims_and_tensors because it is a factory method
Skipping quantize_linear Because of Arg: ScalarType (ScalarType)
Skipping quantize_linear_per_channel Because of Arg: ScalarType (ScalarType)
Skipping _dequantize_linear Because of Arg: ScalarType (ScalarType)
Skipping q_scale Because of Ret: double (double)
Skipping qscheme Because of Ret: QScheme (QScheme)
Skipping to because it is a factory method
Skipping quantized_lstm Because of Arg: c10::optional<ScalarType> (ScalarType)
Skipping cross Because of Arg: c10::optional<int64_t> (int64_t)
Skipping tril_indices because it is a factory method
Skipping triu_indices because it is a factory method
Skipping multinomial Because of Arg: Generator * (Generator *)
Skipping _multinomial_alias_draw Because of Arg: Generator * (Generator *)
Skipping normal because it is a factory method
Skipping rrelu_with_noise Because of Arg: Generator * (Generator *)
Skipping avg_pool2d Because of Arg: c10::optional<int64_t> (int64_t)
Skipping avg_pool2d_backward Because of Arg: c10::optional<int64_t> (int64_t)
Skipping avg_pool3d Because of Arg: c10::optional<int64_t> (int64_t)
Skipping avg_pool3d_backward Because of Arg: c10::optional<int64_t> (int64_t)
[ 34%] Built target __aten_op_header_gen
[ 34%] Building C object sleef/src/libm/CMakeFiles/sleefsse2.dir/sleefsimddp.c.o
[ 34%] Built target dispsse_obj
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/nchw_pooling.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/channel-shuffle.c.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ncsp_batch_normalization.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/clamp.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/convolution.c.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/nhwc_pooling.cpp.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/nspc_batch_normalization.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/deconvolution.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/fully-connected.c.o
[ 34%] Built target sleefsse2
[ 34%] Generating src/x86_64-fma/2d-fourier-8x8.py.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/global-average-pooling.c.o
Scanning dependencies of target gmock_main
[ 34%] Building CXX object third_party/googletest/googlemock/CMakeFiles/gmock_main.dir/src/gmock_main.cc.o
[ 34%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/command_line_interface.cc.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/leaky-relu.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/max-pooling.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/sigmoid.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/softargmax.c.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_batch_normalization.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/operator-delete.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/indirection.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/operator-run.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8lut32norm/scalar.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8lut/scalar.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/sgemm/6x8-psimd.c.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_convolution.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8avgpool/mp8x9p8q-sse2.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8avgpool/up8x9-sse2.c.o
[ 34%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_deconvolution.cpp.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8avgpool/up8xm-sse2.c.o
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8conv/4x4c2-sse2.c.o
[ 34%] Linking CXX static library ../../../lib/libgmock_main.a
[ 34%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8dwconv/mp8x25-sse2.c.o
[ 34%] Built target gmock_main
Scanning dependencies of target fbgemm
[ 35%] Linking CXX static library ../../lib/libfbgemm.a
[ 35%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_eltwise.cpp.o
[ 35%] Built target fbgemm
Scanning dependencies of target c10_typeid_test
Scanning dependencies of target c10_tempfile_test
[ 35%] Building CXX object c10/test/CMakeFiles/c10_typeid_test.dir/util/typeid_test.cpp.o
[ 35%] Building CXX object c10/test/CMakeFiles/c10_tempfile_test.dir/util/tempfile_test.cpp.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8dwconv/up8x9-sse2.c.o
[ 36%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_inner_product.cpp.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gavgpool/mp8x7p7q-sse2.c.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gavgpool/up8x7-sse2.c.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gavgpool/up8xm-sse2.c.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gemm/2x4c8-sse2.c.o
[ 36%] Generating src/x86_64-fma/2d-fourier-16x16.py.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8gemm/4x4c2-sse2.c.o
[ 36%] Linking CXX executable ../../bin/c10_tempfile_test
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/q8vadd/sse2.c.o
[ 36%] Built target c10_tempfile_test
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8clamp/sse2.c.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8maxpool/16x9p8q-sse2.c.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8maxpool/sub16-sse2.c.o
[ 36%] Linking CXX executable ../../bin/c10_typeid_test
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/u8rmax/sse2.c.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/x2-sse2.c.o
[ 36%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_lrn.cpp.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/x3-sse2.c.o
[ 36%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_pooling.cpp.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/x4-sse2.c.o
[ 36%] Built target c10_typeid_test
[ 36%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_enum.cc.o
[ 36%] Building C object confu-deps/QNNPACK/CMakeFiles/qnnpack.dir/src/x8zip/xm-sse2.c.o
[ 36%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_shuffle.cpp.o
[ 36%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_enum_field.cc.o
[ 36%] Linking C static library ../../lib/libqnnpack.a
[ 36%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_extension.cc.o
[ 36%] Built target qnnpack
Scanning dependencies of target c10_InlineDeviceGuard_test
[ 36%] Building CXX object c10/test/CMakeFiles/c10_InlineDeviceGuard_test.dir/core/impl/InlineDeviceGuard_test.cpp.o
Scanning dependencies of target c10_either_test
[ 36%] Building CXX object c10/test/CMakeFiles/c10_either_test.dir/util/either_test.cpp.o
[ 36%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_field.cc.o
Scanning dependencies of target c10_bfloat16_test
[ 36%] Building CXX object c10/test/CMakeFiles/c10_bfloat16_test.dir/util/bfloat16_test.cpp.o
Scanning dependencies of target c10_registry_test
[ 36%] Building CXX object c10/test/CMakeFiles/c10_registry_test.dir/util/registry_test.cpp.o
[ 36%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_file.cc.o
[ 36%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/cell_common.cpp.o
[ 37%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/ref_softmax.cpp.o
[ 37%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_generator.cc.o
[ 37%] Linking CXX executable ../../bin/c10_bfloat16_test
[ 37%] Built target c10_bfloat16_test
[ 37%] Linking CXX executable ../../bin/c10_InlineDeviceGuard_test
[ 37%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_helpers.cc.o
[ 37%] Built target c10_InlineDeviceGuard_test
[ 37%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_map_field.cc.o
[ 37%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_message.cc.o
[ 37%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/cell_gru.cpp.o
[ 37%] Linking CXX executable ../../bin/c10_registry_test
[ 37%] Built target c10_registry_test
[ 37%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/cell_gru_lbr.cpp.o
Scanning dependencies of target c10_TensorTypeId_test
[ 37%] Building CXX object c10/test/CMakeFiles/c10_TensorTypeId_test.dir/core/TensorTypeId_test.cpp.o
Scanning dependencies of target c10_StreamGuard_test
[ 37%] Building CXX object c10/test/CMakeFiles/c10_StreamGuard_test.dir/core/StreamGuard_test.cpp.o
[ 37%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_message_field.cc.o
[ 38%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_padding_optimizer.cc.o
[ 38%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_primitive_field.cc.o
[ 38%] Linking CXX executable ../../bin/c10_TensorTypeId_test
[ 38%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/cell_lstm.cpp.o
[ 38%] Built target c10_TensorTypeId_test
[ 38%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/cell_rnn.cpp.o
Scanning dependencies of target c10_DeviceGuard_test
[ 38%] Building CXX object c10/test/CMakeFiles/c10_DeviceGuard_test.dir/core/DeviceGuard_test.cpp.o
[ 39%] Linking CXX executable ../../bin/c10_StreamGuard_test
Scanning dependencies of target c10_Half_test
[ 39%] Built target c10_StreamGuard_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_Half_test.dir/util/Half_test.cpp.o
Scanning dependencies of target c10_LeftRight_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_LeftRight_test.dir/util/LeftRight_test.cpp.o
[ 39%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/ref_rnn.cpp.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_service.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/cpp/cpp_string_field.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_doc_comment.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_enum.cc.o
[ 39%] Linking CXX executable ../../bin/c10_DeviceGuard_test
[ 39%] Linking CXX executable ../../bin/c10_Half_test
[ 39%] Built target c10_DeviceGuard_test
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_enum_field.cc.o
[ 39%] Built target c10_Half_test
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_field_base.cc.o
[ 39%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/rnn/rnn_utils.cpp.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_generator.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_helpers.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_map_field.cc.o
[ 39%] Generating src/x86_64-fma/2d-winograd-8x8-3x3.py.o
[ 39%] Linking CXX executable ../../bin/c10_LeftRight_test
[ 39%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/simple_concat.cpp.o
[ 39%] Built target c10_LeftRight_test
[ 39%] Building CXX object third_party/ideep/mkl-dnn/src/CMakeFiles/mkldnn.dir/cpu/simple_sum.cpp.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_message.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_message_field.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_primitive_field.cc.o
Scanning dependencies of target c10_Array_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_Array_test.dir/util/Array_test.cpp.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_reflection_class.cc.o
Scanning dependencies of target c10_TypeList_test
Scanning dependencies of target c10_Metaprogramming_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_TypeList_test.dir/util/TypeList_test.cpp.o
[ 39%] Building CXX object c10/test/CMakeFiles/c10_Metaprogramming_test.dir/util/Metaprogramming_test.cpp.o
[ 39%] Generating src/x86_64-fma/blas/s8gemm.py.o
[ 39%] Linking CXX executable ../../bin/c10_either_test
[ 39%] Built target c10_either_test
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_repeated_enum_field.cc.o
Scanning dependencies of target c10_InlineStreamGuard_test
[ 39%] Generating src/x86_64-fma/blas/c8gemm.py.o
[ 39%] Building CXX object c10/test/CMakeFiles/c10_InlineStreamGuard_test.dir/core/impl/InlineStreamGuard_test.cpp.o
Scanning dependencies of target c10_flags_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_flags_test.dir/util/flags_test.cpp.o
Scanning dependencies of target c10_TypeTraits_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_TypeTraits_test.dir/util/TypeTraits_test.cpp.o
[ 39%] Linking CXX executable ../../bin/c10_Array_test
[ 39%] Built target c10_Array_test
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_repeated_message_field.cc.o
Scanning dependencies of target c10_intrusive_ptr_test
[ 39%] Building CXX object c10/test/CMakeFiles/c10_intrusive_ptr_test.dir/util/intrusive_ptr_test.cpp.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_repeated_primitive_field.cc.o
[ 39%] Linking CXX static library ../../../../lib/libmkldnn.a
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_source_generator_base.cc.o
[ 39%] Generating src/x86_64-fma/blas/s4c6gemm.py.o
[ 39%] Linking CXX executable ../../bin/c10_TypeList_test
[ 39%] Built target c10_TypeList_test
[ 39%] Built target mkldnn
Scanning dependencies of target c10_logging_test
Scanning dependencies of target c10_hip_HIPTest
[ 39%] Building CXX object c10/test/CMakeFiles/c10_logging_test.dir/util/logging_test.cpp.o
[ 39%] Building CXX object c10/hip/test/CMakeFiles/c10_hip_HIPTest.dir/impl/HIPTest.cpp.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/csharp/csharp_wrapper_field.cc.o
[ 39%] Linking CXX executable ../../bin/c10_Metaprogramming_test
[ 39%] Linking CXX executable ../../bin/c10_TypeTraits_test
[ 39%] Built target c10_Metaprogramming_test
[ 39%] Linking CXX executable ../../bin/c10_flags_test
[ 39%] Built target c10_TypeTraits_test
Scanning dependencies of target dispavx_obj
Scanning dependencies of target sleefavx512f
[ 39%] Building C object sleef/src/libm/CMakeFiles/dispavx_obj.dir/dispavx.c.o
[ 39%] Building C object sleef/src/libm/CMakeFiles/sleefavx512f.dir/sleefsimdsp.c.o
[ 39%] Generating src/x86_64-fma/blas/conv1x1.py.o
/root/pytorch/c10/test/util/intrusive_ptr_test.cpp:319:8: warning: explicitly assigning value of variable of type 'intrusive_ptr<(anonymous namespace)::SomeClass>' (aka 'intrusive_ptr<(anonymous namespace)::SomeClass0Parameters>') to itself [-Wself-assign-overloaded]
obj1 = obj1;
 ~~~~ ^ ~~~~
/root/pytorch/c10/test/util/intrusive_ptr_test.cpp:325:8: warning: explicitly assigning value of variable of type 'intrusive_ptr<(anonymous namespace)::SomeClass>' (aka 'intrusive_ptr<(anonymous namespace)::SomeClass0Parameters>') to itself [-Wself-assign-overloaded]
obj1 = obj1;
 ~~~~ ^ ~~~~
/root/pytorch/c10/test/util/intrusive_ptr_test.cpp:333:8: warning: explicitly assigning value of variable of type 'intrusive_ptr<(anonymous namespace)::SomeClass>' (aka 'intrusive_ptr<(anonymous namespace)::SomeClass0Parameters>') to itself [-Wself-assign-overloaded]
obj1 = obj1;
 ~~~~ ^ ~~~~
[ 39%] Built target c10_flags_test
[ 39%] Building C object sleef/src/libm/CMakeFiles/sleefavx512f.dir/sleefsimddp.c.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_context.cc.o
/root/pytorch/c10/test/util/intrusive_ptr_test.cpp:1921:8: warning: explicitly assigning value of variable of type 'weak_intrusive_ptr<(anonymous namespace)::SomeClass>' (aka 'weak_intrusive_ptr<(anonymous namespace)::SomeClass0Parameters>') to itself [-Wself-assign-overloaded]
obj1 = obj1;
 ~~~~ ^ ~~~~
/root/pytorch/c10/test/util/intrusive_ptr_test.cpp:1950:8: warning: explicitly assigning value of variable of type 'weak_intrusive_ptr<(anonymous namespace)::SomeClass>' (aka 'weak_intrusive_ptr<(anonymous namespace)::SomeClass0Parameters>') to itself [-Wself-assign-overloaded]
obj1 = obj1;
 ~~~~ ^ ~~~~
/root/pytorch/c10/test/util/intrusive_ptr_test.cpp:1959:8: warning: explicitly assigning value of variable of type 'weak_intrusive_ptr<(anonymous namespace)::SomeClass>' (aka 'weak_intrusive_ptr<(anonymous namespace)::SomeClass0Parameters>') to itself [-Wself-assign-overloaded]
obj1 = obj1;
 ~~~~ ^ ~~~~
[ 39%] Generating src/x86_64-fma/blas/sgemm.py.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_doc_comment.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum.cc.o
[ 39%] Linking CXX executable ../../../bin/c10_hip_HIPTest
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum_field.cc.o
[ 39%] Linking CXX executable ../../bin/c10_InlineStreamGuard_test
[ 39%] Built target c10_hip_HIPTest
[ 39%] Built target dispavx_obj
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum_field_lite.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_enum_lite.cc.o
[ 39%] Built target sleefavx512f
Scanning dependencies of target sleef
[ 39%] Built target c10_InlineStreamGuard_test
[ 39%] Building C object sleef/src/libm/CMakeFiles/sleef.dir/sleefsp.c.o
[ 39%] Building C object sleef/src/libm/CMakeFiles/sleef.dir/sleefdp.c.o
[ 39%] Generating src/x86_64-fma/max-pooling.py.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_extension.cc.o
[ 39%] Linking CXX executable ../../bin/c10_logging_test
[ 39%] Generating src/x86_64-fma/relu.py.o
[ 39%] Generating src/x86_64-fma/softmax.py.o
[ 39%] Built target c10_logging_test
[ 39%] Building C object sleef/src/libm/CMakeFiles/sleef.dir/sleefld.c.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_extension_lite.cc.o
[ 39%] Building C object sleef/src/libm/CMakeFiles/sleef.dir/sleefqp.c.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_field.cc.o
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_file.cc.o
[ 39%] Linking C static library ../../lib/libsleef.a
[ 39%] Built target sleef
[ 39%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_generator.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_generator_factory.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_helpers.cc.o
[ 40%] Generating src/x86_64-fma/blas/sdotxf.py.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_lazy_message_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_lazy_message_field_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_map_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_map_field_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_builder.cc.o
[ 40%] Generating src/x86_64-fma/blas/shdotxf.py.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_builder_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_field_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_message_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_name_resolver.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_primitive_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_primitive_field_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_service.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_string_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_shared_code_generator.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/java/java_string_field_lite.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/js/js_generator.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/js/well_known_types_embed.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_enum.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_enum_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_file.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_extension.cc.o
Scanning dependencies of target nnpack
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/init.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/convolution-inference.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/fully-connected-inference.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/pooling-output.c.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_generator.cc.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/relu-output.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/softmax-output.c.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_helpers.cc.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/fully-connected-output.c.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_map_field.cc.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/relu-input-gradient.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/convolution-input-gradient.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/convolution-kernel-gradient.c.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/convolution-output.c.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_message.cc.o
[ 40%] Building C object confu-deps/NNPACK/CMakeFiles/nnpack.dir/src/x86_64-fma/softmax.c.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_message_field.cc.o
[ 40%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_oneof.cc.o
[ 40%] Linking C static library ../../lib/libnnpack.a
[ 40%] Built target nnpack
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/objectivec/objectivec_primitive_field.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/php/php_generator.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/plugin.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/plugin.pb.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/python/python_generator.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/ruby/ruby_generator.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/subprocess.cc.o
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/libprotoc.dir/__/src/google/protobuf/compiler/zip_writer.cc.o
Scanning dependencies of target gloo_hip
[ 41%] Linking CXX static library ../../../lib/libgloo_hip.a
[ 41%] Built target gloo_hip
[ 41%] Linking CXX static library ../../../lib/libprotoc.a
[ 41%] Built target libprotoc
Scanning dependencies of target protoc
[ 41%] Building CXX object third_party/protobuf/cmake/CMakeFiles/protoc.dir/__/src/google/protobuf/compiler/main.cc.o
[ 41%] Linking CXX executable ../../../bin/protoc
[ 41%] Built target protoc
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/torch.proto
Scanning dependencies of target gen_onnx_proto
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/caffe2_legacy.proto
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/prof_dag.proto
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/caffe2.proto
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/hsm.proto
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/predictor_consts.proto
[ 41%] Running C++/Python protocol buffer compiler on /root/pytorch/caffe2/proto/metanet.proto
[ 41%] Running gen_proto.py on onnx/onnx.in.proto
Processing /root/pytorch/third_party/onnx/onnx/onnx.in.proto
Writing /root/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Writing /root/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto3
Writing /root/pytorch/build/third_party/onnx/onnx/onnx-ml.pb.h
generating /root/pytorch/build/third_party/onnx/onnx/onnx_pb.py
[ 41%] Running C++ protocol buffer compiler on /root/pytorch/build/third_party/onnx/onnx/onnx_onnx_torch-ml.proto
Scanning dependencies of target Caffe2_PROTO
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/caffe2.pb.cc.o
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/caffe2_legacy.pb.cc.o
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/hsm.pb.cc.o
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/metanet.pb.cc.o
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/prof_dag.pb.cc.o
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/torch.pb.cc.o
[ 41%] Building CXX object caffe2/proto/CMakeFiles/Caffe2_PROTO.dir/predictor_consts.pb.cc.o
[ 41%] Built target gen_onnx_proto
[ 41%] Running gen_proto.py on onnx/onnx-operators.in.proto
Processing /root/pytorch/third_party/onnx/onnx/onnx-operators.in.proto
Writing /root/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Writing /root/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto3
Writing /root/pytorch/build/third_party/onnx/onnx/onnx-operators-ml.pb.h
generating /root/pytorch/build/third_party/onnx/onnx/onnx_operators_pb.py
[ 41%] Running C++ protocol buffer compiler on /root/pytorch/build/third_party/onnx/onnx/onnx-operators_onnx_torch-ml.proto
Scanning dependencies of target onnx_proto
[ 41%] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx_onnx_torch-ml.pb.cc.o
[ 41%] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-operators_onnx_torch-ml.pb.cc.o
6 warnings generated.
[ 41%] Linking CXX executable ../../bin/c10_intrusive_ptr_test
[ 41%] Built target c10_intrusive_ptr_test
[ 41%] Built target Caffe2_PROTO
Scanning dependencies of target caffe2_protos
Scanning dependencies of target Caffe2_perfkernels_avx512
[ 41%] Linking CXX static library ../lib/libcaffe2_protos.a
Scanning dependencies of target caffe2_dnnlowp_avx2_ops
[ 41%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx512.dir/common_avx512.cc.o
[ 41%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/pool_dnnlowp_op_avx2.cc.o
Scanning dependencies of target Caffe2_perfkernels_avx
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/elementwise_sum_dnnlowp_op_avx2.cc.o
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/fully_connected_fake_lowp_op_avx2.cc.o
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/group_norm_dnnlowp_op_avx2.cc.o
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/relu_dnnlowp_op_avx2.cc.o
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/transpose.cc.o
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/spatial_batch_norm_dnnlowp_op_avx2.cc.o
[ 42%] Building CXX object caffe2/quantization/server/CMakeFiles/caffe2_dnnlowp_avx2_ops.dir/norm_minimization_avx2.cc.o
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/common_avx.cc.o
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/adagrad_avx.cc.o
Scanning dependencies of target Caffe2_perfkernels_avx2
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/common_avx2.cc.o
[ 42%] Built target caffe2_protos
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/embedding_lookup_avx2.cc.o
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx.dir/typed_axpy_avx.cc.o
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/embedding_lookup_fused_8bit_rowwise_avx2.cc.o
[ 42%] Linking CXX static library ../../lib/libonnx_proto.a
[ 42%] Built target onnx_proto
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/math_cpu_avx2.cc.o
[ 42%] Building CXX object caffe2/perfkernels/CMakeFiles/Caffe2_perfkernels_avx2.dir/typed_axpy_avx2.cc.o
[ 43%] Linking CXX static library ../../lib/libCaffe2_perfkernels_avx512.a
[ 43%] Built target Caffe2_perfkernels_avx512
[ 43%] Built target caffe2_dnnlowp_avx2_ops
Scanning dependencies of target onnx
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/checker.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/__/__/caffe2/onnx/torch_ops/schema.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/__/__/caffe2/onnx/torch_ops/defs.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/interned_strings.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/assertions.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/model_helpers.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/ir_pb_converter.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/common/status.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/attr_proto_util.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/controlflow/defs.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/controlflow/old.cc.o
[ 43%] Linking CXX static library ../../lib/libCaffe2_perfkernels_avx.a
[ 43%] Built target Caffe2_perfkernels_avx
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/data_type_utils.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/function.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/generator/defs.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/generator/old.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/logical/defs.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/logical/old.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/math/defs.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/math/old.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/nn/defs.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/nn/old.cc.o
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/object_detection/defs.cc.o
[ 43%] Linking CXX static library ../../lib/libCaffe2_perfkernels_avx2.a
[ 43%] Built target Caffe2_perfkernels_avx2
[ 43%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/quantization/defs.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/reduction/defs.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/rnn/defs.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/rnn/old.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/schema.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/defs.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/old.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor/utils.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/tensor_proto_util.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/traditionalml/defs.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/defs/traditionalml/old.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/onnxifi_utils.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/optimize.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/pass.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/pass_manager.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/optimizer/pass_registry.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/shape_inference/implementation.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/version_converter/convert.cc.o
[ 44%] Building CXX object third_party/onnx/CMakeFiles/onnx.dir/onnx/version_converter/helper.cc.o
[ 44%] Linking CXX static library ../../lib/libonnx.a
[ 44%] Built target onnx
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHBlas.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHReduceApplyUtils.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHSleep.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHStorageCopy.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHStorage.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensor.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorCopy.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMathBlas.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMathPairwise.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMath.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMathMagma.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMathScan.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMathReduce.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorIndex.hip.o
[ 44%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorRandom.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorScatterGather.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorTopK.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorSort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHSortUtils.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/torch_generated_THHTensorMode.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortByte.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTByte.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseByte.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareByte.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceByte.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedByte.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortChar.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTChar.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseChar.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareChar.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceChar.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedChar.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortShort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTShort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseShort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareShort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceShort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedShort.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortInt.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTInt.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseInt.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareInt.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceInt.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedInt.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortLong.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTLong.hip.o
[ 45%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseLong.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareLong.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceLong.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedLong.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortHalf.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTHalf.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseHalf.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareHalf.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceHalf.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedHalf.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortFloat.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTFloat.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseFloat.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareFloat.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceFloat.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedFloat.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorSortDouble.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTDouble.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseDouble.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareDouble.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceDouble.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedDouble.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareTBool.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathCompareBool.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathReduceBool.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMaskedBool.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THH/generated/torch_generated_THHTensorMathPointwiseBool.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_AbsCriterion.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_Abs.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_BCECriterion.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_ClassNLLCriterion.hip.o
[ 46%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_DistKLDivCriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_ELU.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_FeatureLPPooling.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_GatedLinearUnit.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_HardTanh.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_IndexLinear.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_L1Cost.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_LeakyReLU.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_LogSigmoid.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_LookupTableBag.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_LookupTable.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_MarginCriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_MSECriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_MultiLabelMarginCriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_MultiMarginCriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_RReLU.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_Sigmoid.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SmoothL1Criterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SoftMarginCriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SoftPlus.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SoftShrink.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SpatialClassNLLCriterion.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SpatialConvolutionLocal.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SpatialConvolutionMM.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SpatialCrossMapLRN.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SpatialDepthwiseConvolution.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_SpatialSubSampling.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_Sqrt.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_Square.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_Tanh.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_TemporalConvolution.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_TemporalMaxPooling.hip.o
[ 47%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_TemporalRowConvolution.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/THHUNN/torch_generated_VolumetricConvolution.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/hip/detail/torch_generated_IndexUtils.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Activation.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_AdaptiveAveragePooling.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_AdaptiveAveragePooling3d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_AdaptiveMaxPooling2d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_AdaptiveMaxPooling3d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_AveragePool2d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_AveragePool3d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_BatchLinearAlgebra.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_BinaryOpsKernel.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Col2Im.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_ConvolutionTranspose2d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_ConvolutionTranspose3d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Copy.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_CrossKernel.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_DilatedConvolution.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_DilatedMaxPool2d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_DilatedMaxPool3d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_DistanceKernel.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Distributions.hip.o
/root/pytorch/aten/src/ATen/native/hip/BatchLinearAlgebra.hip:962:6: warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering [-Wpass-failed=transform-warning]
void triu_tril_kernel(
^
32 warnings generated.
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Dropout.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Embedding.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_EmbeddingBackwardKernel.hip.o
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
(previous warning repeated 31 more times)
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_EmbeddingBag.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_FillKernel.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_FractionalMaxPool2d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_FractionalMaxPool3d.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_GridSampler.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_HIPScalar.hip.o
[ 48%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Im2Col.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_IndexKernel.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Indexing.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Lerp.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_LinearAlgebra.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Loss.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_LossCTC.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_MaxUnpooling.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Normalization.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_RNN.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_RangeFactories.hip.o
/root/pytorch/aten/src/ATen/native/hip/MaxUnpooling.hip:106:5: warning: 'IntList' is deprecated [-Wdeprecated-declarations]
IntList output_size) {
^
/root/pytorch/c10/util/ArrayRef.h:278:1: note: 'IntList' has been explicitly marked deprecated here
C10_DEFINE_DEPRECATED_USING(IntList, ArrayRef<int64_t>)
^
/root/pytorch/c10/util/Deprecated.h:55:77: note: expanded from macro 'C10_DEFINE_DEPRECATED_USING'
# define C10_DEFINE_DEPRECATED_USING(TypeName, TypeThingy) using TypeName [[deprecated]] = TypeThingy;
^
(18 further identical 'IntList' deprecation warnings, each with the same ArrayRef.h/Deprecated.h notes, at MaxUnpooling.hip lines 187, 197, 198, 199, 272, 273, 274, 364, 365, 366, 378, 469, 481, 482, 483, 580, 581, and 582)
19 warnings generated.
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Reduce.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_ReduceOpsKernel.hip.o
(the same 19 'IntList' deprecation warnings for MaxUnpooling.hip, with identical notes, are emitted again here; duplicates omitted)
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_ReflectionPad.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Repeat.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_ReplicationPadding.hip.o
19 warnings generated.
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Resize.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_SoftMax.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_SortingKthValue.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_SparseMM.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_SpectralOps.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_SummaryOps.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_TensorCompare.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_TensorFactories.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_TensorTransformations.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UnaryOpsKernel.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_Unique.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleBicubic2d.hip.o
In file included from /root/pytorch/aten/src/ATen/native/hip/SoftMax.hip:15:
/root/pytorch/aten/src/ATen/native/hip/PersistentSoftmax.cuh:156:17: warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering [-Wpass-failed=transform-warning]
__global__ void softmax_warp_backward(output_t *gradInput, const input_t *grad, const input_t *output, int batch_size, int stride, int element_count)
^
/root/pytorch/aten/src/ATen/native/hip/PersistentSoftmax.cuh:65:17: warning: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering [-Wpass-failed=transform-warning]
__global__ void softmax_warp_forward(output_t *dst, const input_t *src, int batch_size, int stride, int element_count)
^
96 warnings generated.
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleBilinear2d.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleLinear1d.hip.o
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
warning: <unknown>:0:0: loop not unrolled: the optimizer was unable to perform the requested transformation; the transformation might be disabled or specified as part of an unsupported transformation ordering
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleNearest1d.hip.o
[ 50%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleNearest2d.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleNearest3d.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_UpSampleTrilinear3d.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/hip/torch_generated_WeightNorm.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/sparse/hip/torch_generated_SparseHIPBlas.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/sparse/hip/torch_generated_SparseHIPTensor.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/sparse/hip/torch_generated_SparseHIPTensorMath.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/hip/torch_generated_fake_quantize_per_tensor_affine.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/core/hip/torch_generated_common_miopen.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/core/hip/torch_generated_context_gpu.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/utils/math/hip/torch_generated_broadcast.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/utils/math/hip/torch_generated_elementwise.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/utils/math/hip/torch_generated_reduce.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/utils/math/hip/torch_generated_transpose.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/utils/hip/torch_generated_math_gpu.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/image/hip/torch_generated_transform_gpu.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_abs_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_accumulate_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_accuracy_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_acos_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/abs_op.hip:2:
In file included from /root/pytorch/caffe2/operators/abs_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_affine_channel_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_arg_ops.hip.o
1 warning generated.
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_asin_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_assert_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/acos_op.hip:2:
In file included from /root/pytorch/caffe2/operators/acos_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_atan_op.hip.o
1 warning generated.
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_batch_gather_ops.hip.o
In file included from /root/pytorch/caffe2/operators/hip/asin_op.hip:2:
In file included from /root/pytorch/caffe2/operators/asin_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_batch_matmul_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_batch_moments_op.hip.o
1 warning generated.
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_boolean_mask_ops.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_boolean_unmask_ops.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_bucketize_op.hip.o
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cast_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/atan_op.hip:2:
In file included from /root/pytorch/caffe2/operators/atan_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
[ 51%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cbrt_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_ceil_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_channel_backprop_stats_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_channel_shuffle_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_channel_stats_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_clip_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_conv_op_miopen.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_conv_transpose_op_miopen.hip.o
In file included from /root/pytorch/caffe2/operators/hip/cbrt_op.hip:2:
In file included from /root/pytorch/caffe2/operators/cbrt_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_copy_op.hip.o
1 warning generated.
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cos_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cosh_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cosine_embedding_criterion_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cross_entropy_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_cube_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_data_couple_gpu.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_deform_conv_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_distance_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_dropout_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/cos_op.hip:2:
In file included from /root/pytorch/caffe2/operators/cos_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_elementwise_div_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/cosh_op.hip:2:
In file included from /root/pytorch/caffe2/operators/cosh_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_elementwise_linear_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/cube_op.hip:2:
In file included from /root/pytorch/caffe2/operators/cube_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_elementwise_mul_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_elementwise_ops.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_elu_op.hip.o
1 warning generated.
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_elu_op_miopen.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_enforce_finite_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_ensure_cpu_output_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_erf_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_filler_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/elementwise_div_op.hip:2:
In file included from /root/pytorch/caffe2/operators/elementwise_div_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/elementwise_ops.hip:2:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/elu_op.hip:2:
In file included from /root/pytorch/caffe2/operators/elu_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/elu_op_miopen.hip:1:
In file included from /root/pytorch/caffe2/operators/elu_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/elementwise_mul_op.hip:2:
In file included from /root/pytorch/caffe2/operators/elementwise_mul_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_find_op.hip.o
1 warning generated.
1 warning generated.
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_floor_op.hip.o
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_gather_op.hip.o
1 warning generated.
[ 52%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_gelu_op.hip.o
1 warning generated.
1 warning generated.
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_generate_proposals_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/erf_op.hip:2:
In file included from /root/pytorch/caffe2/operators/erf_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_generate_proposals_op_util_nms_gpu.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_given_tensor_byte_string_to_uint8_fill_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_given_tensor_fill_op.hip.o
1 warning generated.
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_glu_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_group_norm_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_gru_unit_op_gpu.hip.o
In file included from /root/pytorch/caffe2/operators/hip/gelu_op.hip:2:
In file included from /root/pytorch/caffe2/operators/gelu_op.h:8:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_half_float_ops.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_hard_sigmoid_op.hip.o
1 warning generated.
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_instance_norm_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_integral_image_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_layer_norm_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/generate_proposals_op.hip:5:
In file included from /root/pytorch/caffe2/operators/generate_proposals_op.h:7:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_leaky_relu_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_lengths_pad_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_lengths_tile_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_local_response_normalization_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/hard_sigmoid_op.hip:2:
In file included from /root/pytorch/caffe2/operators/hard_sigmoid_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
1 warning generated.
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_local_response_normalization_op_miopen.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_logit_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_loss_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_lp_pool_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_lstm_unit_op_gpu.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_margin_ranking_criterion_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_max_pool_with_index.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_mean_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/logit_op.hip:2:
In file included from /root/pytorch/caffe2/operators/logit_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_mem_query_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_minmax_ops.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_moments_op.hip.o
1 warning generated.
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_multi_class_accuracy_op.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_normalize_ops.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_one_hot_ops.hip.o
[ 53%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_pack_segments.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_pad_op_gpu.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_perplexity_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_piecewise_linear_transform_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_pool_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_pool_op_miopen.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_pow_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_prelu_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/normalize_ops.hip:6:
In file included from /root/pytorch/caffe2/operators/normalize_op.h:6:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_reciprocal_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_reduce_front_back_max_ops.hip.o
1 warning generated.
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_reduce_front_back_sum_mean_ops.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_reduce_ops.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_reduction_ops.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_relu_n_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_relu_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/pow_op.hip:8:
In file included from /root/pytorch/caffe2/operators/pow_op.h:8:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/reciprocal_op.hip:2:
In file included from /root/pytorch/caffe2/operators/reciprocal_op.h:4:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_relu_op_miopen.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_replace_nan_op.hip.o
1 warning generated.
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_resize_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_reverse_packed_segs_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_rmac_regions_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_roi_align_gradient_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_roi_align_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_roi_align_rotated_gradient_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/relu_n_op.hip:2:
In file included from /root/pytorch/caffe2/operators/relu_n_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/relu_op.hip:2:
In file included from /root/pytorch/caffe2/operators/relu_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_roi_align_rotated_op.hip.o
1 warning generated.
1 warning generated.
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_roi_pool_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_rsqrt_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/relu_op_miopen.hip:1:
In file included from /root/pytorch/caffe2/operators/relu_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_scale_blobs_op.hip.o
1 warning generated.
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_segment_reduction_op_gpu.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_selu_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sequence_ops.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sigmoid_op.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sigmoid_op_miopen.hip.o
[ 54%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sin_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sinh_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/rsqrt_op.hip:2:
In file included from /root/pytorch/caffe2/operators/rsqrt_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_slice_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_softmax_ops.hip.o
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_softplus_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_softsign_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/segment_reduction_op_gpu.hip:7:
In file included from /root/pytorch/caffe2/operators/segment_reduction_op.h:8:
In file included from /root/pytorch/caffe2/operators/reducer_functors.h:9:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/sigmoid_op.hip:2:
In file included from /root/pytorch/caffe2/operators/sigmoid_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/sigmoid_op_miopen.hip:1:
In file included from /root/pytorch/caffe2/operators/sigmoid_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_space_batch_op_gpu.hip.o
In file included from /root/pytorch/caffe2/operators/hip/sin_op.hip:2:
In file included from /root/pytorch/caffe2/operators/sin_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sparse_normalize_op_gpu.hip.o
1 warning generated.
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_sparse_to_dense_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/sinh_op.hip:2:
In file included from /root/pytorch/caffe2/operators/sinh_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_spatial_batch_norm_op.hip.o
1 warning generated.
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_spatial_batch_norm_op_miopen.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_stump_func_op.hip.o
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_summarize_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_swish_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_tan_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/softsign_op.hip:2:
In file included from /root/pytorch/caffe2/operators/softsign_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_tanh_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_tanh_op_miopen.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_thresholded_relu_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_tile_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_top_k.hip.o
In file included from /root/pytorch/caffe2/operators/hip/spatial_batch_norm_op.hip:1:
In file included from /root/pytorch/caffe2/operators/spatial_batch_norm_op.h:12:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/spatial_batch_norm_op_miopen.hip:20:
In file included from /root/pytorch/caffe2/operators/spatial_batch_norm_op.h:12:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/swish_op.hip:2:
In file included from /root/pytorch/caffe2/operators/swish_op.h:4:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/tan_op.hip:2:
In file included from /root/pytorch/caffe2/operators/tan_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_transpose_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_unique_ops.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_upsample_op.hip.o
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_utility_ops.hip.o
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/hip/torch_generated_weighted_sample_op.hip.o
In file included from /root/pytorch/caffe2/operators/hip/tanh_op_miopen.hip:1:
In file included from /root/pytorch/caffe2/operators/tanh_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
In file included from /root/pytorch/caffe2/operators/hip/tanh_op.hip:2:
In file included from /root/pytorch/caffe2/operators/tanh_op.h:6:
In file included from /root/pytorch/caffe2/operators/elementwise_ops.h:15:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/rnn/hip/torch_generated_recurrent_network_op_gpu.hip.o
1 warning generated.
1 warning generated.
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/operators/rnn/hip/torch_generated_recurrent_op_miopen.hip.o
In file included from /root/pytorch/caffe2/operators/hip/tile_op.hip:2:
In file included from /root/pytorch/caffe2/operators/tile_op.h:13:
In file included from /root/pytorch/caffe2/utils/eigen_utils.h:6:
In file included from /root/pytorch/cmake/../third_party/eigen/Eigen/Core:202:
/root/pytorch/cmake/../third_party/eigen/Eigen/src/Core/arch/GPU/PacketMathHalf.h:149:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_adadelta_op_gpu.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_adagrad_op_gpu.hip.o
1 warning generated.
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_adam_op_gpu.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_fp16_momentum_sgd_op.hip.o
[ 55%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_fp32_momentum_sgd_op.hip.o
[ 56%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_lars_op_gpu.hip.o
[ 56%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_momentum_sgd_op_gpu.hip.o
[ 56%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_rmsprop_op_gpu.hip.o
[ 56%] Building HIPCC object caffe2/CMakeFiles/torch.dir/sgd/hip/torch_generated_yellowfin_op_gpu.hip.o
Scanning dependencies of target torch
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/CPUGenerator.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/Context.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/DLConvertor.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/DynamicLibrary.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/Dimname.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/MemoryOverlap.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/ExpandUtils.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/NamedTensorUtils.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/NamedTensor.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/ParallelCommon.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/ParallelNative.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/ParallelNativeTBB.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/ParallelOpenMP.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/ParallelThreadPoolNative.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/SparseTensorImpl.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/TensorGeometry.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/TensorUtils.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/Utils.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/Version.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/cpu/FlushDenormal.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/detail/CPUGuardImpl.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/detail/CUDAHooksInterface.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/detail/HIPHooksInterface.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/ATenDispatch.cpp.o
/root/pytorch/aten/src/ATen/TensorUtils.cpp:344:18: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'unsigned long' [-Wsign-compare]
if (view_d == newshape.size() - 1) {
 ~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/ATenGeneral.cpp.o
[ 56%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/DeprecatedTypeProperties.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/DeprecatedTypePropertiesRegistry.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/Formatting.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/Generator.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/LegacyDeviceTypeInit.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/LegacyTypeDispatch.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/Range.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/Tensor.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/blob.cpp.o
1 warning generated.
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/context_base.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/dispatch/Dispatcher.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/dispatch/OperatorEntry.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/grad_mode.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/interned_strings.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/ivalue.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/op_registration/infer_schema.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/op_registration/op_registration.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/register_symbols.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/core/type.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/script/error_report.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/script/function_schema_parser.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/script/lexer.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/script/strtod.cpp.o
/root/pytorch/torch/csrc/jit/script/strtod.cpp:41:8: warning: unused function 'parse_inf_or_nan' [-Wunused-function]
double parse_inf_or_nan(const char *p, char **endptr)
 ^
1 warning generated.
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/script/schema_type_parser.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/torch/csrc/jit/source_range.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Activation.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AdaptiveAveragePooling.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AdaptiveAveragePooling3d.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AdaptiveMaxPooling2d.cpp.o
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AdaptiveMaxPooling3d.cpp.o
/root/pytorch/aten/src/ATen/core/op_registration/infer_schema.cpp:7:15: warning: unused function 'serialize_schema' [-Wunused-function]
std::string serialize_schema(const FunctionSchema& schema) {
 ^
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AffineGridGenerator.cpp.o
1 warning generated.
[ 57%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AveragePool2d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/AveragePool3d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/BatchLinearAlgebra.cpp.o
/root/pytorch/aten/src/ATen/native/Activation.cpp:120:11: warning: unused variable 'input_numel' [-Wunused-variable]
int64_t input_numel = input.numel();
 ^
/root/pytorch/aten/src/ATen/native/Activation.cpp:244:11: warning: unused variable 'input_numel' [-Wunused-variable]
int64_t input_numel = input.numel();
 ^
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/BinaryOps.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Col2Im.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ConstantPadNd.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:7:
/root/pytorch/aten/src/ATen/native/LinearAlgebraUtils.h:146:24: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'long' [-Wsign-compare]
for (size_t i = 0; i < batch_size; i++) {
 ~ ^ ~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/LinearAlgebraUtils.h:217:27: warning: comparison of integers of different signs: 'std::vector::size_type' (aka 'unsigned long') and 'const int64_t' (aka 'const long') [-Wsign-compare]
TORCH_CHECK(perm.size() == ndim,
 ~~~~~~~~~~~ ^ ~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
In file included from /root/pytorch/aten/src/ATen/native/Col2Im.cpp:8:
/root/pytorch/aten/src/ATen/native/im2col_shape_check.h:169:11: warning: unused variable 'n_input_plane' [-Wunused-variable]
int64_t n_input_plane = input.size(dim_batch + 1);
 ^
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Convolution.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ConvolutionTBC.cpp.o
2 warnings generated.
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ConvolutionTranspose2d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ConvolutionTranspose3d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Copy.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Cross.cpp.o
1 warning generated.
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/DilatedConvolution.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/DilatedMaxPool2d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/DilatedMaxPool3d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/DispatchStub.cpp.o
/root/pytorch/aten/src/ATen/native/ConstantPadNd.cpp:14:23: warning: comparison of integers of different signs: 'long' and 'unsigned long' [-Wsign-compare]
TORCH_CHECK(l_inp >= l_pad, "Length of pad should be no more than twice the number of "
 ~~~~~ ^ ~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/ConstantPadNd.cpp:44:23: warning: comparison of integers of different signs: 'int' and 'unsigned long' [-Wsign-compare]
for (int i = 0; i < l_diff; i ++) {
 ~ ^ ~~~~~~
/root/pytorch/aten/src/ATen/native/ConstantPadNd.cpp:48:23: warning: comparison of integers of different signs: 'int' and 'unsigned long' [-Wsign-compare]
for (int i = 0; i < l_pad; i++) {
 ~ ^ ~~~~~
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Distance.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Distributions.cpp.o
3 warnings generated.
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Dropout.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Embedding.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/EmbeddingBag.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Fill.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/FractionalMaxPool2d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/FractionalMaxPool3d.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/GridSampler.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Im2Col.cpp.o
2 warnings generated.
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Indexing.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Integration.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Itertools.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/LegacyBridge.cpp.o
/root/pytorch/aten/src/ATen/native/EmbeddingBag.cpp:88:26: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t i = 1; i < offsets.numel(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/EmbeddingBag.cpp:177:26: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t i = 1; i < offsets.numel(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/LegacyDefinitions.cpp.o
[ 58%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/LegacyNNDefinitions.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/Im2Col.cpp:8:
/root/pytorch/aten/src/ATen/native/im2col_shape_check.h:169:11: warning: unused variable 'n_input_plane' [-Wunused-variable]
int64_t n_input_plane = input.size(dim_batch + 1);
 ^
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Lerp.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Linear.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/Indexing.cpp:52:
/root/pytorch/aten/src/ATen/native/IndexingUtils.h:88:1: warning: unused function 'transposeToFrontAndInvPerm' [-Wunused-function]
transposeToFrontAndInvPerm(Tensor self, TensorList indices) {
^
1 warning generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/LinearAlgebra.cpp.o
/root/pytorch/aten/src/ATen/native/LegacyBridge.cpp:8:15: warning: unused function '_has_native' [-Wunused-function]
static bool _has_native(const Tensor& self) {
 ^
/root/pytorch/aten/src/ATen/native/GridSampler.cpp:92:22: warning: function 'within_bounds_2d' is not needed and will not be emitted [-Wunneeded-internal-declaration]
static inline bool within_bounds_2d(int64_t h, int64_t w, int64_t H, int64_t W) {
 ^
1 warning generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Loss.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/LossCTC.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/MaxUnpooling.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Memory.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/NNPACK.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/NamedTensor.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Normalization.cpp.o
2 warnings generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Onehot.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/PackedSequence.cpp.o
1 warning generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/PixelShuffle.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/LinearAlgebra.cpp:5:
/root/pytorch/aten/src/ATen/native/LinearAlgebraUtils.h:146:24: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'long' [-Wsign-compare]
for (size_t i = 0; i < batch_size; i++) {
 ~ ^ ~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/LinearAlgebraUtils.h:217:27: warning: comparison of integers of different signs: 'std::vector::size_type' (aka 'unsigned long') and 'const int64_t' (aka 'const long') [-Wsign-compare]
TORCH_CHECK(perm.size() == ndim,
 ~~~~~~~~~~~ ^ ~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
1 warning generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Pooling.cpp.o
/root/pytorch/aten/src/ATen/native/MaxUnpooling.cpp:103:13: warning: unused variable 'numBatch' [-Wunused-variable]
int64_t numBatch = 1;
 ^
/root/pytorch/aten/src/ATen/native/LinearAlgebra.cpp:590:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'unsigned long' [-Wsign-compare]
for (int64_t i = 0; i < n; i++) {
 ~ ^ ~
/root/pytorch/aten/src/ATen/native/LinearAlgebra.cpp:607:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'unsigned long' [-Wsign-compare]
for (int64_t l = 1; l < n; l++) {
 ~ ^ ~
/root/pytorch/aten/src/ATen/native/LinearAlgebra.cpp:608:29: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'unsigned long' [-Wsign-compare]
for (int64_t i = 0; i < n - l; i++) {
 ~ ^ ~~~~~
/root/pytorch/aten/src/ATen/native/LossCTC.cpp:356:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (int64_t b = 0; b < input_lengths.size(); b++) {
 ~ ^ ~~~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/LossCTC.cpp:359:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (int64_t b = 0; b < target_lengths.size(); b++) {
 ~ ^ ~~~~~~~~~~~~~~~~~~~~~
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/QuantizedLinear.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/RNN.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/RangeFactories.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ReduceOps.cpp.o
/root/pytorch/aten/src/ATen/native/NNPACK.cpp:129:15: warning: unused variable 'input_channels_dim' [-Wunused-const-variable]
constexpr int input_channels_dim = 1;
 ^
/root/pytorch/aten/src/ATen/native/NNPACK.cpp:137:15: warning: unused variable 'weight_input_channels_dim' [-Wunused-const-variable]
constexpr int weight_input_channels_dim = 1;
 ^
/root/pytorch/aten/src/ATen/native/NNPACK.cpp:138:15: warning: unused variable 'weight_height_dim' [-Wunused-const-variable]
constexpr int weight_height_dim = 2;
 ^
/root/pytorch/aten/src/ATen/native/NNPACK.cpp:139:15: warning: unused variable 'weight_width_dim' [-Wunused-const-variable]
constexpr int weight_width_dim = 3;
 ^
/root/pytorch/aten/src/ATen/native/NNPACK.cpp:142:15: warning: unused variable 'max_dim' [-Wunused-const-variable]
constexpr int max_dim = 3;
 ^
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ReflectionPad.cpp.o
1 warning generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Repeat.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/ReplicationPadding.cpp.o
5 warnings generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Resize.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/QuantizedLinear.cpp:8:
/root/pytorch/third_party/fbgemm/include/fbgemm/FbgemmFP16.h:134:24: warning: comparison of integers of different signs: 'uint64_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
((block_row_id != nbrow_ - 1) ? (blockRowSize() * blockColSize())
 ~~~~~~~~~~~~ ^ ~~~~~~~~~~
In file included from /root/pytorch/aten/src/ATen/native/QuantizedLinear.cpp:9:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/QuantizedLinear.cpp:169:24: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (size_t i = 0; i < N; ++i) {
 ~ ^ ~
/root/pytorch/aten/src/ATen/native/QuantizedLinear.cpp:171:26: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (size_t j = 0; j < K; ++j) {
 ~ ^ ~
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Scalar.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/SobolEngineOps.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/QuantizedLinear.cpp:7:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:172:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i += 2) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:68: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::pair_vec<at::Tensor>' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:785:18: note: expanded from macro 'ONE_HIDDEN_RNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:172:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i += 2) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:87: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::pair_vec<at::native::(anonymous namespace)::CellParams>' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:785:18: note: expanded from macro 'ONE_HIDDEN_RNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::pair<at::Tensor, at::Tensor>, std::pair<at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::CellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:785:18: note: expanded from macro 'ONE_HIDDEN_RNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:183:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i++) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:699:35: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::unpair_vec<at::Tensor>' requested here
return {bidir_result.outputs, unpair_vec(std::move(bidir_result.final_hidden))};
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:785:18: note: expanded from macro 'ONE_HIDDEN_RNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, at::Tensor, at::native::(anonymous namespace)::CellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:785:18: note: expanded from macro 'ONE_HIDDEN_RNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::native::(anonymous namespace)::PackedSequence, std::pair<at::Tensor, at::Tensor>, std::pair<at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::CellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:817:17: note: expanded from macro 'ONE_HIDDEN_RNN'
auto result = _rnn_impl_with_concat<CELL, PackedLayer, PackedBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::native::(anonymous namespace)::PackedSequence, at::Tensor, at::native::(anonymous namespace)::CellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:875:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::CellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
ONE_HIDDEN_RNN(gru, GRUCell<CellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:817:17: note: expanded from macro 'ONE_HIDDEN_RNN'
auto result = _rnn_impl_with_concat<CELL, PackedLayer, PackedBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
2 warnings generated.
/root/pytorch/aten/src/ATen/native/RNN.cpp:172:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i += 2) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:87: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::pair_vec<at::native::(anonymous namespace)::QuantizedCellParams>' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:876:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
ONE_HIDDEN_QRNN(quantized_gru, GRUCell<QuantizedCellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:843:18: note: expanded from macro 'ONE_HIDDEN_QRNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/SoftMax.cpp.o
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::pair<at::Tensor, at::Tensor>, std::pair<at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::QuantizedCellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:876:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
ONE_HIDDEN_QRNN(quantized_gru, GRUCell<QuantizedCellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:843:18: note: expanded from macro 'ONE_HIDDEN_QRNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, at::Tensor, at::native::(anonymous namespace)::QuantizedCellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:876:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
ONE_HIDDEN_QRNN(quantized_gru, GRUCell<QuantizedCellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:843:18: note: expanded from macro 'ONE_HIDDEN_QRNN'
auto results = _rnn_impl_with_concat<CELL, FullLayer, FullBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::native::(anonymous namespace)::PackedSequence, std::pair<at::Tensor, at::Tensor>, std::pair<at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::QuantizedCellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:876:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
ONE_HIDDEN_QRNN(quantized_gru, GRUCell<QuantizedCellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:869:17: note: expanded from macro 'ONE_HIDDEN_QRNN'
auto result = _rnn_impl_with_concat<CELL, PackedLayer, PackedBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::native::(anonymous namespace)::PackedSequence, at::Tensor, at::native::(anonymous namespace)::QuantizedCellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:711:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _rnn_impl<CellType, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:876:1: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl_with_concat<at::native::(anonymous namespace)::GRUCell<at::native::(anonymous namespace)::QuantizedCellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
ONE_HIDDEN_QRNN(quantized_gru, GRUCell<QuantizedCellParams>)
^
/root/pytorch/aten/src/ATen/native/RNN.cpp:869:17: note: expanded from macro 'ONE_HIDDEN_QRNN'
auto result = _rnn_impl_with_concat<CELL, PackedLayer, PackedBidirectionalLayer>( \
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:172:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i += 2) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:68: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::pair_vec<std::tuple<at::Tensor, at::Tensor> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:912:18: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::pair<std::tuple<at::Tensor, at::Tensor>, std::tuple<at::Tensor, at::Tensor> >, std::pair<at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::CellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:912:18: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:183:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i++) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:699:35: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::unpair_vec<std::tuple<at::Tensor, at::Tensor> >' requested here
return {bidir_result.outputs, unpair_vec(std::move(bidir_result.final_hidden))};
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:912:18: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::tuple<at::Tensor, at::Tensor>, at::native::(anonymous namespace)::CellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::CellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:912:18: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::Tensor>' requested here
auto results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::native::(anonymous namespace)::PackedSequence, std::pair<std::tuple<at::Tensor, at::Tensor>, std::tuple<at::Tensor, at::Tensor> >, std::pair<at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::CellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::CellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:941:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _lstm_impl<PackedLayer, PackedBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::native::(anonymous namespace)::PackedSequence, std::tuple<at::Tensor, at::Tensor>, at::native::(anonymous namespace)::CellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::CellParams>, PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:941:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<PackedLayer, PackedBidirectionalLayer, at::native::(anonymous namespace)::CellParams, at::native::(anonymous namespace)::PackedSequence>' requested here
auto result = _lstm_impl<PackedLayer, PackedBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::pair<std::tuple<at::Tensor, at::Tensor>, std::tuple<at::Tensor, at::Tensor> >, std::pair<at::native::(anonymous namespace)::QuantizedCellParams, at::native::(anonymous namespace)::QuantizedCellParams> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:1000:15: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::tuple<at::Tensor, at::Tensor>, at::native::(anonymous namespace)::QuantizedCellParams>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::QuantizedCellParams>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:1000:15: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParams, at::Tensor>' requested here
results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:172:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < vals.size(); i += 2) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:87: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::pair_vec<at::native::(anonymous namespace)::QuantizedCellParamsFP16>' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::QuantizedCellParamsFP16>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:1005:15: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::Tensor>' requested here
results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
In file included from /root/pytorch/aten/src/ATen/native/ReduceOps.cpp:9:
/root/pytorch/aten/src/ATen/native/ReduceOpsUtils.h:32:13: warning: unused function '_dimreduce_return_trivial_no_ident' [-Wunused-function]
static bool _dimreduce_return_trivial_no_ident(Tensor &result, const Tensor &self,
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:698:25: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::pair<std::tuple<at::Tensor, at::Tensor>, std::tuple<at::Tensor, at::Tensor> >, std::pair<at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::native::(anonymous namespace)::QuantizedCellParamsFP16> >' requested here
auto bidir_result = apply_layer_stack(BidirLayer{cell}, input, pair_vec(hiddens), pair_vec(params), num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::QuantizedCellParamsFP16>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:1005:15: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::Tensor>' requested here
results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:664:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == hiddens.size(), "Expected more hidden states in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/RNN.cpp:701:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::apply_layer_stack<at::Tensor, std::tuple<at::Tensor, at::Tensor>, at::native::(anonymous namespace)::QuantizedCellParamsFP16>' requested here
return apply_layer_stack(LayerT<hidden_type,cell_params>{cell}, input, hiddens, params, num_layers, dropout_p, train);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:731:17: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_rnn_impl<at::native::(anonymous namespace)::LSTMCell<at::native::(anonymous namespace)::QuantizedCellParamsFP16>, FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::Tensor>' requested here
auto result = _rnn_impl<LSTMCell<cell_params>, LayerT, BidirLayerT>(input, params, hiddens, num_layers, dropout_p, train, bidirectional);
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:1005:15: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_lstm_impl<FullLayer, FullBidirectionalLayer, at::native::(anonymous namespace)::QuantizedCellParamsFP16, at::Tensor>' requested here
results = _lstm_impl<FullLayer, FullBidirectionalLayer>(
 ^
/root/pytorch/aten/src/ATen/native/RNN.cpp:665:26: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(num_layers == weights.size(), "Expected more weights in stacked_rnn");
 ~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
5 warnings generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Sorting.cpp.o
/root/pytorch/aten/src/ATen/native/ReplicationPadding.cpp:572:11: warning: unused variable 'nslices' [-Wunused-variable]
int64_t nslices = input.size(dimslices);
 ^
6 warnings generated.
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/SpectralOps.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/SummaryOps.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorCompare.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorConversions.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorFactories.cpp.o
[ 59%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorIterator.cpp.o
1 warning generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorIteratorReduce.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorProperties.cpp.o
/root/pytorch/aten/src/ATen/native/Sorting.cpp:42:26: warning: unused variable 'swap' [-Wunused-variable]
int64_t P, L, R, i, j, swap;
 ^
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorShape.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TensorTransformations.cpp.o
1 warning generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/TypeProperties.cpp.o
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<unsigned char, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<signed char, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<double, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<float, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<int, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<long, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:138:11: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<short, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
quick_select_template(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:126:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:125:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/aten/src/ATen/native/Sorting.cpp:7:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<unsigned char, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<signed char, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<double, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<float, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<int, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<long, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:43:12: warning: unused variable 'rswap' [-Wunused-variable]
scalar_t rswap, piv;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:242:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::quick_select_template<short, (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3), (lambda at /root/pytorch/aten/src/ATen/native/Sorting.cpp:238:3)>' requested here
quick_select_template(
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:33:19: warning: unused variable 'MAX_LEVELS' [-Wunused-const-variable]
constexpr int64_t MAX_LEVELS = 300;
 ^
/root/pytorch/aten/src/ATen/native/Sorting.cpp:34:19: warning: unused variable 'M_SMALL' [-Wunused-const-variable]
constexpr int64_t M_SMALL = 10; // Limit for small subfiles
 ^
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UnaryOps.cpp.o
/root/pytorch/aten/src/ATen/native/TensorIterator.cpp:349:25: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
AT_ASSERT(perm.size() == ndim());
 ~~~~~~~~~~~ ^ ~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/TensorIterator.cpp:440:25: warning: comparison of integers of different signs: 'c10::SmallVectorTemplateCommon::size_type' (aka 'unsigned long') and 'int' [-Wsign-compare]
while (strides.size() < 2 * ntensors()) {
 ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/Unique.cpp.o
/root/pytorch/aten/src/ATen/native/TensorIteratorReduce.cpp:137:23: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < non_reduced_shape.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /root/pytorch/aten/src/ATen/native/TensorCompare.cpp:6:
/root/pytorch/aten/src/ATen/native/ReduceOpsUtils.h:15:13: warning: unused function '_dimreduce_return_trivial' [-Wunused-function]
static bool _dimreduce_return_trivial(Tensor &result, const Tensor &self,
 ^
/root/pytorch/aten/src/ATen/native/ReduceOpsUtils.h:47:30: warning: unused function '_allreduce_return_trivial' [-Wunused-function]
static c10::optional<Tensor> _allreduce_return_trivial(
 ^
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleBicubic2d.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleBilinear2d.cpp.o
/root/pytorch/aten/src/ATen/native/TensorShape.cpp:27:24: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t i = 0; i < shape_tensor.numel(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/TensorShape.cpp:60:25: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < s1.size(); ++i) {
 ~ ^ ~~~~~~~~~
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleLinear1d.cpp.o
1 warning generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleNearest1d.cpp.o
38 warnings generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleNearest2d.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleNearest3d.cpp.o
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<unsigned char>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<signed char>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<double>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<float>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<int>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<long>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
/root/pytorch/aten/src/ATen/native/Unique.cpp:208:27: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < indices.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/Unique.cpp:256:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::_unique_dim_cpu_template<short>' requested here
return _unique_dim_cpu_template<scalar_t>(self, dim, false, return_inverse, return_counts);
 ^
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/UpSampleTrilinear3d.cpp.o
2 warnings generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/VariableMethodStubs.cpp.o
/root/pytorch/aten/src/ATen/native/UnaryOps.cpp:144:13: warning: unused function 'propagate_names_if_namedtensor_enabled' [-Wunused-function]
static void propagate_names_if_namedtensor_enabled(Tensor& result, const Tensor& src) {
 ^
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/WeightNorm.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/layer_norm.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/sparse/SparseTensor.cpp.o
39 warnings generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/sparse/SparseTensorMath.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/Copy.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/QTensor.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/TensorCompare.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/TensorFactories.cpp.o
2 warnings generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/fake_quantize_per_tensor_affine.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/init_qnnpack.cpp.o
1 warning generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/q_avgpool.cpp.o
2 warnings generated.
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qadd.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qconcat.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qconv.cpp.o
/root/pytorch/aten/src/ATen/native/sparse/SparseTensor.cpp:204:27: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'long' [-Wsign-compare]
TORCH_CHECK(size.size() == sparse_dim + dense_dim,
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/native/sparse/SparseTensorMath.cpp:879:17: warning: unused variable 'dense_dim' [-Wunused-variable]
const int64_t dense_dim = input.dense_dim();
 ^
/root/pytorch/aten/src/ATen/native/sparse/SparseTensorMath.cpp:917:29: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int64_t i = 0; i < dims_to_keep_v.size(); i++) {
 ~ ^ ~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/sparse/SparseTensorMath.cpp:1025:42: warning: comparison of integers of different signs: 'std::vector::size_type' (aka 'unsigned long') and 'long' [-Wsign-compare]
AT_ASSERT(dense_expand_size.size() == (input_values.dim() - 1));
 ~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp.o
[ 60%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qconv_unpack.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qlinear.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qlinear_unpack.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qnnpack_fc.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/quantized/TensorCompare.cpp:6:
/root/pytorch/aten/src/ATen/native/ReduceOpsUtils.h:15:13: warning: unused function '_dimreduce_return_trivial' [-Wunused-function]
static bool _dimreduce_return_trivial(Tensor &result, const Tensor &self,
 ^
/root/pytorch/aten/src/ATen/native/ReduceOpsUtils.h:32:13: warning: unused function '_dimreduce_return_trivial_no_ident' [-Wunused-function]
static bool _dimreduce_return_trivial_no_ident(Tensor &result, const Tensor &self,
 ^
/root/pytorch/aten/src/ATen/native/ReduceOpsUtils.h:47:30: warning: unused function '_allreduce_return_trivial' [-Wunused-function]
static c10::optional<Tensor> _allreduce_return_trivial(
 ^
3 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qnnpack_relu.cpp.o
8 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qpool.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:5:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/quantized/cpu/qrelu.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:5:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconcat.cpp:2:
In file included from /root/pytorch/aten/src/ATen/core/op_registration/op_registration.h:8:
In file included from /root/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:3:
In file included from /root/pytorch/aten/src/ATen/core/dispatch/OperatorEntry.h:3:
In file included from /root/pytorch/aten/src/ATen/core/dispatch/DispatchTable.h:3:
In file included from /root/pytorch/aten/src/ATen/core/function_schema.h:3:
In file included from /root/pytorch/aten/src/ATen/core/jit_type.h:6:
In file included from /root/pytorch/aten/src/ATen/core/ivalue.h:569:
/root/pytorch/aten/src/ATen/core/ivalue_inl.h:473:10: warning: 'generic_to<at::Tensor>' is deprecated [-Wdeprecated-declarations]
return generic_to(std::move(*this), _fake_type<T>{});
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:167:25: note: in instantiation of function template specialization 'c10::IValue::to<std::vector<at::Tensor, std::allocator<at::Tensor> > >' requested here
return std::move(v).to<T>();
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:183:23: note: in instantiation of function template specialization 'c10::detail::ivalue_to_arg<std::vector<at::Tensor, std::allocator<at::Tensor> >, false>' requested here
return (*functor)(ivalue_to_arg<guts::remove_cv_t<guts::remove_reference_t<guts::typelist::element_t<ivalue_arg_indices, IValueArgTypes>>>, AllowDeprecatedTypes>(
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:191:12: note: in instantiation of function template specialization 'c10::detail::call_functor_with_args_from_stack_<at::native::(anonymous namespace)::QCat<false>, false, 0, 1, 2, 3>' requested here
return call_functor_with_args_from_stack_<Functor, AllowDeprecatedTypes>(functor, stack, guts::make_index_sequence<num_ivalue_args>());
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:223:21: note: in instantiation of function template specialization 'c10::detail::call_functor_with_args_from_stack<at::native::(anonymous namespace)::QCat<false>, false>' requested here
auto output = call_functor_with_args_from_stack<KernelFunctor, AllowDeprecatedTypes>(functor, stack);
 ^
/root/pytorch/aten/src/ATen/core/op_registration/op_registration.h:276:76: note: in instantiation of member function 'c10::detail::wrap_kernel_functor<at::native::(anonymous namespace)::QCat<false>, false, void>::call' requested here
&detail::wrap_kernel_functor<KernelFunctor, AllowDeprecatedTypes>::call,
 ^
/root/pytorch/aten/src/ATen/core/op_registration/op_registration.h:103:31: note: in instantiation of function template specialization 'c10::RegisterOperators::Options::kernelFunctor<at::native::(anonymous namespace)::QCat<false>, false>' requested here
return std::move(*this).kernelFunctor<KernelFunctor, false>(dispatch_key, std::forward<ConstructorParameters>(constructorParameters)...);
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/qconcat.cpp:78:8: note: in instantiation of function template specialization 'c10::RegisterOperators::Options::kernel<at::native::(anonymous namespace)::QCat<false>>' requested here
.kernel<QCat<false>>(QuantizedCPUTensorId()))
 ^
/root/pytorch/aten/src/ATen/core/ivalue_inl.h:417:1: note: 'generic_to<at::Tensor>' has been explicitly marked deprecated here
C10_DEPRECATED_MESSAGE("IValues based on std::vector<T> are potentially slow and deprecated. Please use c10::List<T> instead.")
^
/root/pytorch/c10/util/Deprecated.h:32:57: note: expanded from macro 'C10_DEPRECATED_MESSAGE'
# define C10_DEPRECATED_MESSAGE(message) __attribute__((deprecated))
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv_unpack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:5:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_unpack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:5:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:4:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:4:
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:32:13: warning: unused function 'convert_uint8_int8' [-Wunused-function]
static void convert_uint8_int8(
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:42:13: warning: unused function 'convert_int8_uint8' [-Wunused-function]
static void convert_int8_uint8(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:5:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
1 warning generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkl/LinearAlgebra.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:4:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv_prepack.cpp:4:
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:32:13: warning: unused function 'convert_uint8_int8' [-Wunused-function]
static void convert_uint8_int8(
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:42:13: warning: unused function 'convert_int8_uint8' [-Wunused-function]
static void convert_int8_uint8(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:5:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp:34:26: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (size_t i = 0; i < N; ++i) {
 ~ ^ ~
/root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp:36:28: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (size_t j = 0; j < K; ++j) {
 ~ ^ ~
/root/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack_fc.cpp:30:26: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'long' [-Wsign-compare]
for (size_t i = 0; i < input_contig.dim() - 1; ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~~~~~
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_unpack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:4:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_unpack.cpp:4:
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:32:13: warning: unused function 'convert_uint8_int8' [-Wunused-function]
static void convert_uint8_int8(
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:42:13: warning: unused function 'convert_int8_uint8' [-Wunused-function]
static void convert_int8_uint8(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv_unpack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:4:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qconv_unpack.cpp:4:
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:32:13: warning: unused function 'convert_uint8_int8' [-Wunused-function]
static void convert_uint8_int8(
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:42:13: warning: unused function 'convert_int8_uint8' [-Wunused-function]
static void convert_int8_uint8(
 ^
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkl/SpectralOps.cpp.o
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:4:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear.cpp:4:
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:32:13: warning: unused function 'convert_uint8_int8' [-Wunused-function]
static void convert_uint8_int8(
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:42:13: warning: unused function 'convert_int8_uint8' [-Wunused-function]
static void convert_int8_uint8(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp:4:
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:4:
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1431:14: warning: unused function 'fbgemmAlignedAlloc' [-Wunused-function]
static void* fbgemmAlignedAlloc(size_t __align, size_t __size) {
 ^
/root/pytorch/third_party/fbgemm/include/fbgemm/Fbgemm.h:1460:13: warning: unused function 'fbgemmGetRange' [-Wunused-function]
static void fbgemmGetRange(
 ^
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qlinear_prepack.cpp:4:
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:32:13: warning: unused function 'convert_uint8_int8' [-Wunused-function]
static void convert_uint8_int8(
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/fbgemm_utils.h:42:13: warning: unused function 'convert_int8_uint8' [-Wunused-function]
static void convert_int8_uint8(
 ^
5 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/BinaryOps.cpp.o
5 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/Conv.cpp.o
3 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/IDeepRegistration.cpp.o
5 warnings generated.
In file included from /root/pytorch/aten/src/ATen/native/quantized/cpu/qpool.cpp:3:
In file included from /root/pytorch/aten/src/ATen/Parallel.h:3:
In file included from /root/pytorch/aten/src/ATen/core/ivalue.h:569:
/root/pytorch/aten/src/ATen/core/ivalue_inl.h:473:10: warning: 'generic_to<long>' is deprecated [-Wdeprecated-declarations]
return generic_to(std::move(*this), _fake_type<T>{});
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:167:25: note: in instantiation of function template specialization 'c10::IValue::to<std::vector<long, std::allocator<long> > >' requested here
return std::move(v).to<T>();
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:183:23: note: in instantiation of function template specialization 'c10::detail::ivalue_to_arg<std::vector<long, std::allocator<long> >, false>' requested here
return (*functor)(ivalue_to_arg<guts::remove_cv_t<guts::remove_reference_t<guts::typelist::element_t<ivalue_arg_indices, IValueArgTypes>>>, AllowDeprecatedTypes>(
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:191:12: note: in instantiation of function template specialization 'c10::detail::call_functor_with_args_from_stack_<at::native::(anonymous namespace)::QMaxPool2D_arr_args, false, 0, 1, 2, 3, 4>' requested here
return call_functor_with_args_from_stack_<Functor, AllowDeprecatedTypes>(functor, stack, guts::make_index_sequence<num_ivalue_args>());
 ^
/root/pytorch/aten/src/ATen/core/op_registration/kernel_functor.h:223:21: note: in instantiation of function template specialization 'c10::detail::call_functor_with_args_from_stack<at::native::(anonymous namespace)::QMaxPool2D_arr_args, false>' requested here
auto output = call_functor_with_args_from_stack<KernelFunctor, AllowDeprecatedTypes>(functor, stack);
 ^
/root/pytorch/aten/src/ATen/core/op_registration/op_registration.h:276:76: note: in instantiation of member function 'c10::detail::wrap_kernel_functor<at::native::(anonymous namespace)::QMaxPool2D_arr_args, false, void>::call' requested here
&detail::wrap_kernel_functor<KernelFunctor, AllowDeprecatedTypes>::call,
 ^
/root/pytorch/aten/src/ATen/core/op_registration/op_registration.h:103:31: note: in instantiation of function template specialization 'c10::RegisterOperators::Options::kernelFunctor<at::native::(anonymous namespace)::QMaxPool2D_arr_args, false>' requested here
return std::move(*this).kernelFunctor<KernelFunctor, false>(dispatch_key, std::forward<ConstructorParameters>(constructorParameters)...);
 ^
/root/pytorch/aten/src/ATen/native/quantized/cpu/qpool.cpp:198:6: note: in instantiation of function template specialization 'c10::RegisterOperators::Options::kernel<at::native::(anonymous namespace)::QMaxPool2D_arr_args>' requested here
.kernel<QMaxPool2D_arr_args>(QuantizedCPUTensorId()));
 ^
/root/pytorch/aten/src/ATen/core/ivalue_inl.h:417:1: note: 'generic_to<long>' has been explicitly marked deprecated here
C10_DEPRECATED_MESSAGE("IValues based on std::vector<T> are potentially slow and deprecated. Please use c10::List<T> instead.")
^
/root/pytorch/c10/util/Deprecated.h:32:57: note: expanded from macro 'C10_DEPRECATED_MESSAGE'
# define C10_DEPRECATED_MESSAGE(message) __attribute__((deprecated))
 ^
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/Linear.cpp.o
7 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/MKLDNNCommon.cpp.o
1 warning generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/MKLDNNConversions.cpp.o
1 warning generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/MkldnnTensorMath.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/Normalization.cpp.o
5 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/Pooling.cpp.o
5 warnings generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/Relu.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/SoftMax.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/TensorFactories.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/TensorShape.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/UnaryOps.cpp.o
1 warning generated.
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/mkldnn/Utils.cpp.o
/root/pytorch/aten/src/ATen/native/mkldnn/Pooling.cpp:172:3: warning: 'deprecated_AT_CHECK' is deprecated [-Wdeprecated-declarations]
AT_CHECK(!divisor_override.has_value(),
 ^
/root/pytorch/c10/util/Exception.h:339:20: note: expanded from macro 'AT_CHECK'
::c10::detail::deprecated_AT_CHECK(); \
 ^
/root/pytorch/c10/util/Exception.h:313:1: note: 'deprecated_AT_CHECK' has been explicitly marked deprecated here
C10_DEPRECATED_MESSAGE("AT_CHECK is deprecated, use TORCH_CHECK instead.")
^
/root/pytorch/c10/util/Deprecated.h:32:57: note: expanded from macro 'C10_DEPRECATED_MESSAGE'
# define C10_DEPRECATED_MESSAGE(message) __attribute__((deprecated))
 ^
/root/pytorch/aten/src/ATen/native/mkldnn/Pooling.cpp:206:24: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t i = 2; i < input.dim(); ++i) {
 ~ ^ ~~~~~~~~~~~
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/CPUType.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/LegacyTHFunctionsCPU.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/MkldnnCPUType.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/QuantizedCPUType.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/SparseCPUType.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/TypeDefault.cpp.o
[ 61%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THGeneral.cpp.o
2 warnings generated.
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THAllocator.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THSize.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THStorageFunctions.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensor.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorRandom.cpp.o
/root/pytorch/aten/src/TH/THAllocator.cpp:253:16: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and '__off_t' (aka 'long') [-Wsign-compare]
if (size > file_stat.st_size) {
 ~~~~ ^ ~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/TH/THAllocator.cpp:258:64: warning: comparison of integers of different signs: '__off_t' (aka 'long') and 'size_t' (aka 'unsigned long') [-Wsign-compare]
if (fstat(fd, &file_stat) == -1 || file_stat.st_size < size) {
 ~~~~~~~~~~~~~~~~~ ^ ~~~~
/root/pytorch/aten/src/TH/THAllocator.cpp:433:9: warning: unused variable 'data' [-Wunused-variable]
char *data = ((char*)base_ptr_) + TH_ALLOC_ALIGNMENT;
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorFill.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorMath.cpp.o
3 warnings generated.
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorMoreMath.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorEvenMoreMath.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorConv.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THTensorLapack.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THBlas.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THLapack.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THLogAdd.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THFile.cpp.o
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:10:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:10:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:10:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:11:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:10:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:11:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:12:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:13:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:14:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:7:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:10:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:13:
In file included from TH/generic/THTensor.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensor.cpp:771:35: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = dimension + 1; i < size.size(); ++i) {
 ~ ^ ~~~~~~~~~~~
In file included from /root/pytorch/aten/src/TH/THTensor.cpp:4:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:10:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:10:
In file included from TH/generic/THTensor.cpp:1:
In file included from /root/pytorch/aten/src/TH/generic/THTensor.cpp:5:
/root/pytorch/aten/src/ATen/InferSize.h:12:29: warning: unused function 'infer_size' [-Wunused-function]
static std::vector<int64_t> infer_size(IntArrayRef shape, int64_t numel) {
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THDiskFile.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THMemoryFile.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/THVector.cpp.o
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:10:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:10:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:10:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:11:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:10:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:11:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/vector/AVX.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/TH/vector/AVX2.cpp.o
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:12:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/THNN/init.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/quantized/QTensorImpl.cpp.o
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:13:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/quantized/Quantizer.cpp.o
In file included from /root/pytorch/aten/src/TH/THTensorMath.cpp:7:
In file included from /root/pytorch/aten/src/TH/THGenerateAllTypes.h:11:
In file included from /root/pytorch/aten/src/TH/THGenerateIntTypes.h:14:
In file included from TH/generic/THTensorMath.cpp:1:
/root/pytorch/aten/src/TH/generic/THTensorMath.cpp:703:11: warning: unused variable 'i' [-Wunused-variable]
int64_t i;
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/layer_norm_kernel.cpp.AVX2.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/TensorCompareKernel.cpp.AVX2.cpp.o
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp.o
11 warnings generated.
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX2.cpp.o
In file included from /root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:11:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:58:21: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < zero_points.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:210:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:212:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:243:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:245:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
In file included from /root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:11:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:126:11: note: in instantiation of function template specialization 'fbgemm::Dequantize<signed char>' requested here
fbgemm::Dequantize<typename T::underlying>(/*src=*/qd,
 ^
In file included from /root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:11:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:126:11: note: in instantiation of function template specialization 'fbgemm::Dequantize<unsigned char>' requested here
fbgemm::Dequantize<typename T::underlying>(/*src=*/qd,
 ^
In file included from /root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:11:
/root/pytorch/third_party/fbgemm/include/fbgemm/QuantUtils.h:127:29: warning: comparison of integers of different signs: 'std::size_t' (aka 'unsigned long') and 'int' [-Wsign-compare]
for (std::size_t i = 0; i < len; i++) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:126:11: note: in instantiation of function template specialization 'fbgemm::Dequantize<int>' requested here
fbgemm::Dequantize<typename T::underlying>(/*src=*/qd,
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:210:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:369:15: note: in instantiation of function template specialization 'at::quantize_tensor_per_channel_affine<c10::qint8>' requested here
qtensor = quantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:212:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:58:21: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < zero_points.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:204:3: note: in instantiation of function template specialization 'at::checkZeroPoints<signed char>' requested here
checkZeroPoints<typename T::underlying>(fn_name, zero_points);
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:369:15: note: in instantiation of function template specialization 'at::quantize_tensor_per_channel_affine<c10::qint8>' requested here
qtensor = quantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:210:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:369:15: note: in instantiation of function template specialization 'at::quantize_tensor_per_channel_affine<c10::quint8>' requested here
qtensor = quantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:212:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:58:21: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < zero_points.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:204:3: note: in instantiation of function template specialization 'at::checkZeroPoints<unsigned char>' requested here
checkZeroPoints<typename T::underlying>(fn_name, zero_points);
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:369:15: note: in instantiation of function template specialization 'at::quantize_tensor_per_channel_affine<c10::quint8>' requested here
qtensor = quantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:210:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:369:15: note: in instantiation of function template specialization 'at::quantize_tensor_per_channel_affine<c10::qint32>' requested here
qtensor = quantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:212:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:58:21: warning: comparison of integers of different signs: 'int' and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
for (int i = 0; i < zero_points.size(); ++i) {
 ~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:204:3: note: in instantiation of function template specialization 'at::checkZeroPoints<int>' requested here
checkZeroPoints<typename T::underlying>(fn_name, zero_points);
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:369:15: note: in instantiation of function template specialization 'at::quantize_tensor_per_channel_affine<c10::qint32>' requested here
qtensor = quantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:243:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:387:15: note: in instantiation of function template specialization 'at::dequantize_tensor_per_channel_affine<c10::qint8>' requested here
rtensor = dequantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:245:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:243:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:387:15: note: in instantiation of function template specialization 'at::dequantize_tensor_per_channel_affine<c10::quint8>' requested here
rtensor = dequantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:245:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:243:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == scales.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:387:15: note: in instantiation of function template specialization 'at::dequantize_tensor_per_channel_affine<c10::qint32>' requested here
rtensor = dequantize_tensor_per_channel_affine<scalar_t>(
 ^
/root/pytorch/aten/src/ATen/quantized/Quantizer.cpp:245:23: warning: comparison of integers of different signs: 'int64_t' (aka 'long') and 'std::vector::size_type' (aka 'unsigned long') [-Wsign-compare]
TORCH_CHECK(channel == zero_points.size(),
 ~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:244:31: note: expanded from macro 'TORCH_CHECK'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
In file included from /root/pytorch/aten/src/THNN/init.cpp:125:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:10:
In file included from THNN/generic/FeatureLPPooling.c:1:
/root/pytorch/aten/src/THNN/generic/FeatureLPPooling.c:225:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (batch = start; batch < end; ++batch) {
 ~~~~~ ^ ~~~
/root/pytorch/aten/src/THNN/generic/FeatureLPPooling.c:321:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (batch = start; batch < end; ++batch) {
 ~~~~~ ^ ~~~
In file included from /root/pytorch/aten/src/THNN/init.cpp:125:
In file included from /root/pytorch/aten/src/TH/THGenerateFloatTypes.h:11:
In file included from THNN/generic/FeatureLPPooling.c:1:
/root/pytorch/aten/src/THNN/generic/FeatureLPPooling.c:225:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (batch = start; batch < end; ++batch) {
 ~~~~~ ^ ~~~
/root/pytorch/aten/src/THNN/generic/FeatureLPPooling.c:321:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (batch = start; batch < end; ++batch) {
 ~~~~~ ^ ~~~
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX2.cpp.o
In file included from /root/pytorch/aten/src/THNN/init.cpp:140:
In file included from THNN/generic/VolumetricConvolutionMM.c:1:
/root/pytorch/aten/src/THNN/generic/VolumetricConvolutionMM.c:134:13: warning: unused function 'THNN_Longunfolded_acc_vol' [-Wunused-function]
static void THNN_(unfolded_acc_vol)(
 ^
/root/pytorch/aten/src/THNN/THNN.h:7:21: note: expanded from macro 'THNN_'
#define THNN_(NAME) TH_CONCAT_3(THNN_, Real, NAME)
 ^
/root/pytorch/build/caffe2/aten/src/TH/THGeneral.h:151:28: note: expanded from macro 'TH_CONCAT_3'
#define TH_CONCAT_3(x,y,z) TH_CONCAT_3_EXPAND(x,y,z)
 ^
/root/pytorch/build/caffe2/aten/src/TH/THGeneral.h:152:35: note: expanded from macro 'TH_CONCAT_3_EXPAND'
#define TH_CONCAT_3_EXPAND(x,y,z) x ## y ## z
 ^
<scratch space>:73:1: note: expanded from here
THNN_Longunfolded_acc_vol
^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:49:46: warning: taking the absolute value of unsigned type 'scalar_t' (aka 'unsigned char') has no effect [-Wabsolute-value]
[=](scalar_t a) -> scalar_t { return std::abs(a); },
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:49:46: note: remove the call to 'std::abs' since unsigned values cannot be negative
[=](scalar_t a) -> scalar_t { return std::abs(a); },
 ^~~~~~~~
/root/pytorch/aten/src/ATen/Dispatch.h:229:59: note: expanded from macro 'AT_DISPATCH_ALL_TYPES'
AT_PRIVATE_CASE_TYPE(at::ScalarType::Byte, uint8_t, __VA_ARGS__) \
 ^
/root/pytorch/aten/src/ATen/Dispatch.h:12:12: note: expanded from macro 'AT_PRIVATE_CASE_TYPE'
return __VA_ARGS__(); \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:78:3: warning: unused variable 'out_ptr' [-Wunused-variable]
VEC_LOOP_HEADER(func_t, data)
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:16:9: note: expanded from macro 'VEC_LOOP_HEADER'
char* out_ptr = data[0];
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:92:3: warning: unused variable 'out_ptr' [-Wunused-variable]
VEC_LOOP_HEADER(func_t, data)
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:16:9: note: expanded from macro 'VEC_LOOP_HEADER'
char* out_ptr = data[0];
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
24 warnings generated.
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp.o
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:43:13: warning: unused function '_reduction_with_indices_allocate_or_resize_output' [-Wunused-function]
static void _reduction_with_indices_allocate_or_resize_output(
 ^
/root/pytorch/aten/src/ATen/native/SortingUtils.h:83:13: warning: unused function '_allocate_or_resize_output_with_indices' [-Wunused-function]
static void _allocate_or_resize_output_with_indices(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:32:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:32:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:31:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
[ 62%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX2.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:75:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:76:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:75:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:84:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:85:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:84:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:177:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:178:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:177:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 0, double, double>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 1, double, double>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 0, double, double>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 0, float, float>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 1, float, float>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 0, float, float>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, c10::Half, c10::Half>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, 0, c10::Half, c10::Half>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, c10::Half, c10::Half>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, 1, c10::Half, c10::Half>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, 0, c10::Half, c10::Half>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, c10::Half, c10::Half>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
18 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX2.cpp.o
18 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX2.cpp.o
/root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp:23:39: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
AT_ASSERT(original_strides.size() == num_indexers);
 ~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp:24:37: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
AT_ASSERT(original_sizes.size() == num_indexers);
 ~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp:8:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp:8:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp:8:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX2.cpp:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:19:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:20:9)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:17:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:19:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:20:9)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:26:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:27:9)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:24:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:26:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:27:9)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:3:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:3:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:3:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX2.cpp:3:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
20 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX2.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX2.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX2.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX2.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
28 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:42:7), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:42:7)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:44:13: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:42:7), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:42:7)>' requested here
cpu_kernel_vec(
 ^
[the preceding Loops.h:138 -Wsign-compare warning for CopyKernel.cpp.AVX2.cpp repeats 7 more times; identical text omitted]
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX2.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX2.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
18 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/layer_norm_kernel.cpp.AVX.cpp.o
39 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:21:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:24:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:21:5)>' requested here
cpu_kernel_vec(iter,
 ^
[the preceding Loops.h:138 -Wsign-compare warning for BinaryOpsKernel.cpp.AVX2.cpp (lambda at 21:5) repeats 6 more times; identical text omitted]
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
[the Loops.h:138 -Wsign-compare warning for BinaryOpsKernel.cpp.AVX2.cpp (lambda at 41:5) repeats 2 more times; identical text omitted]
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:62:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:62:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:61:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:5:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:181:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:181:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:179:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:195:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:195:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:193:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:215:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, float *dst, int64_t n) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_int.h:235:6: warning: unused function 'convert' [-Wunused-function]
void convert(const int32_t *src, double *dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX2.cpp:9:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:74:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:75:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, double, d)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:76:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int64_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:77:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int32_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:78:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
DEFINE_FLOAT_INT_CAST(int16_t, float, s)
^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:66:15: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<int_t> cast<int_t, float_t>(const Vec256<float_t>& src) { \
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:79:1: warning: unused function 'cast' [-Wunused-function]
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:70:17: note: expanded from macro 'DEFINE_FLOAT_INT_CAST'
Vec256<float_t> cast<float_t, int_t>(const Vec256<int_t>& src) { \
 ^
23 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/TensorCompareKernel.cpp.AVX.cpp.o
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:49:46: warning: taking the absolute value of unsigned type 'scalar_t' (aka 'unsigned char') has no effect [-Wabsolute-value]
[=](scalar_t a) -> scalar_t { return std::abs(a); },
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:49:46: note: remove the call to 'std::abs' since unsigned values cannot be negative
[=](scalar_t a) -> scalar_t { return std::abs(a); },
 ^~~~~~~~
/root/pytorch/aten/src/ATen/Dispatch.h:229:59: note: expanded from macro 'AT_DISPATCH_ALL_TYPES'
AT_PRIVATE_CASE_TYPE(at::ScalarType::Byte, uint8_t, __VA_ARGS__) \
 ^
/root/pytorch/aten/src/ATen/Dispatch.h:12:12: note: expanded from macro 'AT_PRIVATE_CASE_TYPE'
return __VA_ARGS__(); \
 ^
18 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:32:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:32:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:31:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:76:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:76:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:75:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:85:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:85:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:84:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:178:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:178:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:177:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.AVX.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
29 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX.cpp.o
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp.o
34 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX.cpp.o
14 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX.cpp.o
30 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:43:13: warning: unused function '_reduction_with_indices_allocate_or_resize_output' [-Wunused-function]
static void _reduction_with_indices_allocate_or_resize_output(
 ^
/root/pytorch/aten/src/ATen/native/SortingUtils.h:83:13: warning: unused function '_allocate_or_resize_output_with_indices' [-Wunused-function]
static void _allocate_or_resize_output_with_indices(
 ^
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp.o
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:78:3: warning: unused variable 'out_ptr' [-Wunused-variable]
VEC_LOOP_HEADER(func_t, data)
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:16:9: note: expanded from macro 'VEC_LOOP_HEADER'
char* out_ptr = data[0];
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:92:3: warning: unused variable 'out_ptr' [-Wunused-variable]
VEC_LOOP_HEADER(func_t, data)
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:16:9: note: expanded from macro 'VEC_LOOP_HEADER'
char* out_ptr = data[0];
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/LerpKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
26 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX.cpp.o
/root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX.cpp:23:39: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
AT_ASSERT(original_strides.size() == num_indexers);
 ~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX.cpp:24:37: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
AT_ASSERT(original_sizes.size() == num_indexers);
 ~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 0, double, double>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 1, double, double>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 0, double, double>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 0, float, float>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 1, float, float>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 0, float, float>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, c10::Half, c10::Half>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, 0, c10::Half, c10::Half>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, c10::Half, c10::Half>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, 1, c10::Half, c10::Half>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, 0, c10::Half, c10::Half>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >::*)(at::native::WelfordData<double, long, double>, c10::Half) const>, c10::Half, c10::Half>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<c10::Half, double, long, double, std::tuple<c10::Half, c10::Half> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
4 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX.cpp:8:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX.cpp:8:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/IndexKernel.cpp.AVX.cpp:8:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
4 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp.o
27 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.AVX.cpp:7:
In file included from /root/pytorch/aten/src/ATen/native/cpu/GridSamplerKernel.h:7:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
6 warnings generated.
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:19:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:20:9)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:17:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:19:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:20:9)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:26:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:27:9)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:24:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:26:9), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:27:9)>' requested here
cpu_kernel_vec(
 ^
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/layer_norm_kernel.cpp.DEFAULT.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:29:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:31:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:29:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:29:5)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:3:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:3:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/FillKernel.cpp.AVX.cpp:3:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:6:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:42:7), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:42:7)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:44:13: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:42:7), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:42:7)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
14 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CrossKernel.cpp.AVX.cpp:10:
In file included from /root/pytorch/aten/src/ATen/cpu/vml.h:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/functional.h:2:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/CopyKernel.cpp.AVX.cpp:6:
In file included from /root/pytorch/aten/src/ATen/native/cpu/Loops.h:34:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:24:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
 ^
4 warnings generated.
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:24:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/TensorCompareKernel.cpp.DEFAULT.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:24:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:21:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:30:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:26:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:181:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
      strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
                                           ~~~ ^  ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3)>' requested here
    return vectorized_loop(data, n, 0, op, vop);
             ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:181:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:179:3)>' requested here
    cpu_kernel_vec(iter,
    ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
      strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
                                           ~~~ ^  ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
    return vectorized_loop(data, n, 0, op, vop);
             ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:42:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:41:5)>' requested here
      cpu_kernel_vec(iter,
      ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:195:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:11:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:195:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:193:3)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:9:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/Activation.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:62:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:9:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:62:7: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:61:5)>' requested here
cpu_kernel_vec(iter,
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:6:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_float.h:270:6: warning: unused function 'convert' [-Wunused-function]
void convert(const float* src, float* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:5:
In file included from /root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:7:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256_double.h:262:6: warning: unused function 'convert' [-Wunused-function]
void convert(const double* src, double* dst, int64_t n) {
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX.cpp:5:
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:51:15: warning: unused function 'cast' [-Wunused-function]
Vec256<float> cast<float, double>(const Vec256<double>& src) {
 ^
/root/pytorch/aten/src/ATen/cpu/vec256/vec256.h:56:16: warning: unused function 'cast' [-Wunused-function]
Vec256<double> cast<double, float>(const Vec256<float>& src) {
 ^
15 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp.o
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:49:46: warning: taking the absolute value of unsigned type 'scalar_t' (aka 'unsigned char') has no effect [-Wabsolute-value]
[=](scalar_t a) -> scalar_t { return std::abs(a); },
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:49:46: note: remove the call to 'std::abs' since unsigned values cannot be negative
[=](scalar_t a) -> scalar_t { return std::abs(a); },
 ^~~~~~~~
/root/pytorch/aten/src/ATen/Dispatch.h:229:59: note: expanded from macro 'AT_DISPATCH_ALL_TYPES'
AT_PRIVATE_CASE_TYPE(at::ScalarType::Byte, uint8_t, __VA_ARGS__) \
 ^
/root/pytorch/aten/src/ATen/Dispatch.h:12:12: note: expanded from macro 'AT_PRIVATE_CASE_TYPE'
return __VA_ARGS__(); \
 ^
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/SoftMaxKernel.cpp.DEFAULT.cpp.o
5 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp.o
20 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/LerpKernel.cpp.DEFAULT.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:32:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:32:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:31:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:46:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:47:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:46:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:46:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:75:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:76:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:75:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:75:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:84:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:85:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:84:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:84:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:93:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:94:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:93:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:93:3)>' requested here
cpu_kernel_vec(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:18:
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:138:36: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
strides[arg] = (S > 0 && arg == S) ? 0 : sizeof(scalar_t);
 ~~~ ^ ~
/root/pytorch/aten/src/ATen/native/cpu/Loops.h:189:14: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::vectorized_loop<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:177:3)>' requested here
return vectorized_loop(data, n, 0, op, vop);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:178:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::cpu_kernel_vec<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:177:3), (lambda at /root/pytorch/build/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp.DEFAULT.cpp:177:3)>' requested here
cpu_kernel_vec(
 ^
23 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/IndexKernel.cpp.DEFAULT.cpp.o
4 warnings generated.
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/GridSamplerKernel.cpp.DEFAULT.cpp.o
[ 63%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/FillKernel.cpp.DEFAULT.cpp.o
4 warnings generated.
[ 64%] Building CXX object caffe2/CMakeFiles/torch.dir/__/aten/src/ATen/native/cpu/DistanceOpsKernel.cpp.DEFAULT.cpp.o
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:78:3: warning: unused variable 'out_ptr' [-Wunused-variable]
VEC_LOOP_HEADER(func_t, data)
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:16:9: note: expanded from macro 'VEC_LOOP_HEADER'
char* out_ptr = data[0];
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:92:3: warning: unused variable 'out_ptr' [-Wunused-variable]
VEC_LOOP_HEADER(func_t, data)
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:16:9: note: expanded from macro 'VEC_LOOP_HEADER'
char* out_ptr = data[0];
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp:20:3)>' requested here
dim_apply(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp:6:
/root/pytorch/aten/src/ATen/native/SortingUtils.h:27:17: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
if (d != dim) {
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:30:31: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
nt = nt.select((d > dim ? 1 : 0), i % sizes[d]);
 ~ ^ ~~~
/root/pytorch/aten/src/ATen/native/SortingUtils.h:26:30: warning: comparison of integers of different signs: 'size_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Wsign-compare]
for (size_t d = 0; d < ndim; d++) {
 ~ ^ ~~~~
/root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp:21:5: note: in instantiation of function template specialization 'at::native::dim_apply<(lambda at /root/pytorch/build/aten/src/ATen/native/cpu/SortingKernel.cpp.DEFAULT.cpp:20:3)>' requested here
dim_apply(
 ^
/root/pytorch/aten/src/ATen/native/SortingUtils.h:43:13: warning: unused function '_reduction_with_indices_allocate_or_resize_output' [-Wunused-function]
static void _reduction_with_indices_allocate_or_resize_output(
 ^
/root/pytorch/aten/src/ATen/native/SortingUtils.h:83:13: warning: unused function '_allocate_or_resize_output_with_indices' [-Wunused-function]
static void _allocate_or_resize_output_with_indices(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 0, double, double>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 1, double, double>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, 0, double, double>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >::*)(at::native::WelfordData<double, long, double>, double) const>, double, double>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<double, double, long, double, std::tuple<double, double> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:145:25: warning: comparison of integers of different signs: 'const int' and 'std::size_t' (aka 'unsigned long') [-Wsign-compare]
AT_ASSERT(num_outputs == result_size);
 ~~~~~~~~~~~ ^ ~~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:349:54: note: expanded from macro 'AT_ASSERT'
C10_EXPAND_MSVC_WORKAROUND(TORCH_INTERNAL_ASSERT(__VA_ARGS__)); \
 ^~~~~~~~~~~
/root/pytorch/c10/util/Exception.h:202:31: note: expanded from macro 'TORCH_INTERNAL_ASSERT'
if (C10_UNLIKELY_OR_CONST(!(cond))) { \
 ^~~~
/root/pytorch/c10/util/Exception.h:165:47: note: expanded from macro 'C10_UNLIKELY_OR_CONST'
#define C10_UNLIKELY_OR_CONST(e) C10_UNLIKELY(e)
 ^
/root/pytorch/c10/macros/Macros.h:140:65: note: expanded from macro 'C10_UNLIKELY'
#define C10_UNLIKELY(expr) (__builtin_expect(static_cast<bool>(expr), 0))
 ^~~~
/root/pytorch/c10/util/Exception.h:146:39: note: expanded from macro 'C10_EXPAND_MSVC_WORKAROUND'
#define C10_EXPAND_MSVC_WORKAROUND(x) x
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 0, float, float>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::set_results<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, float, float>' requested here
set_results<r_traits>(ops.project(total_acc), sub_iter, num_outputs);
 ^
/root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:39:5: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::binary_kernel_reduce<at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >, at::native::WelfordData<double, long, double> >' requested here
binary_kernel_reduce(
 ^
In file included from /root/pytorch/build/aten/src/ATen/native/cpu/ReduceOpsKernel.cpp.DEFAULT.cpp:10:
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:134:9: warning: comparison of integers of different signs: 'unsigned long' and 'const int' [-Wsign-compare]
if (i < num_outputs) {
 ~ ^ ~~~~~~~~~~~
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:136:12: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 1, float, float>' requested here
return for_each_in_tuple<traits, i + 1, tuple_t...>(t, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:144:29: note: in instantiation of function template specialization 'at::native::(anonymous namespace)::for_each_in_tuple<binary_function_traits<at::native::WelfordData<double, long, double> (at::native::WelfordOps<float, double, long, double, std::tuple<float, float> >::*)(at::native::WelfordData<double, long, double>, float) const>, 0, float, float>' requested here
std::size_t result_size = for_each_in_tuple<traits>(result, iter, num_outputs);
 ^
/root/pytorch/aten/src/ATen/native/cpu/Reduce.h:240:5: note: in instanti