pytorch_jetson_install.sh (gist dusty-nv/ef2b372301c00c0a9d3203e42fd83426)
#!/bin/bash
#
# EDIT: this script is outdated, please see https://forums.developer.nvidia.com/t/pytorch-for-jetson-nano-version-1-6-0-now-available
#
sudo apt-get install python-pip
# upgrade pip
pip install -U pip
pip --version
# pip 9.0.1 from /home/ubuntu/.local/lib/python2.7/site-packages (python 2.7)
# clone pyTorch repo
git clone http://github.com/pytorch/pytorch
cd pytorch
git submodule update --init
# install prereqs
sudo pip install -U setuptools
sudo pip install -r requirements.txt
# Develop Mode:
python setup.py build_deps
sudo python setup.py develop
# Install Mode: (substitute for Develop Mode commands)
#sudo python setup.py install
# Verify CUDA (from python interactive terminal)
# import torch
# print(torch.__version__)
# print(torch.cuda.is_available())
# a = torch.cuda.FloatTensor(2)
# print(a)
# b = torch.randn(2).cuda()
# print(b)
# c = a + b
# print(c)
@YogeshShitole,
Running the two commands below fixed the cmake issue for me:
sudo add-apt-repository ppa:george-edison55/cmake-3.x
sudo apt-get update
Has anyone solved the following issue:
/home/ubuntu/pytorch/aten/src/ATen/cudnn/cudnn-wrapper.h:10:2: error: #error "CuDNN version not supported"
#error "CuDNN version not supported"
^
CMake Error at ATen_generated_NativeFunctionsCuda.cu.o.cmake:207 (message):
Error generating
/home/ubuntu/pytorch/torch/lib/build/aten/src/ATen/CMakeFiles/ATen.dir/native/cuda/./ATen_generated_NativeFunctionsCuda.cu.o
src/ATen/CMakeFiles/ATen.dir/build.make:71019: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cuda/ATen_generated_NativeFunctionsCuda.cu.o] Error 1
CMakeFiles/Makefile2:226: recipe for target 'src/ATen/CMakeFiles/ATen.dir/all' failed
make[1]: *** [src/ATen/CMakeFiles/ATen.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2
Anyone else getting this error at around the end of build_deps? /usr/local/cuda/lib64/libcudnn.so: error adding symbols: File in wrong format
Hello, I have tried to install on a Jetson TX1, but it stops during the build with a segmentation error. I have tracked the error down: the build exceeds the available RAM. Any suggestions?
@Hunterhal Hmm, for me the build succeeds on the TX2 but fails on the TX1 as well. It seems to be a memory error, as gcc exits with error code 4. There is a tutorial on jetsonhacks.com on how to set up a swap file; I hope that helps. I'll try it myself next week and post updates here.
@Hunterhal @derAtomkeks, it is because the TX1 has 4GB of memory (vs. 8GB on the TX2), so swap is needed. Alternatively, you can build a whl on the TX2 and install it on a TX1 running the same JetPack.
When building on a TX2 with cuDNN 6, CUDA 8, and GCC 5.4, I got the following during the ATen build phase:
[ 47%] Building CXX object src/ATen/CMakeFiles/ATen.dir/__/TH/THVector.cpp.o
[ 48%] Building CXX object src/ATen/CMakeFiles/ATen.dir/__/THNN/init.cpp.o
[ 48%] Building CXX object src/ATen/CMakeFiles/ATen.dir/__/THS/THSTensor.cpp.o
[ 48%] Building CXX object src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o
c++: error: unrecognized command line option ‘-mavx2’
src/ATen/CMakeFiles/ATen.dir/build.make:81805: recipe for target 'src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o' failed
make[2]: *** [src/ATen/CMakeFiles/ATen.dir/native/cpu/ReduceOpsKernel.cpp.AVX2.cpp.o] Error 1
although the GCC 5.4 docs say it supports -mavx2 (it is an x86-only option, so the aarch64 compiler rejects it).
UPD: I removed the -mavx and -mavx2 options from the build and it succeeded.
@thatwist I am facing the exact same problem. Can you show me how you removed them? Thanks!
EDIT: I found it. For those wondering: open pytorch/aten/src/ATen/CMakeLists.txt and change the line
LIST(APPEND CPU_CAPABILITY_FLAGS "-O3" "-O3 -mavx" "-O3 -mavx2")
to
LIST(APPEND CPU_CAPABILITY_FLAGS "-O3" "-O3" "-O3")
@Kowasaki I just used grep and sed to remove all -mavx2 and -mavx strings, something like:
grep -rl "\-mavx2" * | xargs sed -i "s/-mavx2//g"
and then
grep -rl "\-mavx" * | xargs sed -i "s/-mavx//g"
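For illustration, here is the same grep/sed approach exercised on a throwaway copy of the offending CMakeLists.txt line (a sketch using a temp directory; in practice you would run it from the pytorch source root):

```shell
# Demonstrate the flag-stripping on a scratch directory instead of the real tree.
tmp=$(mktemp -d)
printf 'LIST(APPEND CPU_CAPABILITY_FLAGS "-O3" "-O3 -mavx" "-O3 -mavx2")\n' > "$tmp/CMakeLists.txt"
# Strip -mavx2 first, then -mavx, so the shorter pattern does not leave "2" behind.
grep -rl "\-mavx2" "$tmp" | xargs sed -i "s/-mavx2//g"
grep -rl "\-mavx" "$tmp" | xargs sed -i "s/-mavx//g"
cat "$tmp/CMakeLists.txt"
rm -rf "$tmp"
```

The order matters: running the -mavx substitution first would turn every -mavx2 into a stray 2.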
You guys may be interested in this script from the jetson-reinforcement repo, which is kept up to date:
https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh
It contains more than just PyTorch, but the PyTorch install works on a TX2 with JetPack 3.2.
Like @Hunterhal and @derAtomkeks, I ran into memory issues on the TX1 during sudo python setup.py develop, with:
aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)
I worked around this by pausing the parallel compiler processes with
for pid in $(pidof cc1plus); do echo $pid; sudo kill -sigstop $pid; done
Then I resumed two of them immediately with sudo kill -sigcont <printed-pid>, and the other two later, once the rest of the compilations were done.
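The stop/continue trick can be sketched on a harmless stand-in process (using sleep instead of cc1plus, so it runs without a build in progress; the /proc state field shows T while stopped):

```shell
# Pause and resume a process with SIGSTOP/SIGCONT, as described above for cc1plus.
sleep 30 &
pid=$!
kill -STOP "$pid"
sleep 1                              # give the kernel a moment to mark it stopped
awk '{print $3}' "/proc/$pid/stat"   # T = stopped
kill -CONT "$pid"
sleep 1
awk '{print $3}' "/proc/$pid/stat"   # S = back to sleeping
kill "$pid"
```

SIGSTOP cannot be caught or ignored, so this works on any compiler job regardless of how it handles signals.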
After compilation I got the message
WARNING: 'develop' is not building C++ code incrementally
because ninja is not installed. Run this to enable it:
pip install ninja
I tried that, but it failed with some other error. Maybe it would have made it possible to just re-trigger the compilation repeatedly?
I used python3 setup.py bdist_wheel and got the same cc1plus error. I solved it by allocating a 4GB swap file, which allowed the build to complete.
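For reference, the swap-file workaround looks roughly like this. The 4G size and the /swapfile path are the usual choices, not requirements; the runnable part below allocates a tiny temporary file instead, so it works without root:

```shell
# On the Jetson itself, as root, the real steps are:
#   sudo fallocate -l 4G /swapfile
#   sudo chmod 600 /swapfile
#   sudo mkswap /swapfile
#   sudo swapon /swapfile        # add an /etc/fstab entry to persist across reboots
# Demonstrated here on a 1 MiB temp file (the allocation step needs no root):
swapfile=$(mktemp)
fallocate -l 1M "$swapfile" 2>/dev/null || dd if=/dev/zero of="$swapfile" bs=1024 count=1024 status=none
chmod 600 "$swapfile"
stat -c %s "$swapfile"           # 1048576 bytes
rm -f "$swapfile"
```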
I am trying to install pytorch from source on an Odroid XU4 and hit the following error. The installation goes up to 97% and then breaks.
Can anyone tell me how to fix this?
[ 97%] Linking CXX executable ../../bin/test_jit
[ 97%] Linking CXX executable ../../bin/test_api
../../lib/libtorch.so.1: undefined reference to 'dlclose'
../../lib/libtorch.so.1: undefined reference to 'dlsym'
../../lib/libtorch.so.1: undefined reference to 'dlopen'
../../lib/libtorch.so.1: undefined reference to 'dlerror'
collect2: error: ld returned 1 exit status
caffe2/torch/CMakeFiles/test_jit.dir/build.make:97: recipe for target 'bin/test_jit' failed
make[2]: *** [bin/test_jit] Error 1
CMakeFiles/Makefile2:2493: recipe for target 'caffe2/torch/CMakeFiles/test_jit.dir/all' failed
make[1]: *** [caffe2/torch/CMakeFiles/test_jit.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
../../lib/libtorch.so.1: undefined reference to 'dlclose'
../../lib/libtorch.so.1: undefined reference to 'dlsym'
../../lib/libtorch.so.1: undefined reference to 'dlopen'
../../lib/libtorch.so.1: undefined reference to 'dlerror'
collect2: error: ld returned 1 exit status
caffe2/torch/CMakeFiles/test_api.dir/build.make:513: recipe for target 'bin/test_api' failed
make[2]: *** [bin/test_api] Error 1
CMakeFiles/Makefile2:2533: recipe for target 'caffe2/torch/CMakeFiles/test_api.dir/all' failed
make[1]: *** [caffe2/torch/CMakeFiles/test_api.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash tools/build_pytorch_libs.sh --use-nnpack caffe2 nanopb libshm gloo THD'
odroid@odroid:~/pytorch$
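Those undefined dlopen/dlsym/dlclose/dlerror symbols live in libdl, so the failing link step is missing -ldl. I am not sure why the Odroid toolchain drops it here, but a minimal check that -ldl resolves exactly those four symbols looks like this (the /tmp paths are throwaway):

```shell
cat > /tmp/dltest.c <<'EOF'
#include <dlfcn.h>
/* Touch the four symbols the linker complained about. */
int main(void) {
    void *h = dlopen(0, RTLD_NOW);   /* handle to the main program */
    if (!h) { dlerror(); return 1; }
    dlsym(h, "main");                /* return value unused; just link the symbol */
    dlclose(h);
    return 0;
}
EOF
cc /tmp/dltest.c -o /tmp/dltest -ldl && /tmp/dltest && echo "dl symbols resolved"
```

If that compiles and runs, the toolchain is fine and the fix is getting -ldl onto the test_jit/test_api link lines in the pytorch build.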
Worked like a charm on Jetson TX2 dev kit with Ubuntu 16.04.
Thanks for providing this script-- outstanding!
Thanks !
I am trying this on a TX2 and got the error below. Has anyone seen this?
running build_ext
-- NumPy not found
-- Detected cuDNN at /usr/lib/aarch64-linux-gnu/libcudnn.so.7, /usr/include/
-- Not using MIOpen
-- Detected CUDA at /usr/local/cuda
-- Not using MKLDNN
-- Building NCCL library
-- Building with THD distributed package
-- Building with c10d distributed package
Traceback (most recent call last):
  File "setup.py", line 1232, in <module>
    rel_site_packages + '/caffe2/**/*.py'
  File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "setup.py", line 523, in run
    setuptools.command.develop.develop.run(self)
  File "/usr/lib/python2.7/dist-packages/setuptools/command/develop.py", line 34, in run
    self.install_for_development()
  File "/usr/lib/python2.7/dist-packages/setuptools/command/develop.py", line 119, in install_for_development
    self.run_command('build_ext')
  File "/usr/lib/python2.7/distutils/cmd.py", line 326, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "setup.py", line 619, in run
    generate_code(ninja_global)
  File "/home/nvidia/pytorch/tools/setup_helpers/generate_code.py", line 84, in generate_code
    from tools.autograd.gen_autograd import gen_autograd
  File "/home/nvidia/pytorch/tools/autograd/gen_autograd.py", line 16, in <module>
    from .utils import YamlLoader, split_name_params
  File "/home/nvidia/pytorch/tools/autograd/utils.py", line 14, in <module>
    from tools.shared.module_loader import import_module
  File "/home/nvidia/pytorch/tools/shared/__init__.py", line 2, in <module>
    from .cwrap_common import set_declaration_defaults, \
ImportError: No module named cwrap_common
Hi there,
I am trying this on Jetson TX2 and everything completes, but then when I run the test commands I get this:
sudo python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.1.0a0+929258a
>>> print(torch.cuda.is_available())
True
>>> a = torch.cuda.FloatTensor(2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: CUDA error: unknown error
I am running with the newest edition of L4T just released last month:
L4T 32.1
Ubuntu 18.04
Cuda 10.0 (V10.0.166)
Python 3.6.7
I am wondering if this may be a versioning issue. I searched for that error, but as it lacks any real information ("unknown error"), the results were unhelpful. It did seem that some others (Linux-wide, not Tegra specifically) hit this after upgrading from CUDA 8 to CUDA 9 and had to recompile PyTorch against CUDA 9. I noticed others above mentioning CUDA 9, so I wonder whether being on CUDA 10 is the problem, even though PyTorch was compiled with the CUDA version on my Jetson.
Is it supposed to kill so many processes? I do not have swap on my Jetson TX2; should I add swap? Also, the install is failing with the error "Error: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-req-build-2edf12aa/". I have upgraded setuptools and installed ezinstall, but it is still giving this error.
Hi,
I am trying to install pytorch v1 on a TX2 with JetPack 3.3 and I get the following error after running $ python setup.py install:
Makefile:68: recipe for target '/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/devlink.o' failed
make[5]: *** [/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/devlink.o] Error 255
Makefile:44: recipe for target '/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/colldevice.a' failed
make[4]: *** [/home/nvidia/Desktop/pytorch/build/nccl/obj/collectives/device/colldevice.a] Error 2
Makefile:25: recipe for target 'src.build' failed
make[3]: *** [src.build] Error 2
CMakeFiles/nccl_external.dir/build.make:110: recipe for target 'nccl_external-prefix/src/nccl_external-stamp/nccl_external-build' failed
make[2]: *** [nccl_external-prefix/src/nccl_external-stamp/nccl_external-build] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/nccl_external.dir/all' failed
make[1]: *** [CMakeFiles/nccl_external.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
  File "setup.py", line 749, in <module>
    build_deps()
  File "setup.py", line 323, in build_deps
    cmake=cmake)
  File "/home/nvidia/Desktop/pytorch/tools/build_pytorch_libs.py", line 64, in build_caffe2
    cmake.build(my_env)
  File "/home/nvidia/Desktop/pytorch/tools/setup_helpers/cmake.py", line 345, in build
    self.run(build_args, my_env)
  File "/home/nvidia/Desktop/pytorch/tools/setup_helpers/cmake.py", line 107, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2
I also tried to run python setup.py build_deps, but I get error: invalid command 'build_deps'.
Do you know how I could solve this?
Thanks a lot!
Hi,
Sara here,
On a Jetson TX2, I followed the steps listed in the script, but when I try build_deps I get the error pasted below. Any input would be appreciated, thanks.
python setup.py build_deps
Building wheel torch-1.2.0a0+ec57d92
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'build_deps'
Hi,
I am trying to install pytorch v1 on TX2 with Jetpack 3.3 and I am getting the following error after running the command $ python setup.py install:
....
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '6']' returned non-zero exit status 2
I also tried to run python setup.py build_deps but I am getting error: invalid command 'build_deps'
Do you know how could I solve it?
Thanks a lot!
I found the following comment in setup.py and it solved my issue (I had the same one):
It is no longer necessary to use the 'build' or 'rebuild' targets.
To install: $ python setup.py install
To develop locally: $ python setup.py develop
To force cmake to re-generate native build files (off by default): $ python setup.py develop --cmake
Hey @dusty-nv, it seems that the latest release of NCCL, 2.6.4.1, recognizes ARM CPUs. I am currently attempting to install it on my Jetson TX2, because I have wanted this for some time. However, I must warn: some scripts on the master branch of the nccl git are committed with messages from previous releases, which is a yellow flag. If I do get it working, I intend to release all my configuration and installation scripts for the community. I'll let you know.
Have a good one.
For the ImportError: No module named cwrap_common, make sure the submodules are fully checked out:
git submodule update --init --recursive
Hi everyone, I am having a similar issue and need some assistance. I am trying to install pytorch on the Odroid XU4 board, but I get the following error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’. Below is the error message directly from the terminal.
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c: In function ‘xnn_qs8_gemm_minmax_fp32_ukernel_1x8c4__neondot’:
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:91:17: error: redefinition of ‘vproduct0x0123’
91 | float32x4_t vproduct0x0123 = vmulq_f32(vproduct0x0123, vscale);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:88:17: note: previous definition of ‘vproduct0x0123’ was here
88 | float32x4_t vproduct0x0123 = vcvtq_f32_s32(vacc0x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:92:17: error: redefinition of ‘vproduct0x4567’
92 | float32x4_t vproduct0x4567 = vmulq_f32(vproduct0x4567, vscale);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:89:17: note: previous definition of ‘vproduct0x4567’ was here
89 | float32x4_t vproduct0x4567 = vcvtq_f32_s32(vacc0x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:94:18: warning: implicit declaration of function ‘vcvtnq_s32_f32’; did you mean ‘vcvtq_s32_f32’? [-Wimplicit-function-declaration]
94 | vacc0x0123 = vcvtnq_s32_f32(vproduct0x0123);
| ^~~~~~~~~~~~~~
| vcvtq_s32_f32
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:94:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c:95:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
95 | vacc0x4567 = vcvtnq_s32_f32(vproduct0x4567);
| ^~~~~~~~~~~~~~
[ 60%] Building C object confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c.o
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15598: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x8c4-minmax-fp32-neondot.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c: In function ‘xnn_qs8_gemm_minmax_fp32_ukernel_1x16c4__neondot’:
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:112:18: warning: implicit declaration of function ‘vcvtnq_s32_f32’; did you mean ‘vcvtq_s32_f32’? [-Wimplicit-function-declaration]
112 | vacc0x0123 = vcvtnq_s32_f32(vproduct0x0123);
| ^~~~~~~~~~~~~~
| vcvtq_s32_f32
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:112:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:113:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
113 | vacc0x4567 = vcvtnq_s32_f32(vproduct0x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:114:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
114 | vacc0x89AB = vcvtnq_s32_f32(vproduct0x89AB);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c:115:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
115 | vacc0xCDEF = vcvtnq_s32_f32(vproduct0xCDEF);
| ^~~~~~~~~~~~~~
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15624: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x16c4-minmax-fp32-neondot.c.o] Error 1
/tmp/cc3aAMfl.s: Assembler messages:
/tmp/cc3aAMfl.s:73: Error: selected processor does not support `vsdot.s8 q9,q10,d7[0]' in ARM mode
/tmp/cc3aAMfl.s:76: Error: selected processor does not support `vsdot.s8 q9,q10,d7[1]' in ARM mode
/tmp/cc3aAMfl.s:78: Error: selected processor does not support `vsdot.s8 q8,q10,d7[0]' in ARM mode
/tmp/cc3aAMfl.s:80: Error: selected processor does not support `vsdot.s8 q8,q10,d7[1]' in ARM mode
/tmp/cc3aAMfl.s:152: Error: selected processor does not support `vsdot.s8 q9,q10,d7[0]' in ARM mode
/tmp/cc3aAMfl.s:154: Error: selected processor does not support `vsdot.s8 q8,q10,d7[0]' in ARM mode
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15611: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x8c4-minmax-gemmlowp-neondot.c.o] Error 1
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c: In function ‘xnn_qs8_gemm_minmax_fp32_ukernel_4x8c4__neondot’:
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:154:18: warning: implicit declaration of function ‘vcvtnq_s32_f32’; did you mean ‘vcvtq_s32_f32’? [-Wimplicit-function-declaration]
154 | vacc0x0123 = vcvtnq_s32_f32(vproduct0x0123);
| ^~~~~~~~~~~~~~
| vcvtq_s32_f32
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:154:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:155:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
155 | vacc0x4567 = vcvtnq_s32_f32(vproduct0x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:156:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
156 | vacc1x0123 = vcvtnq_s32_f32(vproduct1x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:157:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
157 | vacc1x4567 = vcvtnq_s32_f32(vproduct1x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:158:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
158 | vacc2x0123 = vcvtnq_s32_f32(vproduct2x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:159:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
159 | vacc2x4567 = vcvtnq_s32_f32(vproduct2x4567);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:160:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
160 | vacc3x0123 = vcvtnq_s32_f32(vproduct3x0123);
| ^~~~~~~~~~~~~~
/home/odroid/pytorch_install/pytorch/third_party/XNNPACK/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c:161:18: error: incompatible types when assigning to type ‘int32x4_t’ from type ‘int’
161 | vacc3x4567 = vcvtnq_s32_f32(vproduct3x4567);
| ^~~~~~~~~~~~~~
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15650: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/4x8c4-minmax-fp32-neondot.c.o] Error 1
/tmp/ccB1jg1m.s: Assembler messages:
/tmp/ccB1jg1m.s:80: Error: selected processor does not support `vsdot.s8 q12,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:83: Error: selected processor does not support `vsdot.s8 q12,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:86: Error: selected processor does not support `vsdot.s8 q11,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:89: Error: selected processor does not support `vsdot.s8 q11,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:92: Error: selected processor does not support `vsdot.s8 q10,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:94: Error: selected processor does not support `vsdot.s8 q10,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:96: Error: selected processor does not support `vsdot.s8 q8,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:98: Error: selected processor does not support `vsdot.s8 q8,q9,d7[1]' in ARM mode
/tmp/ccB1jg1m.s:196: Error: selected processor does not support `vsdot.s8 q12,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:198: Error: selected processor does not support `vsdot.s8 q10,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:200: Error: selected processor does not support `vsdot.s8 q11,q9,d7[0]' in ARM mode
/tmp/ccB1jg1m.s:202: Error: selected processor does not support `vsdot.s8 q8,q9,d7[0]' in ARM mode
make[2]: *** [confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/build.make:15637: confu-deps/XNNPACK/CMakeFiles/XNNPACK.dir/src/qs8-gemm/gen/1x16c4-minmax-gemmlowp-neondot.c.o] Error 1
make[1]: *** [CMakeFil
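The vsdot.s8 / neondot failures happen because those XNNPACK kernels assume the ARMv8.2 dot-product extension, which the XU4's Cortex-A15/A7 cores do not have. One workaround worth trying (an assumption on my part, not something I have verified on the XU4) is building with XNNPACK disabled via PyTorch's standard build switch:

```shell
# Disable the XNNPACK backend for this build.
export USE_XNNPACK=0
echo "USE_XNNPACK=$USE_XNNPACK"
# then rebuild from the pytorch source root:
#   python3 setup.py clean
#   python3 setup.py install
```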
@YogeshShitole,
The problem is line #29 of pytorch_jetson_install.sh (python setup.py build_deps), which is responsible for your error message.