PyTorch can be installed via different channels: conda, pip, docker, source code...
By default, mkl and mkl-dnn are enabled; but this might not always be true, so it is still useful to learn how to check this by yourself:
### check where your torch is installed
```bash
python -c 'import torch; print(torch.__path__)'
```
On my machine, it points to the conda env pytorch-cuda, which I created specifically for CUDA runs...
```
['/home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch']
```
Next,
```bash
cd /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch
cd lib
ldd libtorch.so
```
This will list all the .so libraries that PyTorch was compiled against...
```
linux-vdso.so.1 => (0x00007ffe5ef06000)
libgomp.so.1 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libgomp.so.1 (0x00007f0216544000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f0216312000)
libnvToolsExt.so.1 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libnvToolsExt.so.1 (0x00007f0216108000)
libcudart.so.10.0 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libcudart.so.10.0 (0x00007f0215e8b000)
libcaffe2.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./libcaffe2.so (0x00007f0212c54000)
libcaffe2_gpu.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./libcaffe2_gpu.so (0x00007f01e71c7000)
libc10_cuda.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./libc10_cuda.so (0x00007f01e6fa2000)
libc10.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./libc10.so (0x00007f01e6d5e000)
libm.so.6 => /lib64/libm.so.6 (0x00007f01e6a5c000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f01e6858000)
libstdc++.so.6 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libstdc++.so.6 (0x00007f01e6716000)
libgcc_s.so.1 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libgcc_s.so.1 (0x00007f01e6702000)
libc.so.6 => /lib64/libc.so.6 (0x00007f01e633f000)
/lib64/ld-linux-x86-64.so.2 (0x000056504e07a000)
librt.so.1 => /lib64/librt.so.1 (0x00007f01e6136000)
libmkl_intel_lp64.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libmkl_intel_lp64.so (0x00007f01e55e8000)
libmkl_gnu_thread.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libmkl_gnu_thread.so (0x00007f01e3d93000)
libmkl_core.so => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libmkl_core.so (0x00007f01dfc07000)
libcusparse.so.10.0 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libcusparse.so.10.0 (0x00007f01dc198000)
libcurand.so.10.0 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libcurand.so.10.0 (0x00007f01d8030000)
libcufft.so.10.0 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libcufft.so.10.0 (0x00007f01d1b79000)
libcublas.so.10.0 => /home/mingfeim/anaconda3/envs/pytorch-cuda/lib/python3.7/site-packages/torch/lib/./../../../../libcublas.so.10.0 (0x00007f01cd5e0000)
```
In case you see libmkl_intel_lp64.so, libmkl_gnu_thread.so, and libmkl_core.so in the list, your PyTorch has mkl; otherwise it does not.
This is also the way to check which mkl is actually being used in case you have multiple versions installed on your machine, which is particularly useful for Intel employees...
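If you prefer to stay inside Python, newer PyTorch releases also expose the build information directly. A minimal cross-check, assuming your build provides `torch.backends.mkl` and `torch.__config__` (both present in recent releases):

```python
import torch

# True if this build was compiled against Intel MKL
print(torch.backends.mkl.is_available())

# True if this build was compiled against MKL-DNN
print(torch.backends.mkldnn.is_available())

# full build configuration, including the BLAS backend and compile flags
print(torch.__config__.show())
```

Note that these flags only report what the build claims; ldd additionally shows which library file actually resolves at runtime, which is what matters when multiple mkl installations coexist.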
### check whether mkl-dnn is enabled
```bash
python -c 'import torch; a = torch.randn(10); print(a.to_mkldnn().layout)'
```
On my machine, this prints the tensor's layout, which is _mkldnn, indicating that PyTorch is compiled against mkl-dnn:
```
torch._mkldnn
```
In case you have no mkl-dnn enabled, you will instead receive a RuntimeError from to_mkldnn()...
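This behavior makes it easy to wrap the check in a small helper; a minimal sketch (the `has_mkldnn` name is mine, not a PyTorch API):

```python
import torch

def has_mkldnn() -> bool:
    # to_mkldnn() raises RuntimeError when the build lacks mkl-dnn support
    try:
        torch.randn(10).to_mkldnn()
        return True
    except RuntimeError:
        return False

print(has_mkldnn())
```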
PyTorch is now shipped with gomp (GNU OpenMP) by default... In case you want to use iomp (Intel OpenMP) instead, follow use-intel-openmp-library.
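To confirm which OpenMP runtime actually got loaded into your process, one option is to scan /proc/self/maps after importing torch; a minimal sketch, Linux-only:

```python
import torch  # importing torch pulls in its OpenMP runtime

# list the OpenMP runtimes mapped into this process (Linux only)
with open('/proc/self/maps') as f:
    maps = f.read()

for runtime in ('libgomp', 'libiomp5', 'libomp'):
    if runtime in maps:
        print(runtime, 'is loaded')
```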
Hello mingfeima,
Your material was a good starting point for me for using optimized PyTorch on Intel CPUs. I found an interesting thing on this topic:
You wrote about "How to check whether mkl is enabled?". According to https://software.intel.com/en-us/articles/getting-started-with-intel-optimization-of-pytorch, I installed PyTorch v1.4.0 with CUDA 10.1 support, and printed the configuration by executing
```bash
$ python -c "import torch; print(torch.__config__.show())"
```
Yep, as expected, it is built with Intel MKL and MKL-DNN.
And then I used your instructions to check the .so files for libtorch.so, and found that there is no libmkl* at all! The second check uses to_mkldnn() to verify:
```bash
$ python -c 'import torch; a = torch.randn(10); print(a.to_mkldnn().layout)'
torch._mkldnn
```
Sure, it is not a major issue, since in the normal case a customer will not use CUDA together with MKL-DNN.
However, isn't it interesting?
Anyway, thanks for your material. It is a very practical guide for me.
B.Rds,
Augustus