@vfdev-5
Created November 24, 2020 10:10
PyTorch C++ dev with xeus-cling in Jupyter

How to set up an interactive C++ interpreter with the PyTorch C++ library in Jupyter using xeus-cling.

Requirements

  • Jupyter notebook installed
  • conda installed, e.g. in /opt/conda
  • PyTorch available, either
    • built from source
    • as libtorch
    • installed via conda (may hit the _GLIBCXX_USE_CXX11_ABI issue described below)

There can be an issue if PyTorch is built with _GLIBCXX_USE_CXX11_ABI=0, while xeus-cling assumes _GLIBCXX_USE_CXX11_ABI=1. Make sure that

python -c "import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)"
> True

Otherwise, the linker will fail to resolve symbols of methods that take or return std::string.
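The mismatch comes from GCC's dual ABI: with _GLIBCXX_USE_CXX11_ABI=1, std::string is mangled as std::__cxx11::basic_string, so objects built under the two settings export different symbol names and cannot link against each other. A quick way to see this for yourself (a sketch, assuming g++ and nm are on your PATH; the file and function names are just for illustration):

```shell
# Compile the same function under both ABI settings and compare mangled names.
cat > abi_demo.cpp <<'EOF'
#include <string>
// The mangled name of this function depends on the std::string ABI.
void greet(const std::string &name) { (void)name; }
EOF
g++ -c -D_GLIBCXX_USE_CXX11_ABI=1 abi_demo.cpp -o abi1.o
g++ -c -D_GLIBCXX_USE_CXX11_ABI=0 abi_demo.cpp -o abi0.o
nm abi1.o | grep greet   # new ABI: symbol contains __cxx11
nm abi0.o | grep greet   # old ABI: no __cxx11 in the symbol
```

The two symbols differ, which is exactly why a _GLIBCXX_USE_CXX11_ABI=0 libtorch cannot satisfy references emitted by xeus-cling.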

Install xeus-cling into a conda env

(base) $ conda create -n cling
(base) $ conda activate cling
(cling) $ conda install xeus-cling -c conda-forge
(cling) $ conda deactivate 
(base) $ jupyter kernelspec install /opt/conda/envs/cling/share/jupyter/kernels/xcpp11 --sys-prefix
(base) $ jupyter kernelspec install /opt/conda/envs/cling/share/jupyter/kernels/xcpp14 --sys-prefix
(base) $ jupyter kernelspec install /opt/conda/envs/cling/share/jupyter/kernels/xcpp17 --sys-prefix

Refresh the Jupyter notebook page; the new C++11/14/17 kernels should appear in the launcher.

Jupyter C++ notebook

Let's assume that PyTorch is built from source:

!ls /workspace/pytorch/torch/lib
> libtorch.so
> ...

Add the following cells to your C++ notebook:

// Add the include paths
#pragma cling add_include_path("/workspace/pytorch/torch/include")
#pragma cling add_include_path("/workspace/pytorch/torch/include/torch/csrc/api/include")
// Add the library path
#pragma cling add_library_path("/workspace/pytorch/torch/lib")
// Load the libraries
#pragma cling load("libtorch")
#pragma cling load("libtorch_cpu")
#pragma cling load("libc10")
// For CUDA builds, additionally:
// #pragma cling load("libtorch_cuda")
// #pragma cling load("libc10_cuda")

If everything is set up correctly, the following code should work:

#include <iostream>
#include <ATen/ATen.h>

// at::CPU(...) is deprecated; use TensorOptions instead
auto opts = at::TensorOptions().dtype(at::kFloat).device(at::kCPU);
auto t = at::ones({3, 4}, opts);
std::cout << t << "\n";
