PyTorch now has an official TVM-based backend, [torch_tvm](https://github.com/pytorch/tvm). Usage is simple:

```
import torch_tvm
torch_tvm.enable()
```

That's it! PyTorch will then attempt to convert all operators it can to known Relay operators during its JIT compilation process.
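For a concrete illustration of how the backend gets exercised: compilation happens through TorchScript, so a traced or scripted function is what actually reaches TVM. The function below is a minimal sketch, not taken from the torch_tvm docs; which subgraphs actually get compiled depends on the operators torch_tvm supports in a given version.

```
import torch
import torch_tvm

torch_tvm.enable()

def scale_and_shift(a, b, c):
    return a * b + c

# Tracing yields a TorchScript graph; during JIT optimization,
# subgraphs made of supported operators are handed to Relay/TVM.
example = (torch.rand(8), torch.rand(8), torch.rand(8))
traced = torch.jit.trace(scale_and_shift, example)
out = traced(*example)
```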
```
# split
for i in range(107):
    a[i] = b[i] + c[i]

# after splitting i by a factor of 10 (a 10x10 main nest plus a 7-iteration tail):
for i0 in range(10):
    for i1 in range(10):
        a[i0 * 10 + i1] = b[i0 * 10 + i1] + c[i0 * 10 + i1]
for i0 in range(7):
    a[100 + i0] = b[100 + i0] + c[100 + i0]
```
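This is exactly what TVM's `split` scheduling primitive does. A minimal sketch of asking TVM for this transformation directly, assuming a TVM version with the `te` (tensor expression) API; the tensor names and split factor simply mirror the loops above:

```
import tvm
from tvm import te

n = 107
b = te.placeholder((n,), name="b")
c = te.placeholder((n,), name="c")
a = te.compute((n,), lambda i: b[i] + c[i], name="a")

s = te.create_schedule(a.op)
# Split the 107-element loop by a factor of 10; TVM generates the tail loop itself.
i0, i1 = s[a].split(a.op.axis[0], factor=10)
print(tvm.lower(s, [b, c, a], simple_mode=True))
```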
```
#include <type_traits>
#include <ATen/core/ivalue.h>

// all_true<bs...>: true iff every bool in the pack is true.
template<bool...> struct bool_pack;
template<bool... bs>
using all_true = std::is_same<bool_pack<bs..., true>, bool_pack<true, bs...>>;

template<class R, class... Ts>
using are_all_constructible = all_true<std::is_constructible<R, Ts>::value...>;

// True iff every Ts can be wrapped in a c10::IValue.
template<typename... Ts>
struct ivalue_constructible_tuple {
  constexpr static bool value = are_all_constructible<c10::IValue, Ts...>::value;
};
```
```
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
```
```
/*
How to run:
  PT_DIR=$(python -c 'import os, torch; print(os.path.dirname(os.path.realpath(torch.__file__)))')
  g++ -O3 test.cc -o test -I$PT_DIR/include -L$PT_DIR/lib -ltorch -lc10
  for((i=1;i<=100000;i*=2)); do LD_LIBRARY_PATH="$PT_DIR/lib:$LD_LIBRARY_PATH" ./test $i; done
*/
#include <ATen/core/Dict.h>
#include <chrono>
```
```
Size 1
  DictPtr                        : 335 usec
  std::unordered_map             : 442 usec
  std::unordered_map + std::move : 381 usec
Size 2
  DictPtr                        : 347 usec
  std::unordered_map             : 553 usec
  std::unordered_map + std::move : 459 usec
Size 4
  DictPtr                        : 553 usec
```
### Background |
```
diff --git a/include/tvm/runtime/c_runtime_api.h b/include/tvm/runtime/c_runtime_api.h
index 1a1a8da6..2e15b179 100644
--- a/include/tvm/runtime/c_runtime_api.h
+++ b/include/tvm/runtime/c_runtime_api.h
@@ -85,6 +85,7 @@ typedef enum {
   kStr = 11U,
   kBytes = 12U,
   kNDArrayContainer = 13U,
+  kManagedArrayHandle = 14U,
   // Extension codes for other frameworks to integrate TVM PackedFunc.
```
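The new type code presumably lets externally managed arrays (such as PyTorch tensors) cross TVM's PackedFunc boundary without a copy. A related, known-good path for sharing storage between the two runtimes is DLPack; the snippet below illustrates that general mechanism, and is not a claim about how this particular patch is wired up:

```
import torch
import tvm
from torch.utils import dlpack

t = torch.arange(8, dtype=torch.float32)
# Zero-copy handoff: the TVM NDArray borrows the PyTorch tensor's storage.
a = tvm.nd.from_dlpack(dlpack.to_dlpack(t))
```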
```
import tvm
from tvm import relay
import torch
import torch.nn.functional as F
import inspect
import ast
import numpy as np

_parsed_functions = dict()
```
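Presumably `_parsed_functions` caches Python functions that have already been parsed into an AST on their way to Relay. A hypothetical sketch of how such a cache might be filled; the decorator name and the lowering step are illustrative, not from the original source:

```
def parse_function(fn):
    """Parse fn's source into an AST once and cache it; return fn unchanged.
    (Hypothetical helper; a real converter would lower the AST to Relay IR.)"""
    if fn not in _parsed_functions:
        # inspect.getsource requires fn to be defined in a file, not a REPL.
        _parsed_functions[fn] = ast.parse(inspect.getsource(fn))
    return fn

@parse_function  # hypothetical usage: registers and parses `dense`
def dense(x, w):
    return F.relu(x @ w)
```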