module attributes {torch.debug_module_name = "GraphModule"} {
  func private @__torch__.torch.fx.graph_module.___torch_mangle_2.GraphModule.forward(%arg0: !torch.nn.Module<"__torch__.torch.fx.graph_module.___torch_mangle_2.GraphModule">, %arg1: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg2: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg3: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg4: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg5: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg6: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg7: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg8: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg9: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg10: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg11: !torch.tensor {torch.type_bound = !torch.vtensor<[768],f32>}, %arg12: !torch.tensor {to

module attributes {torch.debug_module_name = "GraphModule"} {
  func @forward(%arg0: !torch.vtensor<[768],f32>, %arg1: !torch.vtensor<[768],f32>, %arg2: !torch.vtensor<[768],f32>, %arg3: !torch.vtensor<[768],f32>, %arg4: !torch.vtensor<[768],f32>, %arg5: !torch.vtensor<[768],f32>, %arg6: !torch.vtensor<[768],f32>, %arg7: !torch.vtensor<[768],f32>, %arg8: !torch.vtensor<[768],f32>, %arg9: !torch.vtensor<[768],f32>, %arg10: !torch.vtensor<[768],f32>, %arg11: !torch.vtensor<[768],f32>, %arg12: !torch.vtensor<[768],f32>, %arg13: !torch.vtensor<[768],f32>, %arg14: !torch.vtensor<[768],f32>, %arg15: !torch.vtensor<[768],f32>, %arg16: !torch.vtensor<[768],f32>, %arg17: !torch.vtensor<[768],f32>, %arg18: !torch.vtensor<[768],f32>, %arg19: !torch.vtensor<[768],f32>, %arg20: !torch.vtensor<[768],f32>, %arg21: !torch.vtensor<[768],f32>, %arg22: !torch.vtensor<[768],f32>, %arg23: !torch.vtensor<[768],f32>, %arg24: !torch.vtensor<[768],f32>, %arg25: !torch.vtensor<[768],f32>, %arg26: !torch.vtensor<[768],f32>, %arg27: !to

from iree import runtime as ireert
from iree.compiler import tf as tfc
import sys
from absl import app
import numpy as np
import os
import tempfile
import tensorflow as tf

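These imports follow IREE's TensorFlow sample flow, but the preview is cut off before any of them are used. As a rough sketch of how that flow typically continues (hypothetical, not this gist's code; CPU backend/driver names and the VmModule API differ between IREE releases):

# Hypothetical continuation: compile a small tf.Module with the IREE TF
# frontend and run it through the IREE runtime. Backend/driver names are
# version-dependent assumptions ("dylib-llvm-aot"/"dylib" in this era,
# "llvm-cpu"/"local-task" in newer releases).
class SimpleModule(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([4], tf.float32)])
    def scale(self, x):
        return x * 2.0

compiled_flatbuffer = tfc.compile_module(
    SimpleModule(), exported_names=["scale"], target_backends=["dylib-llvm-aot"])

config = ireert.Config("dylib")
ctx = ireert.SystemContext(config=config)
ctx.add_vm_module(ireert.VmModule.from_flatbuffer(compiled_flatbuffer))
print(ctx.modules.module.scale(np.arange(4, dtype=np.float32)))
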
#map0 = affine_map<(d0, d1, d2) -> (d0, d1, d2)>
#map1 = affine_map<(d0, d1, d2) -> (d0, d1)>
#map2 = affine_map<(d0, d1) -> (d0, d1)>
#map3 = affine_map<(d0, d1) -> ()>
#map4 = affine_map<(d0, d1) -> (d0)>
#map5 = affine_map<(d0, d1) -> (d0, 0)>
#map6 = affine_map<(d0, d1) -> (d1, d0)>
#map7 = affine_map<(d0, d1) -> (0, d1)>
#map8 = affine_map<(d0, d1, d2) -> (d0, d1, 0)>
#map9 = affine_map<(d0, d1, d2) -> (d2)>

/home/prashant/dSHARK/shark.venv/lib/python3.9/site-packages/bert_pytorch/model/attention/single.py:16: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  / math.sqrt(query.size(-1))
/home/prashant/dSHARK/shark.venv/lib/python3.9/site-packages/torch/jit/_trace.py:983: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Tensor-likes are not close!
Mismatched elements: 97297 / 98304 (99.0%)
Greatest absolute difference: 24.81045150756836 at index (0, 71, 4) (up to 1e-05 allowed)
Greatest relative difference: inf at index (0, 0, 5) (up to 1e-05 allowed)
  _check_trace(

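The second warning comes from torch.jit.trace re-running the traced graph and comparing it against eager execution; with 99% of elements mismatching, the trace diverges badly even on the tracing input, often a sign of nondeterminism (e.g. dropout still active in training mode) or of data-dependent values frozen as constants like the math.sqrt above. When such a mismatch is understood and accepted, the verification itself can be silenced with check_trace=False; a minimal, self-contained illustration, not the gist's own invocation:

import torch

class Toy(torch.nn.Module):
    def forward(self, x):
        # Data-dependent randomness guarantees the post-trace check would fail.
        return x * torch.rand_like(x)

# check_trace=False skips the comparison that emits "Tensor-likes are not
# close!"; any genuine accuracy gap still has to be diagnosed separately.
traced = torch.jit.trace(Toy(), torch.ones(2, 3), check_trace=False)
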
import torch
from shark.shark_runner import SharkInference
from bert_pytorch import BERT

torch.manual_seed(0)

class BERT_torch(torch.nn.Module):
    def __init__(self):
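The preview is truncated mid-class. A plausible completion, offered only as a sketch: bert_pytorch's BERT is built from a vocabulary size plus hidden/layer/head counts, and hidden=768 lines up with the [768] tensors in the IR dumps above, but the actual hyperparameters used in the gist are unknown and assumed below. The SharkInference call that presumably follows is not sketched, since its signature is not visible in the preview.

# Hypothetical completion of the truncated wrapper above; the constructor
# arguments are assumptions, not values taken from the gist.
class BERT_torch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # hidden=768 matches the !torch.vtensor<[768],f32> arguments above.
        self.bert = BERT(vocab_size=30522, hidden=768, n_layers=12, attn_heads=12)

    def forward(self, tokens, segment_labels):
        return self.bert(tokens, segment_labels)
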
graph():
    %params_1 : [#users=4] = placeholder[target=params_1]
    %params_2 : [#users=4] = placeholder[target=params_2]
    %optim_state_1 : [#users=0] = placeholder[target=optim_state_1]
    %optim_state_2 : [#users=0] = placeholder[target=optim_state_2]
    %optim_state_3 : [#users=0] = placeholder[target=optim_state_3]
    %optim_state_4 : [#users=0] = placeholder[target=optim_state_4]
    %optim_state_5 : [#users=0] = placeholder[target=optim_state_5]
    %optim_state_6 : [#users=0] = placeholder[target=optim_state_6]
    %optim_state_7 : [#users=0] = placeholder[target=optim_state_7]

import torch
from functorch.compile import aot_function, nop
from functorch import make_fx
from torch.nn.utils import _stateless
from torchvision.models import resnet18

class Foo(torch.nn.Module):
    def __init__(self):
        super().__init__()
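This snippet is cut off too, but together with the params_*/optim_state_* placeholders in the graph() dump above it points at the usual trick for capturing a whole training step as one FX graph: parameters (and optimizer state) are passed in as explicit inputs via _stateless.functional_call and the step is traced with make_fx. A sketch under those assumptions, with illustrative names rather than the gist's own code; the 0.01 learning rate mirrors the prim::Constant[value=-0.01] in the TorchScript dump below.

import torch
from functorch import make_fx
from torch.nn.utils import _stateless

# Illustrative two-parameter module, mirroring the params_1/params_2
# placeholders in the FX graph above.
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(3, 3))
        self.b = torch.nn.Parameter(torch.zeros(3))

    def forward(self, x):
        return (x @ self.w + self.b).sum()

mod = Net()

def train_step(params, args):
    # Run the module with externally supplied parameters, differentiate the
    # loss, and apply a hand-written SGD update, so the whole step becomes a
    # pure function of its inputs that make_fx can record as a single graph.
    loss = _stateless.functional_call(mod, params, args)
    grads = torch.autograd.grad(loss, list(params.values()))
    return [p - 0.01 * g for p, g in zip(params.values(), grads)]

gm = make_fx(train_step)(dict(mod.named_parameters()), (torch.randn(2, 3),))
print(gm.graph)  # a graph() dump in the same style as the one above
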
graph(%self : __torch__.torch.fx.graph_module.f,
      %params_1.1 : Tensor,
      %params_2.1 : Tensor,
      %args_1.1 : Tensor):
  %90 : float = prim::Constant[value=-0.01]() # <eval_with_key>.2:26:59
  %57 : bool = prim::Constant[value=1]() # <eval_with_key>.2:17:46
  %26 : bool = prim::Constant[value=0]() # <eval_with_key>.2:9:132
  %115 : Device = prim::Constant[value="cpu"]()
  %17 : NoneType = prim::Constant()
  %23 : int = prim::Constant[value=6]() # <eval_with_key>.2:9:85

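This TorchScript IR is what scripting a captured FX GraphModule yields: the __torch__.torch.fx.graph_module.f qualified name and the <eval_with_key> source references both point at FX-generated code, and the -0.01 constant is the negated 0.01 learning rate from the update step. A tiny, self-contained way to produce a dump of this shape (not the gist's actual pipeline), relevant here because torch-mlir's importer consumes TorchScript modules like this one:

import torch
import torch.fx

# Symbolically trace a small function into an FX GraphModule, script it,
# and print the resulting TorchScript graph.
def f(params_1, params_2, args_1):
    return params_1 - 0.01 * (params_2 + args_1)

gm = torch.fx.symbolic_trace(f)
scripted = torch.jit.script(gm)
print(scripted.graph)  # graph(%self : __torch__.torch.fx.graph_module..., ...)
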