@AmosLewis
Last active September 14, 2022 17:22
debug print torch_mlir_lockstep_tensor

Running python tank/bloom_model.py with shark/torch_mlir_lockstep_tensor.py produces the lockstep accuracy warnings and the final crash captured below.
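For context, here is a minimal sketch of the kind of driver tank/bloom_model.py appears to be. Only the wrapper's forward() (line 23) and the model(eager_input_batch) call (line 37) are visible in the traceback below; the checkpoint name, the tokenizer setup, and the TorchMLIRLockstepTensor class name and import path are assumptions, not taken from the actual script.

```python
# Hedged sketch, not the actual tank/bloom_model.py. Assumptions are marked.
import torch
from transformers import BloomForSequenceClassification, BloomTokenizerFast

# Assumed class name / import path for SHARK's lockstep wrapper tensor.
from shark.torch_mlir_lockstep_tensor import TorchMLIRLockstepTensor


class BloomModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Assumed checkpoint; the 1024/4096 sizes in the warnings below match bloom-560m.
        self.model = BloomForSequenceClassification.from_pretrained(
            "bigscience/bloom-560m"
        )

    def forward(self, tokens):
        # Matches tank/bloom_model.py:23 in the traceback below.
        return self.model.forward(tokens)[0]


tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-560m")
input_ids = tokenizer(
    "some example text", return_tensors="pt", padding="max_length", max_length=128
).input_ids  # shape [1, 128], matching the shapes in the warnings below

model = BloomModule()
# Wrap the input so every aten op dispatches through the lockstep tensor, which
# runs each op both through torch-mlir and natively and compares the results.
eager_input_batch = TorchMLIRLockstepTensor(input_ids)
output = model(eager_input_batch)  # fails as shown in the traceback below
```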
......
......
Mismatched elements: 131059 / 131072 (100%)
Max absolute difference: 2.3001497
Max relative difference: 2833.5105
x: array([[[-0.013242, -0.013242, -0.013242, ..., -0.013242, -0.013242,
-0.013242],
[-0.013242, -0.013242, -0.013242, ..., -0.013242, -0.013242,...
y: array([[[-0.013242, 0.050285, -0.015645, ..., 0.380386, 0.052017,
0.580691],
[-0.013242, 0.050285, -0.015645, ..., 0.380386, 0.052016,...*; Dispatched function name: *aten._reshape_alias.default*; Dispatched function args: *[torch.Size([1, 128, 16, 64]), [1, 128, 1024], [0, 0, 0]]*; Dispatched function kwargs: *[]*;
warnings.warn(
Target triple found:x86_64-linux-gnu
/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py:177: UserWarning: Lockstep accuracy verification failed with error: *
Not equal to tolerance rtol=0.0001, atol=1e-05
Mismatched elements: 131059 / 131072 (100%)
Max absolute difference: 2.3001497
Max relative difference: 2833.5105
x: array([[-0.013242, -0.013242, -0.013242, ..., -0.013242, -0.013242,
-0.013242],
[-0.013242, -0.013242, -0.013242, ..., -0.013242, -0.013242,...
y: array([[-0.013242, 0.050285, -0.015645, ..., 0.380386, 0.052017,
0.580691],
[-0.013242, 0.050285, -0.015645, ..., 0.380386, 0.052016,...*; Dispatched function name: *aten._reshape_alias.default*; Dispatched function args: *[torch.Size([1, 128, 1024]), [128, 1024], [0, 0]]*; Dispatched function kwargs: *[]*;
warnings.warn(
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py:177: UserWarning: Lockstep accuracy verification failed with error: *
Not equal to tolerance rtol=0.0001, atol=1e-05
Mismatched elements: 524145 / 524288 (100%)
Max absolute difference: 29.939627
Max relative difference: 571538.06
x: array([[-0.136265, -0.136265, -0.136265, ..., -0.136265, -0.136265,
-0.136265],
[-0.136265, -0.136265, -0.136265, ..., -0.136265, -0.136265,...
y: array([[-0.136265, -0.023468, -0.122506, ..., -0.161708, -0.167916,
-0.079876],
[-0.136265, -0.023469, -0.122506, ..., -0.161708, -0.167916,...*; Dispatched function name: *aten._reshape_alias.default*; Dispatched function args: *[torch.Size([1, 128, 4096]), [128, 4096], [0, 0]]*; Dispatched function kwargs: *[]*;
warnings.warn(
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
Target triple found:x86_64-linux-gnu
/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py:177: UserWarning: Lockstep accuracy verification failed with error: *
Not equal to tolerance rtol=0.0001, atol=1e-05
Mismatched elements: 1 / 1 (100%)
Max absolute difference: 127
Max relative difference: 127.
x: array([128])
y: array([ True])*; Dispatched function name: *aten.sum.dim_IntList*; Dispatched function args: *[torch.Size([1, 128]), [-1]]*; Dispatched function kwargs: *[]*;
warnings.warn(
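The last warning above is the interesting one: summing the torch.ne(input_ids, pad_token_id) mask over the last dimension should count the 128 non-pad tokens, but one of the two execution paths returned a single bool instead. The report format matches numpy's assert_allclose, so the mismatch can be reproduced directly; the values are taken from the log, while how the lockstep check invokes assert_allclose is an assumption.

```python
import numpy as np

x = np.array([128])    # expected aten.sum.dim_IntList result over a 1x128 mask
y = np.array([True])   # what the other path produced: a bool, not a count
np.testing.assert_allclose(x, y, rtol=1e-4, atol=1e-5)
# AssertionError: Not equal to tolerance rtol=0.0001, atol=1e-05
# Mismatched elements: 1 / 1 (100%)
# Max absolute difference: 127
```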
Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py", line 130, in __torch_dispatch__
    eager_module = build_mlir_module(func, normalized_kwargs)
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/eager_mode/ir_building.py", line 343, in build_mlir_module
    assert len(annotations) == len(
AssertionError: Number of annotations and number of graph inputs differs.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/bloom_model.py", line 37, in <module>
    output = model(eager_input_batch) # RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding)
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/bloom_model.py", line 23, in forward
    return self.model.forward(tokens)[0]
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/transformers/models/bloom/modeling_bloom.py", line 955, in forward
    sequence_lengths = torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/_tensor.py", line 1270, in __torch_function__
    ret = func(*args, **kwargs)
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py", line 198, in __torch_dispatch__
    out = func(*unwrapped_args, **unwrapped_kwargs)
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/_ops.py", line 60, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
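The final RuntimeError follows directly from the bool result above: building an MLIR module for the subtraction fails the annotations assertion, the lockstep tensor falls back to the native op at torch_mlir_lockstep_tensor.py:198, and native PyTorch refuses to subtract 1 from a bool tensor. A plain-PyTorch reproduction (the tensors are made up; only the failing pattern from modeling_bloom.py:955 is taken from the log):

```python
import torch

input_ids = torch.tensor([[5, 7, 0, 0]])  # pretend pad_token_id == 0
mask = torch.ne(input_ids, 0)             # bool mask of non-pad positions

# What stock PyTorch does: sum() promotes the mask to int64, so `- 1` works.
print(mask.sum(-1) - 1)                   # tensor([1])

# What the lockstep path effectively handed back: a bool "sum", which makes
# the subsequent `- 1` raise the same RuntimeError as in the traceback above.
try:
    mask.sum(-1).to(torch.bool) - 1
except RuntimeError as e:
    print(e)  # Subtraction, the `-` operator, with a bool tensor is not supported...
```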