debug ReduceSumDimInt
First attempt: change Reduction.cpp#L409 to convert the reduction result to a 64-bit integer.

https://github.com/llvm/torch-mlir/blob/6c1dea1c0ff22efb7119f6453655b8b38b52e506/lib/Conversion/TorchToLinalg/Reduction.cpp#L409
->
result = convertScalarToDtype(rewriter, loc, result, mlir::IntegerType::get(op->getContext(), 64));
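The hard-coded 64-bit target presumably mirrors PyTorch's eager-mode promotion rule, where summing a bool tensor yields an int64 result. A minimal eager-mode check (tensor values made up for illustration):

import torch

# torch.ne produces a torch.bool mask; summing it promotes to int64 in eager
# PyTorch, which is what the 64-bit conversion above is trying to match.
mask = torch.ne(torch.tensor([[1, 3, 3]]), 3)
print(mask.dtype)          # torch.bool
print(mask.sum(-1).dtype)  # torch.int64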
➜ SHARK git:(bloom) ✗ python tank/bloom_model.py
Some weights of BloomForSequenceClassification were not initialized from the model checkpoint at bigscience/bloom-560m and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/jit/_check.py:181: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
  warnings.warn("The TorchScript type system doesn't support "
/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/jit/_trace.py:744: UserWarning: The input to trace is already a ScriptModule, tracing it is a no-op. Returning the object as is.
  warnings.warn(
Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/bloom_model.py", line 95, in <module>
    module = torch_mlir.compile(
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/__init__.py", line 273, in compile
    run_pipeline_with_repro_report(
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/compiler_utils.py", line 73, in run_pipeline_with_repro_report
    raise TorchMlirCompilerError(trimmed_message) from None
torch_mlir.compiler_utils.TorchMlirCompilerError: Lowering Torch Backend IR -> Linalg-on-Tensors Backend IR failed with the following diagnostics:
error: 'linalg.yield' op type of yield operand 1 ('i64') doesn't match the element type of the enclosing linalg.generic op ('i1')
note: see current operation: "linalg.yield"(%31762) : (i64) -> ()

Error can be reproduced with:
$ torch-mlir-opt -pass-pipeline='torch-backend-to-linalg-on-tensors-backend-pipeline' /tmp/_lambda.mlir
Add '-mlir-print-ir-after-all -mlir-disable-threading' to get the IR dump for debugging purpose.
The yield type must match the element type of the enclosing linalg.generic op ('i1'), so the next attempt converts the result to a 1-bit integer instead.

https://github.com/llvm/torch-mlir/blob/6c1dea1c0ff22efb7119f6453655b8b38b52e506/lib/Conversion/TorchToLinalg/Reduction.cpp#L409
->
result = convertScalarToDtype(rewriter, loc, result, mlir::IntegerType::get(op->getContext(), 1));
➜ SHARK git:(bloom) ✗ python tank/bloom_model.py
......
......
 x: array([128])
 y: array([ True])*; Dispatched function name: *aten.sum.dim_IntList*; Dispatched function args: *[torch.Size([1, 128]), [-1]]*; Dispatched function kwargs: *[]*;
  warnings.warn(
Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py", line 130, in __torch_dispatch__
    eager_module = build_mlir_module(func, normalized_kwargs)
  File "/home/chi/src/ubuntu20/shark/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir/eager_mode/ir_building.py", line 343, in build_mlir_module
    assert len(annotations) == len(
AssertionError: Number of annotations and number of graph inputs differs.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/bloom_model.py", line 37, in <module>
    output = model(eager_input_batch)  # RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding)
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/chi/src/ubuntu20/shark/SHARK/tank/bloom_model.py", line 23, in forward
    return self.model.forward(tokens)[0]
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/transformers/models/bloom/modeling_bloom.py", line 955, in forward
    sequence_lengths = torch.ne(input_ids, self.config.pad_token_id).sum(-1) - 1
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/_tensor.py", line 1270, in __torch_function__
    ret = func(*args, **kwargs)
  File "/home/chi/src/ubuntu20/shark/SHARK/shark/torch_mlir_lockstep_tensor.py", line 198, in __torch_dispatch__
    out = func(*unwrapped_args, **unwrapped_kwargs)
  File "/home/chi/src/ubuntu20/shark/SHARK/shark.venv/lib/python3.10/site-packages/torch/_ops.py", line 60, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
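The final RuntimeError suggests the dispatched sum now hands back a bool tensor (consistent with the i1 conversion above), whereas stock eager PyTorch promotes the boolean reduction to int64 before the `- 1`. A minimal sketch of the pattern from modeling_bloom.py line 955, with a made-up input and pad_token_id value:

import torch

input_ids = torch.tensor([[15, 27, 3]])
pad_token_id = 3  # illustrative value for this sketch

# Eager PyTorch: the bool mask sums into an int64 tensor, so the `- 1` is fine.
sequence_lengths = torch.ne(input_ids, pad_token_id).sum(-1) - 1
print(sequence_lengths, sequence_lengths.dtype)  # tensor([1]) torch.int64

# If the sum instead produces a bool tensor, the subtraction reproduces the error above.
bool_sum = torch.ne(input_ids, pad_token_id).sum(-1).to(torch.bool)
try:
    bool_sum - 1
except RuntimeError as e:
    print(e)  # Subtraction, the `-` operator, with a bool tensor is not supported. ...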