// empty-memory_format_test.mlir
func.func private @forward(%arg0: !torch.vtensor<[20,100,35,45],f32>) -> !torch.vtensor<[20,100,35,45],f32> {
  %int0 = torch.constant.int 0
  %0 = torch.prim.ListConstruct %int0 : (!torch.int) -> !torch.list<int>
  %int0_0 = torch.constant.int 0
  %int0_1 = torch.constant.int 0
  %cpu = torch.constant.device "cpu"
  %none = torch.constant.none
  %none_2 = torch.constant.none
  // dtype constant 0 selects uint8, so the result element type is ui8.
  %1 = torch.aten.empty.memory_format %0, %int0_0, %int0_1, %cpu, %none, %none_2 : !torch.list<int>, !torch.int, !torch.int, !torch.Device, !torch.none, !torch.none -> !torch.vtensor<[0],ui8>
  return %arg0 : !torch.vtensor<[20,100,35,45],f32>
}
This issue was found in nod-ai/SHARK-ModelDev#110. The failing op comes from torch.empty((0)), and the failure reproduces with:

torch-mlir-opt empty-memory_format_test.mlir -convert-torch-to-linalg --debug
The bug is in the tensor.cast emitted by the conversion: casting ui8 to i8 is not supported, which is probably a mis-use of the tensor.cast input/result element types. By the definition of the tensor.cast op, the cast may only refine or erase shape information; the source and result tensors must have the same element type (see the sketch below).
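For reference, a minimal standalone sketch of that verifier rule (these lines are not from the gist; %src and %src2 are placeholder values):

// OK: only the shape information changes; the element type i8 is identical on both sides.
%ok = tensor.cast %src : tensor<0xi8> to tensor<?xi8>
// Invalid: the element type changes (ui8 vs. i8), which is the kind of cast the failing lowering tries to build.
%bad = tensor.cast %src2 : tensor<0xui8> to tensor<0xi8>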
As we can see in the input MLIR file, the result type should be ui8. Looking at the code, the wrong i8 arises from the typeConverter, which wrongly converts ui8 to i8. The relevant code is here: https://github.com/llvm/torch-mlir/blob/d50d3aa5e77117fbb7078c25831ea2913a1c5566/lib/Conversion/TorchToLinalg/TensorConstructors.cpp#L226C15-L226C15
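To make the mismatch concrete, this is the type rewriting being described (a sketch restating the report above, not captured output of the pass):

// Torch-dialect result type of torch.aten.empty.memory_format in the gist:
!torch.vtensor<[0],ui8>
// What the typeConverter produces for it, with the signedness dropped:
tensor<0xi8>

Reconciling these two element types is where the unsupported ui8-to-i8 cast comes from.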