@AmosLewis
Last active November 27, 2023 22:18
func.func private @forward(%arg0: !torch.vtensor<[20,100,35,45],f32>) -> !torch.vtensor<[20,100,35,45],f32> {
%int0 = torch.constant.int 0
%0 = torch.prim.ListConstruct %int0 : (!torch.int) -> !torch.list<int>
%int0_0 = torch.constant.int 0
%int0_1 = torch.constant.int 0
%cpu = torch.constant.device "cpu"
%none = torch.constant.none
%none_2 = torch.constant.none
%1 = torch.aten.empty.memory_format %0, %int0_0, %int0_1, %cpu, %none, %none_2 : !torch.list<int>, !torch.int, !torch.int, !torch.Device, !torch.none, !torch.none -> !torch.vtensor<[0],ui8>
return %arg0 : !torch.vtensor<[20,100,35,45],f32>
}
AmosLewis commented Nov 21, 2023

This issue was found in nod-ai/SHARK-ModelDev#110. The failing op comes from:
torch.empty((0))
torch-mlir-opt empty-memory_format_test.mlir -convert-torch-to-linalg --debug
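For context, here is a minimal PyTorch-level reproduction (a sketch, assuming PyTorch is installed). In the IR above, the dtype operand of empty.memory_format is the constant 0, and PyTorch's ScalarType enum value 0 is Byte, i.e. torch.uint8 — which is why the op's result type is !torch.vtensor<[0],ui8>:

```python
import torch

# The dtype operand in the IR is 0; ScalarType 0 is Byte (torch.uint8),
# so this is the eager-mode equivalent of the failing op.
t = torch.empty((0,), dtype=torch.uint8)
print(t.dtype, t.shape)  # torch.uint8 torch.Size([0])
```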

// *** IR Dump After Pattern Application ***
mlir-asm-printer: Verifying operation: func.func
ImplicitTypeIDRegistry::lookupOrInsert(mlir::OpTrait::OneTypedResult<mlir::TensorType>::Impl<Empty>)
'tensor.cast' op operand type 'tensor<?xui8>' and result type 'tensor<0xi8>' are cast incompatible
mlir-asm-printer: 'func.func' failed to verify and will be printed in generic form
"func.func"() <{function_type = (!torch.vtensor<[20,100,35,45],f32>) -> !torch.vtensor<[20,100,35,45],f32>, sym_name = "forward", sym_visibility = "private"}> ({
^bb0(%arg0: !torch.vtensor<[20,100,35,45],f32>):
  %0 = "torch.constant.int"() <{value = 0 : i64}> : () -> !torch.int
  %1 = "torch.prim.ListConstruct"(%0) : (!torch.int) -> !torch.list<int>
  %2 = "torch.constant.int"() <{value = 0 : i64}> : () -> !torch.int
  %3 = "builtin.unrealized_conversion_cast"(%2) : (!torch.int) -> i64
  %4 = "torch.constant.int"() <{value = 0 : i64}> : () -> !torch.int
  %5 = "builtin.unrealized_conversion_cast"(%4) : (!torch.int) -> i64
  %6 = "torch.constant.device"() <{value = "cpu"}> : () -> !torch.Device
  %7 = "torch.constant.none"() : () -> !torch.none
  %8 = "torch.constant.none"() : () -> !torch.none
  %9 = "torch_c.to_i64"(%0) : (!torch.int) -> i64
  %10 = "arith.index_cast"(%9) : (i64) -> index
  %11 = "tensor.empty"(%10) : (index) -> tensor<?xui8>
  %12 = "tensor.cast"(%11) : (tensor<?xui8>) -> tensor<0xi8>
  %13 = "torch.aten.empty.memory_format"(%1, %2, %4, %6, %7, %8) : (!torch.list<int>, !torch.int, !torch.int, !torch.Device, !torch.none, !torch.none) -> !torch.vtensor<[0],ui8>
  "func.return"(%arg0) : (!torch.vtensor<[20,100,35,45],f32>) -> ()
}) : () -> ()

The bug is that a tensor.cast from ui8 to i8 is not supported, which probably stems from a mismatch between the tensor.cast input and result element types.
In the definition of the tensor.cast op:

Convert a tensor from one type to an equivalent type without changing any
data elements. The source and destination types must both be tensor types
with the same element type.

As we can see in the input MLIR file, the result element type should be ui8. Looking at the code,
the wrong i8 type comes from the typeConverter, which incorrectly converts ui8 to i8:

    op.getType().dump();// !torch.vtensor<[0],ui8>
    auto resultType =
        typeConverter->convertType(op.getType()).cast<RankedTensorType>();
    resultType.dump();// tensor<0xi8>
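The distinction matters semantically, not just syntactically: an unsigned byte and a signed byte reinterpret the same bit pattern as different values, so ui8 and i8 are not "equivalent types" in the sense the tensor.cast documentation requires. A stdlib-only illustration of the difference (an analogy, not torch-mlir code):

```python
import struct

# The byte 0xC8 holds the value 200 when read as unsigned (ui8)...
unsigned = bytes([200])[0]
# ...but reinterpreting the same bits as signed (i8) gives -56.
(signed,) = struct.unpack("b", bytes([200]))
print(unsigned, signed)  # 200 -56
```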

The relevant code is here: https://github.com/llvm/torch-mlir/blob/d50d3aa5e77117fbb7078c25831ea2913a1c5566/lib/Conversion/TorchToLinalg/TensorConstructors.cpp#L226C15-L226C15
