@bjacob
Created November 4, 2021 18:47
// -----// IR Dump After mlir::iree_compiler::Shape::(anonymous namespace)::ExpandFunctionDynamicDimsPass //----- //
module {
  func @f(%arg0: !hal.buffer_view) -> !hal.buffer_view attributes {iree.abi.stub} {
    %c5 = arith.constant 5 : index
    %c3 = arith.constant 3 : index
    %c0_i8 = arith.constant 0 : i8
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %0 = hal.tensor.cast %arg0 : !hal.buffer_view -> tensor<1x1xi8>
    %1 = linalg.pad_tensor %0 low[0, 0] high[%c5, %c3] {
    ^bb0(%arg1: index, %arg2: index):  // no predecessors
      linalg.yield %c0_i8 : i8
    } : tensor<1x1xi8> to tensor<?x?xi8>
    %2 = util.do_not_optimize(%1) : tensor<?x?xi8>
    %3 = tensor.dim %2, %c0 : tensor<?x?xi8>
    %4 = tensor.dim %2, %c1 : tensor<?x?xi8>
    %5 = hal.tensor.cast %2 : tensor<?x?xi8>{%3, %4} -> !hal.buffer_view
    return %5 : !hal.buffer_view
  }
}
/tmp/a.mlir:10:8: error: 'util.do_not_optimize' op must have same operand and result types, but they differ at index 0
%1 = util.do_not_optimize(%0) : tensor<?x?xi8>
^
/tmp/a.mlir:1:1: note: called from
func @f(%arg0: tensor<1x1xi8>) -> tensor<?x?xi8> {
^
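For reference, the /tmp/a.mlir input referenced by the error above is not included in this gist; a minimal sketch of what it likely looks like, reconstructed from the error notes and the first dump (the exact line numbering and constants in the real file are assumptions), is:

// Hypothetical reconstruction of /tmp/a.mlir; not the exact file from the error above.
func @f(%arg0: tensor<1x1xi8>) -> tensor<?x?xi8> {
  %c0_i8 = arith.constant 0 : i8
  %c5 = arith.constant 5 : index
  %c3 = arith.constant 3 : index
  // Pad the static 1x1 input up to a dynamically shaped tensor.
  %0 = linalg.pad_tensor %arg0 low[0, 0] high[%c5, %c3] {
  ^bb0(%arg1: index, %arg2: index):
    linalg.yield %c0_i8 : i8
  } : tensor<1x1xi8> to tensor<?x?xi8>
  // The verifier reports this op's operand and result types as differing
  // once the pad has been rewritten to a statically shaped tensor.
  %1 = util.do_not_optimize(%0) : tensor<?x?xi8>
  return %1 : tensor<?x?xi8>
}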
// -----// IR Dump After PadTensorToSubTensorInsert //----- //
module {
  func @f(%arg0: !hal.buffer_view) -> !hal.buffer_view attributes {iree.abi.stub} {
    %c0_i8 = arith.constant 0 : i8
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %0 = hal.tensor.cast %arg0 : !hal.buffer_view -> tensor<1x1xi8>
    %1 = linalg.init_tensor [6, 4] : tensor<6x4xi8>
    %2 = linalg.fill(%c0_i8, %1) : i8, tensor<6x4xi8> -> tensor<6x4xi8>
    %3 = tensor.insert_slice %0 into %2[0, 0] [%c1, %c1] [1, 1] : tensor<1x1xi8> into tensor<6x4xi8>
    %4 = util.do_not_optimize(%3) : tensor<6x4xi8>
    %5 = tensor.dim %4, %c0 : tensor<?x?xi8>
    %6 = tensor.dim %4, %c1 : tensor<?x?xi8>
    %7 = hal.tensor.cast %4 : tensor<?x?xi8>{%5, %6} -> !hal.buffer_view
    return %7 : !hal.buffer_view
  }
}