func.func @torch.aten.softmax.int$cst_dim(%t: !torch.vtensor<[2,3],f32>) -> !torch.vtensor<[2,3],f32> {
%none = torch.constant.none
%dim = torch.constant.int 1
%ret = torch.aten.softmax.int %t, %dim, %none : !torch.vtensor<[2,3],f32>, !torch.int, !torch.none -> !torch.vtensor<[2,3],f32>
return %ret : !torch.vtensor<[2,3],f32>
}

(mlir_venv) nod% torch-mlir-opt -convert-torch-to-tosa    /tmp/softmax.mlir  | externals/llvm-project/mlir/utils/generate-test-checks.py
 
// NOTE: Assertions have been autogenerated by utils/generate-test-checks.py

// The script is designed to make adding checks to
// a test case fast, it is *not* designed to be authoritative
// about what constitutes a good test! The CHECK should be
// minimized and named to reflect the test intent.



// CHECK-LABEL:   func.func @torch.aten.softmax.int$cst_dim(
// CHECK-SAME:                                              %[[VAL_0:.*]]: !torch.vtensor<[2,3],f32>) -> !torch.vtensor<[2,3],f32> {
// CHECK:           %[[VAL_1:.*]] = torch_c.to_builtin_tensor %[[VAL_0]] : !torch.vtensor<[2,3],f32> -> tensor<2x3xf32>
// CHECK:           %[[VAL_2:.*]] = torch.constant.none
// CHECK:           %[[VAL_3:.*]] = torch.constant.int 1
// CHECK:           %[[VAL_4:.*]] = "tosa.custom"(%[[VAL_1]]) {dim = 1 : i64, identifier = "softmax"} : (tensor<2x3xf32>) -> tensor<2x3xf32>
// CHECK:           %[[VAL_5:.*]] = torch_c.from_builtin_tensor %[[VAL_4]] : tensor<2x3xf32> -> !torch.vtensor<[2,3],f32>
// CHECK:           return %[[VAL_5]] : !torch.vtensor<[2,3],f32>
// CHECK:         }
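
To turn these autogenerated assertions into a lit test, the CHECK lines would normally be pasted into a file under test/Conversion/TorchToTosa/ together with a RUN line. A minimal sketch, assuming the usual torch-mlir FileCheck conventions (the exact flags and file placement are assumptions, not part of this gist):

// RUN: torch-mlir-opt <%s -convert-torch-to-tosa -split-input-file -verify-diagnostics | FileCheck %s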


(mlir_venv) nod% torch-mlir-opt -torch-backend-to-tosa-backend-pipeline    /tmp/softmax.mlir  | externals/llvm-project/mlir/utils/generate-test-checks.py

// NOTE: Assertions have been autogenerated by utils/generate-test-checks.py

// The script is designed to make adding checks to
// a test case fast, it is *not* designed to be authoritative
// about what constitutes a good test! The CHECK should be
// minimized and named to reflect the test intent.



// CHECK-LABEL:   func.func @torch.aten.softmax.int$cst_dim(
// CHECK-SAME:                                              %[[VAL_0:.*]]: tensor<2x3xf32>) -> tensor<2x3xf32> {
// CHECK:           %[[VAL_1:.*]] = "tosa.custom"(%[[VAL_0]]) {dim = 1 : i64, identifier = "softmax"} : (tensor<2x3xf32>) -> tensor<2x3xf32>
// CHECK:           return %[[VAL_1]] : tensor<2x3xf32>
// CHECK:         }
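
Stripped of the FileCheck capture syntax, the IR produced by the full backend pipeline (reconstructed from the CHECK lines above) is pure TOSA on builtin tensors; the torch_c.to_builtin_tensor / torch_c.from_builtin_tensor casts seen in the single-pass run above have been folded away:

func.func @torch.aten.softmax.int$cst_dim(%arg0: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %0 = "tosa.custom"(%arg0) {dim = 1 : i64, identifier = "softmax"} : (tensor<2x3xf32>) -> tensor<2x3xf32>
  return %0 : tensor<2x3xf32>
}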


With the pass patch applied (adding a custom-ops option to -convert-torch-to-tosa):

➜  torch-mlir git:(tosa-custom-softmax) ✗ torch-mlir-opt -convert-torch-to-tosa    /tmp/softmax.mlir  | externals/llvm-project/mlir/utils/generate-test-checks.py

// NOTE: Assertions have been autogenerated by utils/generate-test-checks.py

// The script is designed to make adding checks to
// a test case fast, it is *not* designed to be authoritative
// about what constitutes a good test! The CHECK should be
// minimized and named to reflect the test intent.



// CHECK-LABEL:   func.func @torch.aten.softmax.int$cst_dim(
// CHECK-SAME:                                              %[[VAL_0:.*]]: !torch.vtensor<[2,3],f32>) -> !torch.vtensor<[2,3],f32> {
// CHECK:           %[[VAL_1:.*]] = torch.constant.none
// CHECK:           %[[VAL_2:.*]] = torch.constant.int 1
// CHECK:           %[[VAL_3:.*]] = torch.aten.softmax.int %[[VAL_0]], %[[VAL_2]], %[[VAL_1]] : !torch.vtensor<[2,3],f32>, !torch.int, !torch.none -> !torch.vtensor<[2,3],f32>
// CHECK:           return %[[VAL_3]] : !torch.vtensor<[2,3],f32>
// CHECK:         }
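
Reconstructed from the CHECK lines (SSA value names are conventional, not taken from the actual output), this run leaves the IR effectively identical to the input: with the patch in place but without the custom-ops option, -convert-torch-to-tosa no longer lowers torch.aten.softmax.int at all.

func.func @torch.aten.softmax.int$cst_dim(%arg0: !torch.vtensor<[2,3],f32>) -> !torch.vtensor<[2,3],f32> {
  %none = torch.constant.none
  %int1 = torch.constant.int 1
  %0 = torch.aten.softmax.int %arg0, %int1, %none : !torch.vtensor<[2,3],f32>, !torch.int, !torch.none -> !torch.vtensor<[2,3],f32>
  return %0 : !torch.vtensor<[2,3],f32>
}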

➜  torch-mlir git:(tosa-custom-softmax) ✗ torch-mlir-opt -convert-torch-to-tosa="custom-ops=torch.aten.softmax.int"    /tmp/softmax.mlir  | externals/llvm-project/mlir/utils/generate-test-checks.py

// NOTE: Assertions have been autogenerated by utils/generate-test-checks.py

// The script is designed to make adding checks to
// a test case fast, it is *not* designed to be authoritative
// about what constitutes a good test! The CHECK should be
// minimized and named to reflect the test intent.



// CHECK-LABEL:   func.func @torch.aten.softmax.int$cst_dim(
// CHECK-SAME:                                              %[[VAL_0:.*]]: !torch.vtensor<[2,3],f32>) -> !torch.vtensor<[2,3],f32> {
// CHECK:           %[[VAL_1:.*]] = torch_c.to_builtin_tensor %[[VAL_0]] : !torch.vtensor<[2,3],f32> -> tensor<2x3xf32>
// CHECK:           %[[VAL_2:.*]] = torch.constant.none
// CHECK:           %[[VAL_3:.*]] = torch.constant.int 1
// CHECK:           %[[VAL_4:.*]] = "tosa.custom"(%[[VAL_1]]) {dim = 1 : i64, identifier = "softmax"} : (tensor<2x3xf32>) -> tensor<2x3xf32>
// CHECK:           %[[VAL_5:.*]] = torch_c.from_builtin_tensor %[[VAL_4]] : tensor<2x3xf32> -> !torch.vtensor<[2,3],f32>
// CHECK:           return %[[VAL_5]] : !torch.vtensor<[2,3],f32>
// CHECK:         }

➜  torch-mlir git:(tosa-custom-softmax) ✗ 
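
A matching lit test for the new option would pass it in the RUN line; a minimal sketch, assuming the same FileCheck setup as above (flags and file placement are assumptions):

// RUN: torch-mlir-opt <%s -convert-torch-to-tosa="custom-ops=torch.aten.softmax.int" -split-input-file -verify-diagnostics | FileCheck %s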
