We suffer more in imagination than in reality. - Seneca
Right now
- fear:
- prevent by:
- repair by:
- fear:
OVERVIEW: IREE compilation driver
USAGE: iree-compile [options] <input file or '-' for stdin>
OPTIONS:
  CUDA HAL Target:
    --iree-hal-cuda-dump-ptx                  - Dump ptx to the debug stream.
    --iree-hal-cuda-llvm-target-arch=<string> - LLVM target chip.
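The two CUDA flags above can be passed straight to iree-compile. A minimal sketch of driving that from Python with subprocess; the input file, the sm_80 target arch, and the output name are assumptions, not taken from the help text:

```python
import subprocess

# Hypothetical invocation: compile an MLIR file for the CUDA backend,
# dumping the generated PTX and targeting an assumed sm_80 GPU.
cmd = [
    "iree-compile",
    "--iree-hal-target-backends=cuda",
    "--iree-hal-cuda-dump-ptx",                # flag from the help text above
    "--iree-hal-cuda-llvm-target-arch=sm_80",  # assumed target chip
    "-o", "output.cuda.vmfb",                  # assumed output name
    "stripped-opt-125M.fp32.onnx.torch.mlir",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
print(result.stderr)
```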
~/torch-mlir/build/bin/torch-mlir-opt --convert-torch-to-linalg --convert-torch-to-tmtensor --debug -mlir-disable-threading -mlir-print-ir-after-all ./stripped-opt-125M.fp32.onnx.torch.mlir &> /tmp/torchopt.out

/home/azureuser/torch-mlir/build/bin/torch-mlir-opt: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/torch-mlir/build/bin/torch-mlir-opt)
Args: /home/azureuser/torch-mlir/build/bin/torch-mlir-opt --convert-torch-to-linalg --convert-torch-to-tmtensor --debug -mlir-disable-threading -mlir-print-ir-after-all ./stripped-opt-125M.fp32.onnx.torch.mlir
Load new dialect in Context builtin
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ShapedType)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::MemRefLayoutAttrInterface)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::TypedAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ElementsAttr)
/home/azureuser/iree-build/tools/iree-compile: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/iree-build/lib/libIREECompiler.so)
iree-compile: iree/third_party/llvm-project/llvm/include/llvm/Support/Casting.h:566: decltype(auto) llvm::cast(const From &) [To = mlir::DenseElementsAttr, From = mlir::Attribute]: Assertion `isa<To>(Val) && "cast<Ty>() argument of incompatible type!"' failed.
Please report issues to https://github.com/openxla/iree/issues and include the crash backtrace.
Stack dump:
0.  Program arguments: /home/azureuser/iree-build/tools/iree-compile --iree-hal-target-backends=llvm-cpu opt-125M.fp32.onnx.torch.mlir
Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):
 0  libIREECompiler.so 0x00007fed01436997 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) + 39
 1  libIREECompiler.so 0x00007fed01434bc0 llvm::sys::RunSignalHandlers() + 80
 2  libIREECo
/home/azureuser/iree-build/tools/iree-compile: /home/azureuser/miniconda/lib/libtinfo.so.6: no version information available (required by /home/azureuser/iree-build/lib/libIREECompiler.so)
Args: /home/azureuser/iree-build/tools/iree-compile --iree-hal-target-backends=llvm-cpu -o output.vmfb stripped-opt-125M.fp32.onnx.torch.mlir --debug
Load new dialect in Context builtin
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ShapedType)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::MemRefLayoutAttrInterface)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::TypedAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::ElementsAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::DistinctAttr)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::BytecodeOpInterface)
ImplicitTypeIDRegistry::lookupOrInsert(mlir::SymbolOpInterface)
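If "stripped" in the file names above means the large initializers were pulled out of the ONNX file before import, one way to produce such a model is onnx's external-data helper; that would also fit the cast<DenseElementsAttr> assertion above, if the stripped weights reach the compiler as a different attribute kind. A sketch under that assumption; the input and output paths are placeholders:

```python
import onnx
from onnx.external_data_helper import convert_model_to_external_data

# Assumed workflow: move every sizable initializer out of the .onnx file so
# the importer sees external references instead of inline dense weights.
model = onnx.load("opt-125M.fp32.onnx")                # placeholder input path
convert_model_to_external_data(
    model,
    all_tensors_to_one_file=True,
    location="opt-125M.fp32.weights.bin",              # placeholder weights file
    size_threshold=1024,                               # strip tensors larger than 1 KiB
)
onnx.save_model(model, "stripped-opt-125M.fp32.onnx")  # placeholder output path
```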
import onnx
import numpy as np
from onnx import numpy_helper, TensorProto, save_model
from onnx.helper import make_model, make_node, make_graph, make_tensor_value_info
from onnx.checker import check_model

# condition has to be a float tensor
condition = make_tensor_value_info('condition', TensorProto.FLOAT, [1])
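A sketch of finishing this into a checkable one-node model, assuming the op under test is Where; the x/y names and shapes are made up, and note that the stock ONNX Where schema declares the condition as bool, the float here just mirrors the note above:

```python
# Hypothetical completion: wrap the condition in a one-node Where graph.
x = make_tensor_value_info('x', TensorProto.FLOAT, [1])
y = make_tensor_value_info('y', TensorProto.FLOAT, [1])
out = make_tensor_value_info('out', TensorProto.FLOAT, [1])

node = make_node('Where', ['condition', 'x', 'y'], ['out'])
graph = make_graph([node], 'where_test', [condition, x, y], [out])
model = make_model(graph)

check_model(model)                    # structural check only
save_model(model, 'where_test.onnx')  # placeholder file name
```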
import gc
import sys
import torch
import torch_mlir

batch_size = 1
seq_len = 3
input_size = 5
hidden_size = 5
kernel_size = 3
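A sketch of where this setup usually goes, assuming the older torch_mlir.compile entry point; the Conv1d stand-in module, the output type, and the output path are assumptions:

```python
# Hypothetical module sized by the constants above; the real experiment may
# have used a different layer (e.g. an LSTM, given hidden_size).
model = torch.nn.Conv1d(input_size, hidden_size, kernel_size)
model.eval()

example_input = torch.randn(batch_size, input_size, seq_len)

# torch_mlir.compile is the pre-FX import API; "torch" keeps the result in the
# torch dialect so it can be fed to torch-mlir-opt as in the command above.
module = torch_mlir.compile(model, example_input, output_type="torch")

with open("conv1d.torch.mlir", "w") as f:  # placeholder output path
    f.write(str(module))
```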
Add
AveragePool
BatchNormalization
Cast
Clip
Concat
Constant
ConstantOfShape
Conv
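The list above looks like the distinct op types appearing in the model; a sketch for regenerating that kind of list from an ONNX file (the path is a placeholder):

```python
import onnx

# Collect the unique op types used by the graph's nodes and print them sorted,
# which reproduces an Add/AveragePool/... style listing.
model = onnx.load("opt-125M.fp32.onnx")  # placeholder path
op_types = sorted({node.op_type for node in model.graph.node})
for op in op_types:
    print(op)
```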
// torch-mlir/.vscode/settings.json
{
    "files.associations": {
        "*.inc": "cpp",
        "ranges": "cpp",
        "regex": "cpp",
        "functional": "cpp",
        "chrono": "cpp",
        "__functional_03": "cpp",
        "target": "cpp",
OVERVIEW: MLIR modular optimizer driver
Available Dialects: builtin, chlo, complex, func, linalg, memref, ml_program, scf, sparse_tensor, stablehlo, tensor, tm_tensor, torch, torch_c, tosa, vhlo
USAGE: torch-mlir-opt [options] <input file>
OPTIONS:
  Color Options:
    --color - Use colors in output (default=autodetect)