Follow up with issue #893
For the given IR, it's failing to legalize: onnx.Scan
```mlir
module {
  func.func @CNTKGraph(%311: !torch.vtensor<[1,?],f32>, %312: !torch.vtensor<[?],i1>) -> (!torch.vtensor<[1],f32>) attributes {torch.onnx_meta.ir_version = 4 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.opset_versions = {ai.onnx.ml = 1 : si64}, torch.onnx_meta.producer_name = "CNTK", torch.onnx_meta.producer_version = "2.7"} {
    %313 = torch.operator "onnx.Compress"(%311, %312) : (!torch.vtensor<[1,?],f32>, !torch.vtensor<[?],i1>) -> !torch.vtensor<[1],f32>
    return %313 : !torch.vtensor<[1],f32>
  }
}
```
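For reference, `onnx.Compress` selects the input elements where the condition tensor is true (with no `axis` attribute, the input is flattened first). A minimal NumPy sketch of those semantics, with illustrative values not taken from the model:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)  # input, shape (1, ?)
cond = np.array([True, False, True])               # condition, shape (?,)

# With no axis attribute, Compress flattens the input and keeps the
# elements where the condition is True.
out = x.flatten()[cond]
print(out)  # [1. 3.]
```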
Model impacted: 1
- bidaf-9
Debug step
```shell
torch-mlir-opt -pass-pipeline='builtin.module(torch-onnx-to-torch-backend-pipeline{backend-legal-ops=aten.flatten.using_ints,aten.unflatten.int})' ./compress.onnx.mlir > compress.torch.mlir
torch-mlir-opt -pass-pipeline='builtin.module(torch-backend-to-linalg-on-tensors-backend-pipeline)' compress.torch.mlir > compress.linalg.mlir
```

```
compress.torch.mlir:24:11: error: 'arith.trunci' op operand type 'i1' and result type 'i32' are cast incompatible
  %14 = torch.aten.scatter_add %13, %int0, %6, %9 : !torch.vtensor<[?],i1>, !torch.int, !torch.vtensor<[?],i1>, !torch.vtensor<[?],i1> -> !torch.vtensor<[?],i1>
        ^
compress.torch.mlir:24:11: note: see current operation: %109 = "arith.trunci"(%105) : (i1) -> i32
```
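The error shows the lowering emitting `arith.trunci` on the `i1` operands of `scatter_add`. The presumed fix is to widen the boolean operands to an integer type before the scatter-add. A hypothetical NumPy sketch of that workaround (`np.add.at` plays the role of a dim-0 scatter-add; all values are illustrative):

```python
import numpy as np

dest = np.zeros(4, dtype=np.int32)                    # i1 dest widened to i32
index = np.array([0, 2, 2])
src = np.array([True, True, True]).astype(np.int32)   # cast i1 -> i32 first

# Scatter-add along dim 0: dest[index[i]] += src[i], with duplicate
# indices accumulated.
np.add.at(dest, index, src)
print(dest)  # [1 0 2 0]
```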
After the scatter_add fix, a new issue appears:
```shell
python run.py --torchtolinalg -v -t bidaf-9
```

```
Stages to be run: ['setup', 'import_model', 'preprocessing', 'compilation', 'construct_inputs', 'native_inference', 'compiled_inference', 'postprocessing']
Test list: ['bidaf-9']
running test bidaf-9...
python: /proj/gdba/shark/chi/src/torch-mlir/externals/llvm-project/mlir/include/mlir/IR/StorageUniquerSupport.h:180: static ConcreteT mlir::detail::StorageUserBase<mlir::RankedTensorType, mlir::TensorType, mlir::detail::RankedTensorTypeStorage, mlir::detail::TypeUniquer, mlir::ShapedType::Trait, mlir::ValueSemantics>::get(mlir::MLIRContext *, Args &&...) [ConcreteT = mlir::RankedTensorType, BaseT = mlir::TensorType, StorageT = mlir::detail::RankedTensorTypeStorage, UniquerT = mlir::detail::TypeUniquer, Traits = <mlir::ShapedType::Trait, mlir::ValueSemantics>, Args = <llvm::ArrayRef<long> &, mlir::Type &, mlir::Attribute &>]: Assertion `succeeded( ConcreteT::verifyInvariants(getDefaultDiagnosticEmitFn(ctx), args...))' failed.
[1]    415750 IOT instruction (core dumped)  python run.py --torchtolinalg -v -t bidaf-9
```
The issue comes from this op:

```mlir
%29 = torch.aten.remainder.Tensor %28, %20 : !torch.vtensor<[?,1],i1>, !torch.vtensor<[1],i1> -> !torch.vtensor<[1],i1>
```
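Remainder on `i1` (boolean) tensors has no well-defined lowering, which is consistent with the type-verifier assertion above. The presumed fix is again to widen the operands to an integer type before the op. A hypothetical NumPy sketch with illustrative values:

```python
import numpy as np

a = np.array([[1], [0]], dtype=bool)   # stand-in for the [?,1] i1 operand
b = np.array([1], dtype=bool)          # stand-in for the [1] i1 operand

# Boolean remainder is ill-defined; widening to an integer type first
# gives a well-typed, broadcastable result.
r = a.astype(np.int64) % b.astype(np.int64)
print(r.tolist())  # [[0], [0]]
```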
Referenced commit: llvm/torch-mlir@a45356e (AmosLewis)