
onnx.Compress fail at 'arith.trunci' op operand type 'i1' and result type 'i32' are cast incompatible #898

Closed
AmosLewis opened this issue Jan 7, 2025 · 1 comment · Fixed by llvm/torch-mlir#3947
AmosLewis commented Jan 7, 2025

Follow-up to issue #893.

For the given IR, the onnx.Compress lowering fails:

module {
  func.func @CNTKGraph(%311:!torch.vtensor<[1,?],f32>, %312:!torch.vtensor<[?],i1> ) -> (!torch.vtensor<[1],f32>)  attributes {torch.onnx_meta.ir_version = 4 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.opset_versions = {ai.onnx.ml = 1 : si64}, torch.onnx_meta.producer_name = "CNTK", torch.onnx_meta.producer_version = "2.7"} {
    %313 = torch.operator "onnx.Compress"(%311, %312) : (!torch.vtensor<[1,?],f32>, !torch.vtensor<[?],i1>) -> !torch.vtensor<[1],f32> 
    return %313: !torch.vtensor<[1],f32> 
  }
}
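For reference, onnx.Compress selects the elements of the input where the boolean condition tensor is true; with no axis attribute (as in the IR above), the input is flattened first. A minimal pure-Python sketch of the semantics (the `compress` helper is hypothetical, for illustration only):

```python
def compress(data, condition):
    # onnx.Compress with no axis (sketch): flatten the input, then keep
    # the elements whose corresponding condition entry is True.
    flat = [x for row in data for x in row]
    return [x for x, keep in zip(flat, condition) if keep]

# Mirrors the IR above: a [1,?] float input and a [?] boolean condition.
print(compress([[1.0, 2.0, 3.0]], [True, False, True]))  # [1.0, 3.0]
```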

Model impacted (1): bidaf-9

Debug steps

torch-mlir-opt -pass-pipeline='builtin.module(torch-onnx-to-torch-backend-pipeline{backend-legal-ops=aten.flatten.using_ints,aten.unflatten.int})' ./compress.onnx.mlir > compress.torch.mlir
torch-mlir-opt -pass-pipeline='builtin.module(torch-backend-to-linalg-on-tensors-backend-pipeline)' compress.torch.mlir > compress.linalg.mlir
compress.torch.mlir:24:11: error: 'arith.trunci' op operand type 'i1' and result type 'i32' are cast incompatible
    %14 = torch.aten.scatter_add %13, %int0, %6, %9 : !torch.vtensor<[?],i1>, !torch.int, !torch.vtensor<[?],i1>, !torch.vtensor<[?],i1> -> !torch.vtensor<[?],i1>
          ^
compress.torch.mlir:24:11: note: see current operation: %109 = "arith.trunci"(%105) : (i1) -> i32
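The verifier rejects this because arith.trunci only narrows: the result bit-width must be strictly smaller than the operand's, and i1 to i32 is a widening (which would take arith.extui or arith.extsi instead). A pure-Python sketch of the bit-level semantics, with hypothetical helper names:

```python
def trunci(value, from_bits, to_bits):
    # arith.trunci: valid only when narrowing; keeps the low `to_bits` bits.
    assert to_bits < from_bits, "trunci requires a narrower result type"
    return value & ((1 << to_bits) - 1)

def extui(value, from_bits, to_bits):
    # arith.extui: valid only when widening; zero-extends, so the
    # unsigned value is numerically unchanged.
    assert to_bits > from_bits, "extui requires a wider result type"
    return value

print(trunci(0b110, 3, 1))  # i3 -> i1: keeps the low bit, 0
print(extui(1, 1, 32))      # i1 -> i32: 1
```

An i1 -> i32 trunci, as in the diagnostic above, trips the narrowing check, which is exactly what the MLIR verifier reports.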
AmosLewis commented Jan 8, 2025

After the scatter_add fix, a new issue appears:

python run.py --torchtolinalg -v -t bidaf-9
Stages to be run: ['setup', 'import_model', 'preprocessing', 'compilation', 'construct_inputs', 'native_inference', 'compiled_inference', 'postprocessing']
Test list: ['bidaf-9']
running test bidaf-9...
python: /proj/gdba/shark/chi/src/torch-mlir/externals/llvm-project/mlir/include/mlir/IR/StorageUniquerSupport.h:180: static ConcreteT mlir::detail::StorageUserBase<mlir::RankedTensorType, mlir::TensorType, mlir::detail::RankedTensorTypeStorage, mlir::detail::TypeUniquer, mlir::ShapedType::Trait, mlir::ValueSemantics>::get(mlir::MLIRContext *, Args &&...) [ConcreteT = mlir::RankedTensorType, BaseT = mlir::TensorType, StorageT = mlir::detail::RankedTensorTypeStorage, UniquerT = mlir::detail::TypeUniquer, Traits = <mlir::ShapedType::Trait, mlir::ValueSemantics>, Args = <llvm::ArrayRef<long> &, mlir::Type &, mlir::Attribute &>]: Assertion `succeeded( ConcreteT::verifyInvariants(getDefaultDiagnosticEmitFn(ctx), args...))' failed.
[1]    415750 IOT instruction (core dumped)  python run.py --torchtolinalg -v -t bidaf-9

The issue comes from this op:
%29 = torch.aten.remainder.Tensor %28, %20 : !torch.vtensor<[?,1],i1>, !torch.vtensor<[1],i1> -> !torch.vtensor<[1],i1>
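aten.remainder.Tensor follows Python-style modulo semantics (the result takes the sign of the divisor), which is only meaningful on numeric dtypes; the i1 operand and result types above are presumably what trips the RankedTensorType assertion. A minimal sketch of the expected numeric behavior (the `remainder` helper is hypothetical):

```python
def remainder(a, b):
    # Python-style modulo: the result has the sign of the divisor,
    # matching aten.remainder semantics for integers.
    return a - b * (a // b)

print(remainder(5, 3))    # 2
print(remainder(-5, 3))   # 1
print(remainder(5, -3))   # -1
```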
