[TorchFX] Do not propagate quantizers through __getitem__ when input and output have different numel
#3109
Labels: enhancement (New feature or request)

🚀 Feature request
Some TorchFX operations return a tuple instead of a `torch.Tensor`. When an algorithm requests a statistic from such a node, NNCF raises an error like `tuple has no is_empty method`. This affects the yolo11n model; the current workaround is an ignored scope with the type of the operation that returns a tuple (a hedged sketch of this workaround is shown below).

Solver debug illustration:
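A minimal sketch of the ignored-scope workaround mentioned above, assuming NNCF's post-training quantization API (`nncf.quantize` with `nncf.IgnoredScope`). The `model` and `calibration_dataset` objects are assumed to exist, and the operation type string is a placeholder, since the actual type appears only in the issue's original snippet:

```python
import nncf

# Assumptions: `model` is the traced yolo11n model and `calibration_dataset`
# is an nncf.Dataset with calibration samples; both are assumed to exist.
# "<op_type_returning_tuple>" is a placeholder for the operation type that
# actually returns a tuple in the model.
ignored_scope = nncf.IgnoredScope(types=["<op_type_returning_tuple>"])

quantized_model = nncf.quantize(
    model,
    calibration_dataset,
    ignored_scope=ignored_scope,
)
```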
The task is to detect `__getitem__` nodes that have a tuple as input (or `__getitem__` nodes that take only a piece of the input tensor rather than the whole tensor) and to stop propagating quantizers up through such operations. A `__getitem__` node whose input `numel` is the same as its output `numel` should be marked as quantize-agnostic; see the detection sketch below.
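A minimal sketch of such a check on a traced torch.fx graph, assuming shapes are recorded with torch.fx's `ShapeProp` pass. The helper name `is_quantize_agnostic_getitem`, the toy `Example` module, and the `tensor_meta`-based comparison are illustrative assumptions, not NNCF's actual solver logic:

```python
import operator

import torch
from torch import nn
from torch.fx import Node, symbolic_trace
from torch.fx.passes.shape_prop import ShapeProp, TensorMetadata


def is_quantize_agnostic_getitem(node: Node) -> bool:
    """Treat a getitem node as quantize-agnostic only when its input is a
    single tensor and the output keeps the same number of elements."""
    if node.op != "call_function" or node.target is not operator.getitem:
        return False
    inp = node.args[0]
    if not isinstance(inp, Node):
        return False
    in_meta = inp.meta.get("tensor_meta")
    out_meta = node.meta.get("tensor_meta")
    # Ops that return a tuple do not carry a single TensorMetadata entry,
    # so quantizers should not be propagated through such getitem nodes.
    if not isinstance(in_meta, TensorMetadata) or not isinstance(out_meta, TensorMetadata):
        return False
    return in_meta.shape.numel() == out_meta.shape.numel()


class Example(nn.Module):
    def forward(self, x):
        # A slicing getitem: the output numel differs from the input numel,
        # so this node should not be marked quantize-agnostic.
        return x[:, :1]


gm = symbolic_trace(Example())
ShapeProp(gm).propagate(torch.randn(1, 4))
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is operator.getitem:
        print(node.name, is_quantize_agnostic_getitem(node))
```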
Feature Use Case
No response
Are you going to submit a PR?