
[TorchFX] Do not propagate quantizers through __get_item__ when input/output has different numel #3109

Open
daniil-lyakhov opened this issue Nov 25, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

daniil-lyakhov commented Nov 25, 2024

🚀 Feature request

Some TorchFX operations return a tuple instead of a torch.Tensor. When an algorithm requests a statistic from such a node, NNCF raises an error along the lines of "'tuple' object has no attribute 'is_empty'". This affects the yolo11 model; the current workaround is an ignored scope covering the operation type that returns a tuple:

nncf.IgnoredScope(types=["__getitem__"])

Solver debug illustration: [attached screenshot]

The task is to:

  • Find a way to automatically detect __getitem__ calls that have a tuple as input (or that select a piece of the input tensor rather than the whole tensor), and do not propagate quantizers up through such operations. A __getitem__ whose input numel equals its output numel should be marked as quantize-agnostic.
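The numel check above could be sketched as follows. This is a minimal illustration, not NNCF's actual implementation: the function name and the assumption that input/output shapes are already available (e.g. from FX shape metadata) are hypothetical.

```python
from math import prod


def numel(shape):
    """Number of elements for a shape tuple; a 0-d tensor has numel 1."""
    return prod(shape) if shape else 1


def is_quantize_agnostic_getitem(input_shape, output_shape):
    """A __getitem__ is quantize-agnostic only when it forwards every
    element of its input, i.e. input and output numel match.  Selecting
    a piece of the tensor (or unpacking a tuple element) changes numel,
    so quantizers should not be propagated up through such a node."""
    return numel(input_shape) == numel(output_shape)


# A view-like __getitem__ that keeps the whole tensor:
print(is_quantize_agnostic_getitem((2, 3, 4), (2, 3, 4)))  # True
# A slice that keeps only part of the tensor:
print(is_quantize_agnostic_getitem((2, 3, 4), (2, 3)))     # False
```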

Feature Use Case

No response

Are you going to submit a PR?

  • Yes I'd like to help by submitting a PR!
@daniil-lyakhov daniil-lyakhov added the enhancement New feature or request label Nov 25, 2024
@daniil-lyakhov daniil-lyakhov changed the title [TorchFX] Support output port id [TorchFX] Do not propagate quantizers through __get_item__ when input/output has different numel Nov 29, 2024