Concat scales not being grouped #2195
Greetings, @basioli-k! Thanks for spotting this, and for the detailed reproducer that makes debugging it a breeze. The unexpected behaviour seems to stem from logic introduced in #1778. If I comment out lines 831 to 832 of `nncf/common/quantization/quantizer_propagation/graph.py` (at commit 4d47869), the input quantizers in both of your cases get unified.

We added that logic (only unifying concat scales if the concat is followed by a weighted op) in response to low PTQ accuracy on DenseNet and Inception, but IMO the concat input quantizers in the per-tensor case should be unified regardless of the ops that follow the concat. I will investigate how best to fix this on the develop branch.
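To illustrate the condition described above, here is a sketch of the two graph shapes in question; the toy models and layer sizes are assumptions for demonstration, not taken from the issue:

```python
# Illustrative sketch only: the models below are hypothetical examples of the
# two cases discussed above, not code from the issue or from NNCF.
import torch


class ConcatIsOutput(torch.nn.Module):
    # The concat result is the model output: with the logic from #1778,
    # the two concat inputs get separate quantizers (the behaviour reported here).
    def __init__(self):
        super().__init__()
        self.branch_a = torch.nn.Conv2d(3, 8, 3, padding=1)
        self.branch_b = torch.nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)


class ConcatFeedsWeightedOp(torch.nn.Module):
    # The concat result feeds a weighted op (Conv2d): here the concat
    # input quantizers do get unified.
    def __init__(self):
        super().__init__()
        self.branch_a = torch.nn.Conv2d(3, 8, 3, padding=1)
        self.branch_b = torch.nn.Conv2d(3, 8, 3, padding=1)
        self.head = torch.nn.Conv2d(16, 8, 1)

    def forward(self, x):
        return self.head(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))
```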
Thank you for the response.
Ref. 138683
I am trying to quantize a PyTorch model using NNCF.
The output of my model is a concatenation of two tensors.
To quantize my outputs I set:

`advanced_parameters = AdvancedQuantizationParameters(quantize_outputs=True)`
When I quantize the model, I get a separate quantizer for each concat input. Based on what I saw in NNCF, I would expect the input quantizers to be unified into one instead.
I am guessing it's an edge case that comes up due to `AdvancedQuantizationParameters`.

NNCF version: 2.6.0
Run the following to reproduce:
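(The original snippet was not preserved in the extracted page; below is a minimal sketch consistent with the description above: a model whose output is a concatenation of two tensors, quantized with `quantize_outputs=True`. The model definition, input shapes, and calibration data are assumptions.)

```python
# Hypothetical reproducer sketch; the model and shapes are assumptions,
# since the original snippet was lost in extraction.
import torch
import nncf
from nncf.quantization.advanced_parameters import AdvancedQuantizationParameters


class ConcatOutputModel(torch.nn.Module):
    """Model whose output is a concatenation of two tensors."""

    def __init__(self):
        super().__init__()
        self.conv_a = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.conv_b = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.cat([self.conv_a(x), self.conv_b(x)], dim=1)


model = ConcatOutputModel().eval()
# A handful of random samples is enough to trigger quantizer placement.
calibration_dataset = nncf.Dataset([torch.randn(1, 3, 32, 32) for _ in range(10)])

quantized_model = nncf.quantize(
    model,
    calibration_dataset,
    advanced_parameters=AdvancedQuantizationParameters(quantize_outputs=True),
)
# Inspecting quantized_model shows a separate quantizer on each concat
# input rather than a single shared (unified) one.
```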