How to force layernorm to run at FP32 precision using python #4225
@OswaldoBornemann see #3897 (comment): exporting the model to a recent ONNX opset (opset 17 or later) so that the INormalizationLayer is used, or forcing the layernorm layers to run in FP32 precision, can help preserve accuracy.
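For the second option, a minimal sketch of forcing layernorm layers to FP32 through the TensorRT Python API is shown below. It is an illustration under assumptions, not code taken from this issue: the "LayerNorm" name pattern, the model.onnx path, and the network setup are placeholders that would need to match your own model.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# "model.onnx" is a placeholder path.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
# Without this flag, per-layer precisions are only hints that the builder may ignore.
config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

for i in range(network.num_layers):
    layer = network.get_layer(i)
    # Assumed naming convention: PyTorch module names usually survive into the
    # ONNX node names, so layers coming from nn.LayerNorm often contain "LayerNorm".
    if "LayerNorm" in layer.name:
        layer.precision = trt.float32
        for j in range(layer.num_outputs):
            layer.set_output_type(j, trt.float32)

serialized_engine = builder.build_serialized_network(network, config)
```

With OBEY_PRECISION_CONSTRAINTS the build fails instead of silently falling back when a requested precision cannot be honored, which makes it easier to confirm that the layernorm layers really run in FP32. Recent TensorRT versions with an opset-17 export also let you match the fused layers by trt.LayerType.NORMALIZATION instead of by name.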
Do you use …?
No, I use the following code to convert the PyTorch model to ONNX:
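A typical export of this kind looks roughly like the sketch below; the model, input shape, opset version, and file name are illustrative placeholders rather than the exact code from this issue. Using opset 17 or later is what lets the parser keep LayerNormalization as a single op:

```python
import torch
import torch.nn as nn

# Placeholder model containing a LayerNorm; substitute your own network.
class TinyModel(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.linear = nn.Linear(hidden, hidden)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, x):
        return self.norm(self.linear(x))

model = TinyModel().eval()
dummy_input = torch.randn(1, 128, 768)  # placeholder input shape

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                # placeholder output path
    opset_version=17,            # opset 17+ keeps LayerNormalization as a single op
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```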
Then use trtexec --fp16 to run it. If layernorm should not run in FP16, use the following: …
Thank you.
How to force layernorm to run at FP32 precision?