[Release/2.4] Remove amax_ptr from scaled_gemm for UT test_scaled_mm_vs_emulated_*float*_cuda #1735
Conversation
amax was removed from _scaled_mm by pytorch#128683. Remove it from the internal at::cuda::blas::scaled_gemm, as well. This allows hipBLASLt to find additional solutions rather than forcing amax to be used and then discarding the result. Pull Request resolved: pytorch#135421 Approved by: https://github.com/drisspg, https://github.com/eqy
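The unit tests named in the title (test_scaled_mm_vs_emulated_*float*) compare the fused scaled matmul against an emulated reference path. As a rough illustration of what that emulated path computes, here is a minimal pure-Python sketch: the function name emulated_scaled_mm and the per-tensor scale handling are assumptions for illustration, not PyTorch's actual implementation. The point of the change above is that the result's amax (maximum absolute value) is no longer computed or returned by the internal GEMM.

```python
# Hypothetical sketch of an "emulated" scaled matmul on plain 2-D lists.
# The real _scaled_mm operates on fp8 tensors with per-tensor scales;
# after pytorch#128683 it no longer returns amax alongside the result.
def emulated_scaled_mm(a, b, scale_a, scale_b):
    """Compute (a * scale_a) @ (b * scale_b) for 2-D lists of floats."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            out[i][j] = sum(
                (a[i][k] * scale_a) * (b[k][j] * scale_b)
                for k in range(inner)
            )
    return out

# (1*0.5*3*2) + (2*0.5*4*2) = 3 + 8 = 11
print(emulated_scaled_mm([[1.0, 2.0]], [[3.0], [4.0]], 0.5, 2.0))  # [[11.0]]
```

Because the scales are applied inside the product, the test can check the fused kernel's output against this reference without ever needing the discarded amax value.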
Jenkins build for commit 73cde4797b8d7891d24ca29f0ebfece573cc3538 finished as FAILURE. Detected error during PyTorch building.
Is this PR required?
PR 1742 solves both issues, so closing this PR.
Cherry pick - 39a6179
remove amax_ptr from scaled_gemm (pytorch#135421)
amax was removed from _scaled_mm by pytorch#128683. Remove it from the internal at::cuda::blas::scaled_gemm, as well. This allows hipBLASLt to find additional solutions rather than forcing amax to be used and then discarding the result.