⚡️ Add `torch.compile` to `PatchPredictor` #776
Conversation
…b.com/TissueImageAnalytics/tiatoolbox into enhance-torch-compile-patch-predictor
At the moment the model is compiled before it is sent to the GPU (if a GPU is being used). I think at least some of what `torch.compile` does is device-aware, so it may be better to compile after it is sent to the device. Have you tried testing if the ordering makes a difference?
Agree, this should be checked.
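A dependency-free sketch of why the ordering could matter. `FakeModel`, `fake_compile`, and `prepare` are hypothetical stand-ins invented for illustration; in the real code the compiler would be `torch.compile` and device placement would be `model.to(device)`.

```python
class FakeModel:
    """Records which device it was on when 'compilation' happened."""

    def __init__(self):
        self.device = "cpu"
        self.compiled_on = None

    def to(self, device):
        self.device = device
        return self


def fake_compile(model):
    # A device-aware compiler would specialize for model.device here.
    model.compiled_on = model.device
    return model


def prepare(model, device, order="after"):
    """Apply the (fake) compiler before or after moving to the device."""
    if order == "before":
        return fake_compile(model).to(device)  # compiler sees "cpu"
    return fake_compile(model.to(device))      # compiler sees the target device
```

With `order="before"`, `compiled_on` stays `"cpu"` even though the model ends up on `"cuda"`; with `order="after"`, the compiler sees the final placement, which is the concern raised above.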
Please enable tests for this branch.
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@          Coverage Diff           @@
## enhance-torch-compile  #776  +/- ##
======================================
  Coverage   99.89%   99.89%
======================================
  Files          69       69
  Lines        8578     8589     +11
  Branches     1641     1642      +1
======================================
+ Hits         8569     8580     +11
  Misses          1        1
  Partials        8        8
```

View full report in Codecov by Sentry.
@measty I believe the model is compiled only when the …
@shaneahmed Should we disable …
for more information, see https://pre-commit.ci
```python
        Compiled model.

    """
    if disable:
```
I do not think we need this variable. I think we should only call this function if `not disable`.
Thank you @shaneahmed. That could be done, too. However, I'm mirroring the PyTorch implementation, which includes a `disable` flag in the function (`torch.compile`).
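A minimal sketch of the pattern being mirrored: `torch.compile` accepts `disable=True` and then returns the model unchanged. The `compile_model` below is a simplified hypothetical stand-in for the toolbox helper, and the dict it returns is illustrative only (real code would call `torch.compile(model, mode=mode)`).

```python
def compile_model(model, *, mode="default", disable=False):
    """Return the model untouched when disable=True, mirroring
    torch.compile(model, mode=..., disable=...)."""
    if disable:
        return model
    # Real code would call torch.compile(model, mode=mode); a tagged
    # wrapper keeps this sketch free of a PyTorch dependency.
    return {"compiled": True, "mode": mode, "model": model}
```

Keeping the flag inside the helper means callers never need an `if` at the call site, which is the design trade-off being discussed.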
```python
self.model = (
    compile_model(  # for runtime, such as after wrapping with nn.DataParallel
        model,
        mode=rcParam["torch_compile_mode"],
```
Do we need `rcParam` for this? We could just set this as a kwargs argument in the engines.
Thank you @shaneahmed. Having a kwarg for `torch_compile_mode` would work, too. I suggest keeping `rcParam` for now until we implement it in the new engine design. Happy to discuss it in our next meeting.
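The two options are compatible: a sketch, under the assumption that `rcParam` is a module-level dict as in the diff above, of how an engine kwarg could override the global default. `resolve_compile_mode` is a hypothetical helper name, not a tiatoolbox API.

```python
# Global default, mirroring rcParam["torch_compile_mode"] in the PR.
rcParam = {"torch_compile_mode": "default"}


def resolve_compile_mode(**kwargs):
    """Hypothetical helper: an explicit engine kwarg wins over the
    rcParam default, so existing rcParam-based configuration keeps working."""
    return kwargs.get("torch_compile_mode", rcParam["torch_compile_mode"])
```

Engines that pass `torch_compile_mode="max-autotune"` would get that mode; everyone else falls back to the `rcParam` value, which could ease the later migration to the new engine design.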
This mini-PR adds `torch.compile` functionality to `PatchPredictor`.