[Performance] Improve performance of compiled ReplayBuffer #2529
Conversation
Dr. CI comment:
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2529. Note: links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 5 Unrelated Failures as of commit b44d8f0 with merge base edbf3de.
NEW FAILURES - the following jobs have failed.
BROKEN TRUNK - the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
# TODO: Without this disable, compiler recompiles for back-to-back calls.
# Figuring out a way to avoid this disable would give better performance.
@torch._dynamo.disable()
def _rand_given_ndim(self, batch_size):
```
I haven't been able to figure this one out yet, unfortunately. Even if I remove the `len(storage)` call and use `self._len_value` directly, it still puts a guard around `_len_value`. I even tried completely removing any code that seems special about `_len_value` (the property decorator and anything related to `mp.Value`), and it still puts a guard around it.
I can't find anything in the TORCH_TRACE output that says exactly why there's a guard on it. I've attached a trace of running the benchmarks (`python benchmarks/test_replaybuffer_benchmark.py -k test_rb_extend_sample`) after commenting out the `@torch._dynamo.disable()`.
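For context, the guard-on-a-plain-int behavior can be reproduced in isolation without digging through a full TORCH_TRACE dump. A minimal sketch (the `_ToyStorage` class is a hypothetical stand-in, not the real storage; `guards` and `recompiles` are standard `torch._logging` artifact names):

```python
import torch

# Print guard sources and recompile reasons to stderr.
torch._logging.set_logs(guards=True, recompiles=True)

class _ToyStorage:
    # Hypothetical stand-in: a plain Python int attribute, with no
    # property decorator and no mp.Value involved.
    def __init__(self):
        self._len_value = 0

storage = _ToyStorage()

@torch.compile
def fake_sample(batch_size):
    # Dynamo specializes on the Python int, so a guard is installed on
    # its value; changing it between calls forces a recompile.
    return torch.randint(storage._len_value + 1, (batch_size,))

fake_sample(4)
storage._len_value += 10  # guard fails -> recompile gets logged
fake_sample(4)
```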
that's ok, let's deactivate like you do here.
Perhaps we could isolate this and put a test around that graph break to make sure that, if the compiler ends up handling this properly, we're aware of it and we can remove the disable() decorator?
Good idea, I've added a test for that.
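For anyone curious, a sketch of what such a regression test can look like (illustrative only, not the actual test added in this PR): a tiny custom Dynamo backend counts graph compilations, so the assertion fires once the compiler stops recompiling this pattern and the `disable()` can be removed.

```python
import torch
import torch._dynamo

class CompileCounter:
    """Minimal Dynamo backend: runs graphs eagerly and counts compilations."""
    def __init__(self):
        self.count = 0
    def __call__(self, gm, example_inputs):
        self.count += 1
        return gm.forward

def test_disable_is_still_needed():
    class ToyStorage:
        # Hypothetical stand-in for the real storage's guarded int.
        def __init__(self):
            self._len_value = 1
        def _rand_given_ndim(self, batch_size):
            return torch.randint(self._len_value, (batch_size,))

    torch._dynamo.reset()
    counter = CompileCounter()
    storage = ToyStorage()
    fn = torch.compile(storage._rand_given_ndim, backend=counter)

    fn(4)
    storage._len_value += 1  # mutate the guarded value between calls
    fn(4)

    # Today this recompiles, i.e. count == 2. If a future PyTorch stops
    # recompiling here, this assert fires and disable() can be removed.
    assert counter.count == 2
```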
LGTM, thanks a mil.
Just a couple of quick fixes here and there and we're good to go.
Do you think we should add a benchmark for the E2E replay buffer interactions?
Another next step is to make sure compile works with MemoryMappedTensor from tensordict - I'm pretty sure it won't like it.
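On the E2E benchmark question above: a rough sketch of what such a benchmark could measure (hedged: `ReplayBuffer` and `LazyTensorStorage` are the public `torchrl.data` classes this PR touches, but the sizes, iteration counts, and the choice to compile the bound methods directly are arbitrary here):

```python
import time
import torch
from torchrl.data import LazyTensorStorage, ReplayBuffer

def loop(extend, sample, iters=100):
    # One E2E interaction: extend with a batch, then sample from the buffer.
    data = torch.randn(64, 8)
    for _ in range(iters):
        extend(data)
        sample()

# Eager baseline.
rb = ReplayBuffer(storage=LazyTensorStorage(10_000), batch_size=32)
t0 = time.perf_counter()
loop(rb.extend, rb.sample)
eager_s = time.perf_counter() - t0

# Compiled variant; warm up first so compile time isn't measured.
rb = ReplayBuffer(storage=LazyTensorStorage(10_000), batch_size=32)
c_extend, c_sample = torch.compile(rb.extend), torch.compile(rb.sample)
loop(c_extend, c_sample, iters=3)
t0 = time.perf_counter()
loop(c_extend, c_sample)
compiled_s = time.perf_counter() - t0

print(f"eager: {eager_s:.3f}s  compiled: {compiled_s:.3f}s")
```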
> ```python
> # TODO: Without this disable, compiler recompiles for back-to-back calls.
> # Figuring out a way to avoid this disable would give better performance.
> @torch._dynamo.disable()
> def _rand_given_ndim(self, batch_size):
> ```
>
> that's ok, let's deactivate like you do here.
> Perhaps we could isolate this and put a test around that graph break to make sure that, if the compiler ends up handling this properly, we're aware of it and we can remove the disable() decorator?

Yeah, I think that's a good idea, but I thought the benchmark I added was already exercising the end-to-end interface, so I'm not entirely sure what you mean. Do you have a specific case in mind? Maybe something that actually uses the data for something other than just extend/sample?
(cherry picked from commit 2a07f4c)
Description
Performance improvements for compiled `ReplayBuffer.extend`/`sample` with `LazyTensorStorage`. For the cases tested in the benchmark, compiled gives the same or better performance than eager.
This uses the changes from #2426 to avoid using multiprocessing features, and also adds some other changes to avoid graph breaks in other places.
The benchmark and test are updated to cover both the case where the storage's max size is reached and the case where it is not.
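For reference, the usage pattern this targets looks roughly like the following (a sketch; the sizes are arbitrary, chosen so the loop covers both the not-yet-full and max-size-reached cases mentioned above):

```python
import torch
from torchrl.data import LazyTensorStorage, ReplayBuffer

rb = ReplayBuffer(storage=LazyTensorStorage(100), batch_size=16)
extend = torch.compile(rb.extend)
sample = torch.compile(rb.sample)

# 10 * 25 = 250 entries pushed into a max-size-100 storage, so the loop
# hits both paths: storage still filling up, and max size reached.
for _ in range(10):
    extend(torch.randn(25, 4))
    batch = sample()

print(batch.shape)  # torch.Size([16, 4])
```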
Motivation and Context
Part of #2501
Types of changes
What types of changes does your code introduce? Remove all that do not apply:
Checklist
Go over all the following points, and put an `x` in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!