
[Performance] Improve performance of compiled ReplayBuffer #2529

Merged: 1 commit merged into pytorch:main on Nov 4, 2024

Conversation

@kurtamohler (Collaborator) commented Oct 31, 2024

Description

Performance improvements for compiled ReplayBuffer.extend/sample with LazyTensorStorage. For the cases tested in the benchmark, the compiled functions now match or beat eager-mode performance.

This builds on the changes from #2426, which avoid multiprocessing features, and adds further changes to eliminate graph breaks elsewhere.

The benchmark and test are updated to cover both the case where the storage's max size is reached and the case where it is not.
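
To make the scope concrete, here is a minimal sketch of the usage pattern this targets (illustrative only: the sizes, field names, and the decision to compile the bound methods are assumptions, not the benchmark code from this PR):

    import torch
    from tensordict import TensorDict
    from torchrl.data import LazyTensorStorage, ReplayBuffer

    rb = ReplayBuffer(storage=LazyTensorStorage(10_000), batch_size=256)

    # Compile the two hot paths exercised by the benchmark.
    extend = torch.compile(rb.extend)
    sample = torch.compile(rb.sample)

    data = TensorDict({"obs": torch.randn(512, 4)}, batch_size=[512])
    extend(data)      # storage is lazily allocated on the first extend
    batch = sample()  # draws a batch of 256 transitions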

Motivation and Context

Part of #2501

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

What types of changes does your code introduce? Remove all that do not apply:

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds core functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)
  • Example (update in the folder of examples)

Checklist

Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!

  • I have read the CONTRIBUTION guide (required)
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.

pytorch-bot commented Oct 31, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2529

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 5 Unrelated Failures

As of commit b44d8f0 with merge base edbf3de:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the "CLA Signed" label (managed by the Facebook bot; authors must sign the CLA before a PR can be reviewed) on Oct 31, 2024
Comment on lines +553 to 559:

    # TODO: Without this disable, compiler recompiles for back-to-back calls.
    # Figuring out a way to avoid this disable would give better performance.
    @torch._dynamo.disable()
    def _rand_given_ndim(self, batch_size):

@kurtamohler (Collaborator, Author) commented Oct 31, 2024

I haven't been able to figure this one out yet, unfortunately. Even if I remove the len(storage) call and use self._len_value directly, Dynamo still puts a guard on _len_value. I even tried removing everything that seems special about _len_value (the property decorator and anything related to mp.Value), and it still guards on it.

I can't find anything in the TORCH_TRACE output that says exactly why the guard is there. I've attached a trace of running the benchmarks (python benchmarks/test_replaybuffer_benchmark.py -k test_rb_extend_sample) with the @torch._dynamo.disable() commented out.

tl_out.zip
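
(For reproduction without digging through a full TORCH_TRACE dump, Dynamo can also print the reason for each recompile directly; a minimal sketch, where the toy function stands in for the real sample path:)

    import torch

    # Same effect as running with TORCH_LOGS="recompiles": Dynamo logs
    # which guard failed whenever it recompiles a frame.
    torch._logging.set_logs(recompiles=True)

    fn = torch.compile(lambda x: x + 1)
    fn(torch.randn(3))
    fn(torch.randn(4))  # the shape change triggers a logged recompile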

@vmoens (Contributor) replied

That's OK, let's deactivate it like you do here.
Perhaps we could isolate this and put a test around that graph break, so that if the compiler ends up handling this properly we're aware of it and can remove the disable() decorator?

@kurtamohler (Collaborator, Author) replied

Good idea; I've added a test for that.
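
(A test along those lines could look roughly like the sketch below; CompileCounter is a real Dynamo testing utility, but the buffer setup and assertion are illustrative, not the test actually added in this PR:)

    import torch
    from tensordict import TensorDict
    from torch._dynamo.testing import CompileCounter
    from torchrl.data import LazyTensorStorage, ReplayBuffer

    def test_sample_does_not_recompile():
        rb = ReplayBuffer(storage=LazyTensorStorage(100), batch_size=8)
        rb.extend(TensorDict({"obs": torch.randn(50, 4)}, batch_size=[50]))

        counter = CompileCounter()
        sample = torch.compile(rb.sample, backend=counter)

        sample()  # warm-up compile (graph breaks produce several frames)
        frames = counter.frame_count
        for _ in range(5):
            sample()
        # Back-to-back calls should hit the cache; if Dynamo ever handles
        # _rand_given_ndim without the disable(), this still holds and the
        # decorator can be removed.
        assert counter.frame_count == frames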

@vmoens added the "performance" label (Performance issue or suggestion for improvement) on Oct 31, 2024
@kurtamohler force-pushed the ReplayBuffer-compile-0 branch 2 times, most recently from 838a021 to 20aa59a, on Nov 1, 2024 at 01:43
@vmoens (Contributor) left a review comment

LGTM, thanks a mil.
Just a couple of quick fixes here and there and we're good to go.
Do you think we should add a benchmark for the end-to-end (E2E) replay buffer interactions?
Another next step is to make sure compile works with MemoryMappedTensor from tensordict; I'm pretty sure it won't like it.
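
(As a starting point for that follow-up, a hypothetical smoke test with a memory-mapped storage might look like this; LazyMemmapStorage is backed by tensordict's MemoryMappedTensor, and none of this is part of the present PR:)

    import torch
    from tensordict import TensorDict
    from torchrl.data import LazyMemmapStorage, ReplayBuffer

    rb = ReplayBuffer(storage=LazyMemmapStorage(1_000), batch_size=32)
    rb.extend(TensorDict({"obs": torch.randn(100, 4)}, batch_size=[100]))

    compiled_sample = torch.compile(rb.sample)
    batch = compiled_sample()  # may graph-break on MemoryMappedTensor internals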

Resolved review threads: torchrl/_utils.py; torchrl/data/replay_buffers/replay_buffers.py (2 threads, outdated).
@kurtamohler (Collaborator, Author) commented Nov 1, 2024

> Do you think we should add a benchmark for the E2E replay buffer interactions?

Yeah, I think that's a good idea, but I thought the benchmark I added already exercises the end-to-end interface, so I'm not entirely sure what you mean. Do you have a specific case in mind? Maybe something that actually uses the sampled data for something other than just extend/sample?
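
(For instance, one hypothetical end-to-end step that consumes the sampled batch rather than timing extend/sample in isolation; the network, loss, and field names are all made up for illustration:)

    import torch
    from tensordict import TensorDict
    from torchrl.data import LazyTensorStorage, ReplayBuffer

    net = torch.nn.Linear(4, 1)
    optim = torch.optim.SGD(net.parameters(), lr=1e-2)
    rb = ReplayBuffer(storage=LazyTensorStorage(1_000), batch_size=64)

    @torch.compile
    def train_step(data):
        rb.extend(data)
        batch = rb.sample()
        loss = net(batch["obs"]).pow(2).mean()  # stand-in loss
        loss.backward()
        optim.step()
        optim.zero_grad()
        return loss

    train_step(TensorDict({"obs": torch.randn(128, 4)}, batch_size=[128]))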

@kurtamohler merged commit 2a07f4c into pytorch:main on Nov 4, 2024 (72 of 80 checks passed)
vmoens pushed a commit that referenced this pull request Nov 14, 2024
Labels: CLA Signed, performance
Projects: none yet
3 participants