
Add Muon Optimizer to contrib #1126

Open · wants to merge 1 commit into main
Conversation

leloykun commented Nov 2, 2024

Adds support for @KellerJordan's Muon optimizer as detailed here: https://kellerjordan.github.io/posts/muon/

This optimizer allowed us to reduce the training time of a GPT-2 level model down to just ~3.5 minutes on 8xH100s. See: https://github.com/KellerJordan/modded-nanogpt

[image: modded-nanogpt training speed results]

This implementation also supports the adaptive version of Muon, which adapts to the scale of the gradients as they change during training. To enable this feature, simply pass adaptive=True, i.e. muon(adaptive=True, ...). See more at: https://github.com/leloykun/adaptive-muon


Muon is equivalent to Shampoo without accumulation. I.e., take Shampoo's update rule, discard the accumulation in the left and right preconditioners, then simplify the matrix operations. Alternatively, Muon can also be thought of as steepest descent under the spectral norm. For more details, please refer to @jxbz's paper: https://arxiv.org/pdf/2409.20325.
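For intuition, a single Muon-style update is momentum followed by orthogonalization of the matrix-shaped update. The sketch below is illustrative, not this PR's implementation: it uses an exact SVD where Muon uses Newton-Schulz iterations, and the learning rate and momentum values are placeholder choices.

```python
import numpy as np

def muon_step(w, g, buf, lr=0.02, momentum=0.95):
    """One illustrative Muon-style update for a matrix parameter `w`."""
    # Standard momentum accumulation on the gradient.
    buf = momentum * buf + g
    # Orthogonalize the update: replace the momentum buffer with the
    # nearest semi-orthogonal matrix, i.e. set all singular values to 1.
    # Muon approximates this with Newton-Schulz iterations; an exact SVD
    # is used here only for clarity.
    u, _, vt = np.linalg.svd(buf, full_matrices=False)
    update = u @ vt
    return w - lr * update, buf

# Example: one step on a random 4x6 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 6))
g = rng.normal(size=(4, 6))
w_new, buf = muon_step(w, g, np.zeros_like(w))
```

Because all singular values of the applied update are 1, every direction in the update moves the weights by the same amount regardless of how the raw gradient was scaled.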

More generally, steepest descent under the Schatten-p norm, for some large p, can be thought of as a variant of Muon (note that the Schatten-∞ norm is the spectral norm). A (rough) proof of this claim can be found here: https://x.com/leloykun/status/1846842887839125941. This implementation supports these variants by allowing the user to pass in custom coefficients for the Newton-Schulz iterations.
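To make the coefficient knob concrete: the orthogonalization is a polynomial iteration on the normalized update matrix, and each step applies p(t) = a·t + b·t³ + c·t⁵ to every singular value. The sketch below is illustrative; the (3.4445, -4.7750, 2.0315) quintic coefficients are the ones popularized by the modded-nanogpt speedrun, and the step count is a placeholder.

```python
import numpy as np

def newton_schulz(g, steps=5, coeffs=(3.4445, -4.7750, 2.0315)):
    """Approximately orthogonalize `g` via a quintic Newton-Schulz iteration."""
    a, b, c = coeffs
    # Normalize so all singular values are <= 1 (needed for convergence).
    x = g / (np.linalg.norm(g) + 1e-7)
    transpose = x.shape[0] > x.shape[1]
    if transpose:  # work in the wide orientation so x @ x.T stays small
        x = x.T
    for _ in range(steps):
        s = x @ x.T
        # Applies p(t) = a*t + b*t^3 + c*t^5 to each singular value of x.
        x = a * x + (b * s + c * (s @ s)) @ x
    return x.T if transpose else x

# Example: approximately orthogonalize a random 8x16 update.
rng = np.random.default_rng(1)
ortho = newton_schulz(rng.normal(size=(8, 16)), steps=10)
```

With these particular coefficients the singular values are driven into a band around 1 rather than to exactly 1, which is what lets a handful of iterations (even in bfloat16) suffice; other coefficient choices trade convergence speed for tightness of that band.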

@leloykun leloykun changed the title Add support for the muon optimizer Add Muon Optimizer to contrib Nov 2, 2024
leloykun (Author) commented Nov 3, 2024

Hi all!

How do I restrict the tests to exclude vectors as weights?

vroulet (Collaborator) commented Nov 4, 2024

Hello @leloykun,

Looks like an interesting contribution, thank you!

> How do I restrict the tests to exclude vectors as weights?

I don't fully understand your question. Could you give more context?

Other questions/remarks:

  • How does this optimizer treat vector-shaped values? The "muon_iterator" could run on vectors but not return what you want, so how do you make the distinction? Should it take a mask to only apply to matrices? Should it raise a ValueError?
  • Could you also add the mathematical description to the docstring? That would greatly help.
  • Finally, please put references at the end of the docstring (we'll adopt this format with #1129).

Thank you!

leloykun (Author) commented Nov 4, 2024

Hi @vroulet,

Muon is only defined for matrix-shaped values.

I'm thinking of raising an error when the input is vector-shaped, but where's the best place to put it? If there are other optimizers here that do this, can you point me to them?

vroulet (Collaborator) commented Nov 11, 2024

Hello @leloykun

  • I would put a test in the init function.
  • What is the algorithm for? Most architectures have non-matrix-shaped parameters (for example, just the scalar parameters of a layer norm). If the algorithm is specific to some problem, could you make a quick notebook explaining the setup? (Is it, for example, for finetuning with LoRA?)
  • In any case, some doctest would be good.
  • For the tests, you may add a synthetic linear classification problem with centered data to test the algorithm for example (again if you have a specific setup let me know).

Thanks again and sorry for the delay!

vroulet (Collaborator) commented Nov 26, 2024

Hey @leloykun,

Looking at the original repository, it seems that the best course of action is to:

  • keep the scale_by_muon as you did
  • make muon a partition instance, where the partition is inferred from the dimensions of the leaves in the optimizer
  • add in the tests a simple mlp and make all optimizers pass it

Please let me know if you are willing to pursue the PR.
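The partition rule described above (infer the partition from leaf dimensions) might be sketched as a labeling pass over the parameter tree: 2D leaves get Muon, everything else falls back to another optimizer. This is an illustrative assumption, not the PR's code: the label names and the Adam fallback are placeholders, and in optax the resulting labels could feed something like optax.multi_transform.

```python
import numpy as np

def label_by_shape(params):
    """Label matrix (2D) leaves for Muon and all other leaves for a fallback
    optimizer such as Adam. Operates on a flat dict of arrays for illustration;
    a real implementation would map over an arbitrary pytree."""
    return {name: ("muon" if leaf.ndim == 2 else "adam")
            for name, leaf in params.items()}

# Example: a tiny MLP-like parameter dict.
params = {
    "dense_kernel": np.zeros((128, 64)),  # matrix  -> muon
    "dense_bias": np.zeros((64,)),        # vector  -> adam
    "layernorm_scale": np.zeros(()),      # scalar  -> adam
}
labels = label_by_shape(params)
```

Keying the partition on ndim rather than on parameter names keeps the rule architecture-agnostic, which matches the suggestion that the partition be inferred from the leaves themselves.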

rdyro (Collaborator) commented Jan 2, 2025

Hi @leloykun, I'll be happy to take over merging this PR on our end, let me know once you want me to take a look at the changes / start the merge process.

leloykun (Author) commented Jan 2, 2025

Hi @rdyro! Thank you so much for the help. This PR should now be ready for review. I've also already addressed @vroulet's concerns. Please let me know if there is anything else I need to do on my end.

Review comments on optax/contrib/_muon.py (outdated, resolved)
rdyro (Collaborator) left a comment

Can you introduce these cosmetic changes? The PR looks really good!

rdyro (Collaborator) commented Jan 2, 2025

> Hi @rdyro! Thank you so much for the help. This PR should now be ready for review. I've also already addressed @vroulet's concerns. Please let me know if there is anything else I need to do on my end.

Hey, I just reviewed the PR, it looks great, thanks for addressing @vroulet's comments. I left some cosmetic comments.

Once you get those in, can you squash your commits please?

rdyro (Collaborator) commented Jan 2, 2025

I'm sorry, I misled you. Because we're supporting Python 3.9, Union and Tuple are necessary, exactly as you had it (using Optional would be OK too, but you can leave it as Union[A, None]). I'll start the merge once you revert those Union/Tuple changes.

leloykun (Author) commented Jan 2, 2025

@rdyro, I've just addressed your nits and squashed the commits.

I kept the Union & Optional type hints, though, since support for the | syntax was only added in Python 3.10, not 3.9. See: https://peps.python.org/pep-0604/
