[SFT VLM] Added support for Molmo models via standalone script sft_vlm_molmo #2236

Open · wants to merge 3 commits into main
Conversation

sergiopaniego
Contributor

What does this PR do?

Fixes #2136.

This PR presents a standalone script that adds support for Molmo models. It may benefit from generalization to make it compatible with sft_vlm.py.

This notebook contains a reproducible version, both by running the script and by using the code directly.
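
For reference, here is a minimal sketch of what the core pieces of such a standalone script might look like, modeled on TRL's existing sft_vlm.py. This is an illustration rather than the PR's actual code: the model ID, the dataset, and the assumption that Molmo's remote-code processor behaves like other VLM processors are all illustrative.

```python
# Illustrative sketch only -- not the actual sft_vlm_molmo.py from this PR.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoProcessor
from trl import SFTConfig, SFTTrainer

model_id = "allenai/Molmo-7B-D-0924"

# Molmo ships its modeling and processing code on the Hub, so trust_remote_code is required.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
)

dataset = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft", split="train")


def collate_fn(examples):
    # Assumption: these calls mirror TRL's sft_vlm.py; Molmo's remote-code processor
    # may expose a different interface, in which case this collator needs adapting.
    texts = [processor.apply_chat_template(ex["messages"], tokenize=False) for ex in examples]
    images = [ex["images"] for ex in examples]
    batch = processor(text=texts, images=images, padding=True, return_tensors="pt")
    labels = batch["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100  # mask padding in the loss
    batch["labels"] = labels
    return batch


training_args = SFTConfig(
    output_dir="molmo-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    remove_unused_columns=False,
    dataset_kwargs={"skip_prepare_dataset": True},  # the collator does all preprocessing
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=collate_fn,
    tokenizer=processor.tokenizer,  # processing_class=... on newer TRL versions
)
trainer.train()
```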

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

@lewtun @edbeeching @qgallouedec

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@edbeeching
Collaborator

Hi @sergiopaniego, thanks for implementing this. Could you run `make precommit` to format the code so the quality tests pass (you may have to `pip install pre-commit`)?

We are discussing internally how feasible it is to harmonize this script with the other VLM training scripts; I will let you know when we reach a conclusion.

@sergiopaniego
Contributor Author

Updated!

Any updates on the harmonization discussion? I’m happy to make any modifications needed! 😊

@mshuffett

mshuffett commented Nov 4, 2024

@sergiopaniego so is this working in theory? It's also OOM'ing for me: it needs about 50 GB and my A100 only has around 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much, considering it is doing a LoRA?

Is it possible to set this up to train on multiple GPUs?

@sergiopaniego
Contributor Author

sergiopaniego commented Nov 17, 2024

> @sergiopaniego so is this working in theory? It's also OOM'ing for me: it needs about 50 GB and my A100 only has around 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much, considering it is doing a LoRA?
>
> Is it possible to set this up to train on multiple GPUs?

Sorry for the late response, @mshuffett. It still needs some polishing. While testing it, it seems that something is still missing from the artifacts shared for the model; you can see more details in the README. For example, since gradient checkpointing is disabled, memory consumption increases a lot.
Molmo support is also not yet merged into the official transformers repo: huggingface/transformers#33962
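
On the memory question above, these are the usual levers, sketched here under the assumption that Molmo's remote code works with bitsandbytes quantization and PEFT (not verified against this script):

```python
# Sketch of memory-reduction knobs (standard transformers/peft options; untested
# against Molmo's remote code): 4-bit quantization (QLoRA), LoRA adapters, and
# gradient checkpointing.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "allenai/Molmo-7B-D-0924"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

# Gradient checkpointing trades compute for memory. As noted above, the current
# Molmo artifacts may not support it, which is why memory usage stays high.
model.gradient_checkpointing_enable()

# LoRA freezes the base weights, but activations still dominate memory during the
# forward/backward pass, so LoRA alone is not enough to avoid the OOM.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules="all-linear",  # assumption; adjust to Molmo's linear layer names if needed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```

For multiple GPUs, the script can be launched with `accelerate launch` like the other TRL example scripts, optionally with a DeepSpeed ZeRO config to shard optimizer state and gradients.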
