torch.OutOfMemoryError on EC2 p3.8xlarge with HunyuanVideo Preprocessing Command #132

Open
kmlFaouzi opened this issue Dec 16, 2024 · 3 comments

kmlFaouzi commented Dec 16, 2024

Description:
I'm encountering a torch.OutOfMemoryError while running a preprocessing command for the HunyuanVideo model on an AWS EC2 instance.

Environment:

  • Instance Type: p3.8xlarge
  • Operating System: Linux/UNIX (Ubuntu)
  • GPUs: 4 x NVIDIA Tesla V100
  • GPU Memory: 16 GiB per GPU

Model Details:

  • Model: HunyuanVideo
  • Requirements:
    • Settings (height/width/frame): 720px x 1280px x 129f
    • GPU Peak Memory: 60GB (as stated in the README)

Steps to Reproduce:

  1. Use the pretrained models.
  2. Run the preprocessing command:
    python hyvideo/utils/preprocess_text_encoder_tokenizer_utils.py --input_dir ckpts/llava-llama-3-8b-v1_1-transformers --output_dir ckpts/text_encoder

Error:

torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1004.00 MiB. GPU 0 has a total capacity of 15.77 GiB of which 756.69 MiB is free. Including non-PyTorch memory, this process has 15.02 GiB memory in use. Of the allocated memory 14.58 GiB is allocated by PyTorch, and 144.06 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
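For context: the allocator hint in the traceback (`PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`) only mitigates fragmentation; it cannot fit an 8B-parameter text encoder into a single 16 GiB V100. Below is a minimal sketch of how such a checkpoint could instead be loaded in half precision and sharded across all four GPUs with Hugging Face `transformers` and `accelerate`. This is an illustration under assumptions (that the checkpoint loads as a `LlavaForConditionalGeneration` model and that `accelerate` is installed), not the repo's actual loading code:

```python
# Illustrative sketch only -- not the HunyuanVideo script's actual loading code.
# Assumes transformers >= 4.36 and accelerate are installed, and that the
# checkpoint loads as a LlavaForConditionalGeneration model.
import torch
from transformers import LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "ckpts/llava-llama-3-8b-v1_1-transformers",
    torch_dtype=torch.float16,  # fp16 halves the weight footprint (~16 GB for 8B params)
    device_map="auto",          # let accelerate shard layers across all visible GPUs
    low_cpu_mem_usage=True,
)
```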


jash101 commented Dec 17, 2024

You need a minimum of 45GB VRAM.
Maybe this fork can help: https://github.com/deepbeepmeep/HunyuanVideoGP

kmlFaouzi (Author) commented

Of course, I am using:
GPUs: 4 x NVIDIA Tesla V100
GPU Memory: 16 GiB per GPU
which is 64 GiB of VRAM in total.
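A note on the arithmetic here: CUDA memory is per device and is not pooled automatically, so a single process that loads the model onto GPU 0 still only has that card's 16 GiB available unless the weights are explicitly sharded (e.g. with `device_map="auto"` as sketched above). A quick check showing that capacity is reported per card (plain PyTorch, illustrative):

```python
import torch

# Each device reports its own capacity; four 16 GiB cards are not one 64 GiB pool.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
```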

kmlFaouzi (Author) commented

I tried this fork: https://github.com/deepbeepmeep/HunyuanVideoGP
and I got the same problem.
