Unable to load model #135
Comments
May I ask what the memory capacity of your machine is?
I'm getting the same issue too, and I have 55 GB of RAM and a 3060 GPU.
Hello @bubbliiiing, please find the info: 8 GB VRAM + 8 GB shared. I tried all three options for GPU_memory_mode. Please let me know if you want any other info.
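I'm not certain how the three GPU_memory_mode options are implemented in this project, but if they map onto the standard diffusers offload strategies, this minimal sketch shows roughly what each one trades off (DiffusionPipeline and the model path here are stand-ins, not the project's actual loader):

```python
import torch
from diffusers import DiffusionPipeline

# Stand-in loader; the project presumably wraps something similar.
pipe = DiffusionPipeline.from_pretrained(
    "path/to/checkpoint",       # placeholder for the downloaded model
    torch_dtype=torch.float16,
)

# Moves whole sub-models to the GPU only while each one runs;
# moderate VRAM savings, small speed cost.
pipe.enable_model_cpu_offload()

# Alternative: offload layer by layer; lowest VRAM use, much slower.
# pipe.enable_sequential_cpu_offload()
```

Note that both offload strategies still keep the full weights in system RAM, which matches the diagnosis in the replies below.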
8 GB of GPU memory is probably not enough on its own, but the failure to load the model looks like it is caused by insufficient system RAM.
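If the crash happens while the weights are being read in, peak system-RAM usage can sometimes be reduced by loading in half precision with low-memory loading enabled; a minimal sketch using standard diffusers loading flags (the path is a placeholder):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "path/to/checkpoint",       # placeholder for the local model
    torch_dtype=torch.float16,  # halves the in-memory size of the weights
    low_cpu_mem_usage=True,     # avoid materializing extra fp32 copies while loading
)
```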
I have a 3090 and 32 GB of RAM (and about 128 GB of cache on SSD), and I see the same on Ubuntu. The script also logs me out of the system, either locally or over an SSH connection. I am trying the i2v model.
You may need some swap memory for now. We are working on a smaller model to fit within a lower memory limit (30 GB).
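To check up front whether a machine has the roughly 30 GB of combined RAM and swap suggested above, here is a small sketch using the third-party psutil package (the 30 GiB threshold is taken from the comment, not from the project):

```python
import psutil

GiB = 1024 ** 3
ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM  total/available: {ram.total / GiB:.1f} / {ram.available / GiB:.1f} GiB")
print(f"Swap total/free:      {swap.total / GiB:.1f} / {swap.free / GiB:.1f} GiB")

# Rough guard: the thread suggests ~30 GiB of RAM+swap headroom for loading.
if (ram.available + swap.free) < 30 * GiB:
    print("Warning: likely not enough memory to load the model.")
```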
For what it's worth, I had to increase my WSL2 memory limit to 48 GB of RAM (plus 8 GB of swap) to load the models; 32 GB wasn't enough.
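For anyone reproducing this: the WSL2 limits mentioned here live in `%UserProfile%\.wslconfig` on the Windows side and take effect after running `wsl --shutdown`. A config matching the numbers above would look like:

```ini
[wsl2]
memory=48GB
swap=8GB
```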
Thank you for testing.
Are you able to load the model so that it uses more than one GPU? Might it be stuck in a single GPU's VRAM (24 GB, perhaps)?
Thank you for your help. I've added 64 GB of RAM, and now I can load the model without issues. After loading, the model starts using GPU memory, but only one card is in use. Is there any approach to distribute the GPU load across multiple cards?
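I don't know whether this project supports multi-GPU inference natively, but recent diffusers releases can shard a pipeline across cards at load time via device_map; a hedged sketch (the path is a placeholder, and "balanced" is the only device-map strategy pipelines accept):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "path/to/checkpoint",       # placeholder for the local model
    torch_dtype=torch.float16,
    device_map="balanced",      # spread sub-models across all visible GPUs
)
print(pipe.hf_device_map)       # shows which GPU each component landed on
```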
Thank you, looking forward to it.
I'm having the same issue on a RunPod instance with 1× RTX 6000 Ada, 16 vCPUs, and 188 GB of RAM. Any solutions yet?
Original issue: I installed all dependencies, and after launching the WebUI I select Model checkpoints (model path).
I have downloaded the t2v model locally.
It starts to load the model, but after some time it just crashes without any error. Any solution?
Tried CUDA 11.8 and 12.1.
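A silent exit during model load is usually the Linux OOM killer rather than a CUDA problem, but it is still worth confirming that the installed PyTorch matches one of the CUDA versions tried and can actually see the GPU; a minimal check:

```python
import torch

print(torch.__version__)          # PyTorch build
print(torch.version.cuda)         # CUDA version this build was compiled against
print(torch.cuda.is_available())  # whether a usable GPU is visible
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(torch.cuda.get_device_name(0),
          f"{props.total_memory / 1024**3:.1f} GiB VRAM")
```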