
CUDA out of memory, when it shouldn't be. #53

Open
WarmCatUK opened this issue Jan 3, 2022 · 2 comments
@WarmCatUK commented Jan 3, 2022

Apologies for the rubbish formatting, but GitHub's code insert is broken.

Using Pixray Pixeldraw, with the settings as they come, aside from:
aspect: square
drawer: vqgan

```
/content/pixray/vqgan.py in load_model(self, settings, device)
    156         self.e_dim = model.quantize.e_dim
    157         self.n_toks = model.quantize.n_e
--> 158         self.z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
    159         self.z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
    160

RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 15.90 GiB total capacity; 14.78 GiB already allocated; 15.75 MiB free; 14.88 GiB reserved in total by PyTorch)
```

Using GPU on Colab Pro (again, GitHub's code insert doesn't work):

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.44       Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   42C    P0    32W / 250W |  16265MiB / 16280MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```
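If the 14.78 GiB the traceback reports is left over from an earlier cell rather than from this run, a common Colab workaround is to release it before reloading the model, instead of restarting the runtime. A minimal sketch (generic PyTorch, not a pixray API; the `model` name is hypothetical):

```python
# Minimal sketch: release GPU memory held by a previous run in the same notebook.
import gc
import torch

# If a large object from an earlier cell is still referenced, drop it first.
# (`model` is a hypothetical name for whatever the previous cell created.)
# del model

gc.collect()               # drop unreachable Python-side references
torch.cuda.empty_cache()   # hand cached blocks back to the CUDA driver

print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")
```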

@nkv123 commented Jul 25, 2022

From what I see, the requested 128.00 MiB cannot be taken from the 15.75 MiB of free space, hence the "RuntimeError: CUDA out of memory", since 128 > 15.75.
PyTorch has reserved 14.88 GiB in total.
Other programs such as a web browser or the graphical environment also take some GPU memory; altogether, 14.78 GiB of the 15.90 GiB is already allocated.
I don't know the exact setting, but try using a smaller model size.
An alternative is to run on the CPU and system RAM, if that is larger than the GPU memory.
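To check those numbers programmatically before loading anything (a generic PyTorch query, not something pixray exposes; it assumes a PyTorch version that provides `torch.cuda.mem_get_info`):

```python
# Generic check: compare what the failing allocation needs against what the
# device actually has free.
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()  # (free, total) in bytes
needed_bytes = 128 * 2**20                            # the 128.00 MiB from the traceback

print(f"free: {free_bytes / 2**20:.2f} MiB of {total_bytes / 2**30:.2f} GiB total")
if needed_bytes > free_bytes:
    print("128 MiB does not fit -> RuntimeError: CUDA out of memory")
```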

@nkv123 commented Jul 25, 2022

You could also try increasing cuts. On my PC, running via CPU and RAM, usage varies between 8 and 18 GB of RAM for the command `cog run python pixray.py --drawer=pixel --prompt=sunrise -cuts=50 -bats=1 --outdir sunrise04`.
Also note that one iteration has slowed from 4 s/it to 20 s/it, so the result is "it barely works".
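For reference, falling back to the CPU in plain PyTorch is just device selection; this is a generic sketch, not a pixray option, and the 512 MiB threshold and `model` variable are made up for illustration:

```python
# Generic PyTorch device fallback, for illustration only.
import torch

def pick_device(min_free_mib: float = 512.0) -> torch.device:
    """Use CUDA only if it is available and has at least `min_free_mib` MiB free."""
    if torch.cuda.is_available():
        free_bytes, _ = torch.cuda.mem_get_info()
        if free_bytes / 2**20 >= min_free_mib:
            return torch.device("cuda")
    return torch.device("cpu")  # slower (e.g. 4 s/it -> 20 s/it) but not VRAM-bound

device = pick_device()
print(f"running on {device}")
# model = model.to(device)  # `model` is hypothetical here
```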
