CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacty of 4.00 GiB of which 0 bytes is free. Of the allocated memory 3.41 GiB is allocated by PyTorch, and 65.23 MiB is reserved by PyTorch but unallocated #72
Comments
I have enough resources but am still getting this error.
What's your VRAM? 16 GB may not be enough; you have to do some optimizations.
Understood. I will go back to AUTOMATIC1111 itself, which allows rendering photos at 350x350.
I'm getting a similar error with a 12 GB 3060. I tried setting max_split_size_mb to different values, to no avail. torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.76 GiB total capacity; 9.33 GiB already allocated; 37.69 MiB free; 9.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
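For anyone trying the max_split_size_mb suggestion from the error message, a minimal sketch of one way to set it (the value 128 here is an arbitrary example; the variable must be set before PyTorch first initializes the CUDA allocator):

```python
# Set the allocator config before the first CUDA call, ideally
# before importing torch at all.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # the allocator reads the variable on first CUDA use
```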
Same error here!
Is it possible to use pruned models for ControlNet and IP-Adapter to reduce memory usage?
I am also very interested to know whether it is possible to reduce the amount of VRAM needed, e.g. by pruning or quantizing.
I was able to run it on a 12 GB 3060. A single generation runs for approx. 1 min with 35-40 steps in ComfyUI, with the insightface model running on CPU. I call pipe.enable_xformers_memory_efficient_attention() on the StableDiffusionXLInstantIDPipeline, and before the VAE decoding:
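The snippet this comment refers to did not survive extraction. A minimal sketch of the setup described, assuming the standard diffusers memory helpers are inherited by StableDiffusionXLInstantIDPipeline (the checkpoint path is a placeholder, and the VAE step is a guess at the truncated part):

```python
import torch
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline

pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder base model
    torch_dtype=torch.float16,
)

# Memory-efficient attention, as mentioned above (requires xformers).
pipe.enable_xformers_memory_efficient_attention()

# Before VAE decoding: decode in tiles/slices so the full-resolution
# activations never sit in VRAM all at once. Both helpers exist on
# SDXL-family diffusers pipelines.
pipe.enable_vae_tiling()
pipe.enable_vae_slicing()
```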
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacty of 6.00 GiB of which 0 bytes is free. Of the allocated memory 12.03 GiB is allocated by PyTorch, and 355.03 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Same error; I have a 3060 with 6 GB of VRAM. Will this never work on 6 GB of VRAM?
Is ControlNet mandatory, or is it only for the optional pose image? If it is not mandatory, could it save VRAM if the ControlNet model is not loaded and applied?
The VRAM requirement on this seems to be about 14 GB. I have 12 GB on my 3060, and whether my workflow completes is totally hit or miss. Sometimes it runs to completion (very slowly, as it has been offloaded to system RAM); other times it blows up with an out-of-memory error after the next-to-last step hits 100% but before the final image is created. This really needs to be optimized better if at all possible.
I enabled xformers but it made zero difference, and I don't see where to enable the VAE optimization.
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
I don't know what to do. I tried many things; even in ComfyUI I am getting the same issue.
Forget about running it with a 4 GB GPU; you may run it on CPU. Hopefully I will make a Kaggle notebook and an advanced UI. Follow me on YouTube.
The config attributes {'controlnet_list': ['controlnet', 'RPMultiControlNetModel'], 'requires_aesthetics_score': False} were passed to StableDiffusionXLInstantIDPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'controlnet_list': ['controlnet', 'RPMultiControlNetModel'], 'requires_aesthetics_score': False, 'safety_checker': None} are not expected by StableDiffusionXLInstantIDPipeline and will be ignored.
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:03<00:00, 2.03it/s]
LCM
The config attributes {'skip_prk_steps': True} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
default: num_inference_steps=2, guidance_scale=2
Traceback (most recent call last):
File "C:\Users\gibso\pinokio\api\instantid.git\app\app.py", line 91, in
pipe.cuda()
File "C:\Users\gibso\pinokio\api\instantid.git\app\pipeline_stable_diffusion_xl_instantid.py", line 489, in cuda
self.to('cuda', dtype)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 869, in to
module.to(device, dtype)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
return self._apply(convert)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 7 more times]
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
param_applied = fn(param)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError:. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
(env) C:\Users\gibso\pinokio\api\instantid.git\app>
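A hedged aside on the pipe.cuda() call that fails in the traceback above: diffusers pipelines inherit enable_model_cpu_offload(), which keeps only the active sub-module on the GPU and is often enough to fit a small VRAM budget. A sketch, assuming the InstantID pipeline exposes the standard DiffusionPipeline API (requires the accelerate package):

```python
# Instead of moving every component to the GPU at once:
# pipe.cuda()

# Keep weights in system RAM and move each sub-module (text encoders,
# UNet, VAE) onto the GPU only for the duration of its forward pass.
pipe.enable_model_cpu_offload()
```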