How harsh is 10GB VRAM requirement? #38
Comments
I managed with a 256 x 512 resolution. Just pass -W 256 -H 512, or vice versa. I have an RTX 2060 Super.
This fork requires a lot less VRAM, according to most Reddit comments.
Will try the new flags and forks. Meanwhile, the defaults on a 3080 10GB fail with:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.00 GiB (GPU 0; 9.78 GiB total capacity; 5.62 GiB already allocated; 2.25 GiB free; 5.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have installed nightly torch, because the stable build I had only supports up to sm_70, which doesn't cover Ampere cards.
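The error message itself points at one mitigation: when reserved memory is much larger than allocated memory, capping the allocator's split size can reduce fragmentation. A minimal sketch, assuming a recent PyTorch; the 128 MB value is an arbitrary starting point to tune, not a recommendation:

```python
import os

# Must be set before CUDA is initialized (safest: before importing torch).
# The value is in MB; smaller values fight fragmentation at some cost.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
print(torch.cuda.is_available())  # allocator now uses the capped split size
```

The same setting also works as a shell environment variable (`export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`) before launching the script.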
Decreasing the resolution helped; 10GB now does its thing, thanks.
I've been running 512x512 just fine on 1.4 with the 6GB RTX 2060 in my laptop, animating prompt-walk morphs:

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_auth_token=True)

I also had to make sure the rest of the desktop was rendering on the integrated GPU instead of eating VRAM, to free enough space to get it running. Note that for 1.3 I was using the basujindal fork of txt2img and img2img, and those were working fine too; that fork pulls a lot of VRAM tricks, I think, to get under 4GB, but with speed tradeoffs. 1.4 probably works fine with that fork too, I just haven't tried it yet.
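For reference, a fleshed-out version of that diffusers setup, as a minimal sketch assuming a recent diffusers version and an accepted model license on the Hub; the `enable_attention_slicing()` call is an optional memory saver, not something the commenter mentioned:

```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 weights roughly halve VRAM use compared to the fp32 default.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    use_auth_token=True,  # needs a Hugging Face token with the license accepted
)
pipe = pipe.to("cuda")

# Optional: computes attention in slices, cutting peak memory at a small
# speed cost; helpful on 6GB cards like the commenter's 2060.
pipe.enable_attention_slicing()

image = pipe("a lighthouse at dusk").images[0]  # hypothetical prompt
image.save("out.png")
```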
Or maybe just use this: https://huggingface.co/spaces/stabilityai/stable-diffusion
I'm currently generating up to 640x960 or 832x704 (around 600k pixels) with a 2GB GTX 1050 using --lowvram. It's pretty slow, but it's about 3 times faster than just using the CPU, and impressive that it even runs with 2GB of VRAM.
I have 6GB VRAM, and here is what I got running txt2img:
RuntimeError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 5.80 GiB total capacity; 4.12 GiB already allocated; 682.94 MiB free; 4.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
with only the options --prompt "tower" and --plms.
Maybe I can reduce the quality or something?
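Per the resolution advice earlier in the thread, the main lever is to shrink the output. As a hypothetical sketch, reusing the diffusers `pipe` from the example above (both dimensions must be divisible by 8):

```python
# Smaller latents mean a much smaller peak allocation:
# 384x384 has ~44% fewer pixels than 512x512.
image = pipe(
    "tower",
    height=384,  # must be divisible by 8
    width=384,
).images[0]
image.save("tower_384.png")
```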