For something like https://www.instagram.com/holosomnia/
- 768 x 1024
- High guidance scale, 18k and above
- 200-250 steps
- tv_scale 4000+
- Lower range_scale to 80
- Higher sat_scale, 2000+
- No secondary model (important)
- Add ViT-L/14, or if you have the RAM, the 336px variant (very nice)
- Bump eta to 0.9 (important)
- Cut cut_ic_pow to 10
- Split the cut schedule into blocks of 200 steps and do something like 10/8/6/2/0 for the overview cuts and the opposite for the innercut, and just play with the numbers a bunch
- Prompts: always add beeple for blur, orbs and color. To remove orbs, add "globe:-1"; to reduce blur, add "dof:-1"
- kinkade for color
- Try various "color-heavy" artists
- Some artists don't do much, and some really change the result
- Use prompts that make sense for the artists
- Add prompts and negative weights to remove unwanted aspects as you iterate
- Just try a LOT of variations
- ALWAYS use partial saves, take the ~90% partial save as your final, and ALWAYS run it through Real-ESRGAN Inference Demo.ipynb to upscale and make it crisp
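The cut-schedule split above can be written as Disco Diffusion-style schedule strings, which are Python list expressions with one entry per step. A minimal sketch, assuming a 1000-step run split into 200-step blocks (the 10/8/6/2/0 numbers come straight from the tip; tune them yourself):

```python
# One entry per step; 1000 steps split into 200-step blocks.
cut_overview = "[10]*200 + [8]*200 + [6]*200 + [2]*200 + [0]*200"
# The opposite ramp for the inner cuts:
cut_innercut = "[0]*200 + [2]*200 + [6]*200 + [8]*200 + [10]*200"

# The notebook evaluates these strings into per-step lists:
overview = eval(cut_overview)
innercut = eval(cut_innercut)
assert len(overview) == len(innercut) == 1000
# Step 0: 10 overview cuts, 0 inner cuts; by the last step it's reversed,
# so early steps lock in composition and late steps refine detail.
```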
Example prompt:
[
"A beautiful ultradetailed anime illustration of a city street by beeple, makoto shinkai, and thomas kinkade, anime art wallpaper 4k, trending on artstation:3",
"anime",
"car:-1",
"dof:-1",
"blur:-1"
]
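Each entry above follows the text:weight convention (no weight means a default of 1, negative weights steer away from a concept). A hypothetical parser, just to illustrate the convention, assuming the weight is whatever follows the last colon:

```python
def parse_prompt(prompt: str):
    """Split a 'text:weight' prompt; weight defaults to 1.0.
    Splitting on the LAST colon keeps any colons inside the text intact."""
    text, sep, weight = prompt.rpartition(":")
    # Only treat the tail as a weight if it actually looks like a number.
    if sep and weight.strip().lstrip("+-").replace(".", "", 1).isdigit():
        return text, float(weight)
    return prompt, 1.0

print(parse_prompt("car:-1"))  # ('car', -1.0)
print(parse_prompt("anime"))   # ('anime', 1.0)
```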
I get
RuntimeError: CUDA out of memory. Tried to allocate 1.41 GiB (GPU 0; 15.90 GiB total capacity; 12.12 GiB already allocated; 993.75 MiB free; 14.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
each time I uncheck 'use_secondary_model'.
How do you avoid that?
I'm pro+...
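The error text itself points at one knob worth trying: PyTorch's allocator config. Disabling the secondary model makes guidance run through the full diffusion model, so VRAM use jumps; fragmentation tuning may or may not be enough, and lowering resolution or cut counts is the other lever. A sketch of setting max_split_size_mb (128 is an arbitrary starting value, not a tested recommendation from this thread):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation,
# i.e. before importing torch in a fresh notebook runtime.
# max_split_size_mb caps block splitting to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# ...then import torch and run the notebook as usual.
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # max_split_size_mb:128
```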