Putting your waifu into Flux with LoRA: welcome to losercity

First, here is my SimpleTuner LoRA config, so you can get started with it. I used an 80GB A100 (thanks @bghira!).

LoRA repository:

I used 42 images, half of which were the subject and half of which were other random art. I used GPT-4o to caption half of the images and wrote short captions by hand for the other half.

Many people are trying to figure out Flux LoRAs, and we have made some progress in the SimpleTuner community. First, when training a low-rank adapter on the dev or schnell models, you are effectively retraining the objective, so the dev and schnell LoRAs have some heavy lifting to do. Not only that, but sampling is also affected: in order to sample correctly, you will have to use classifier-free guidance (CFG).

Because the SimpleTuner validation loop sampled in the default way (without CFG), it appeared as if LoRA training was not working. It was, though! The model was learning both how to perform CFG and how to make your subject.

So, I wrote a new pipeline and code to use CFG with our LoRA'd model.
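
As a rough illustration of what that involves, here is a minimal sketch of true CFG inside a denoising loop. The names (`transformer`, `scheduler`, `prompt_embeds`, `neg_embeds`) are placeholders for illustration only; this is not the actual pipeline code.

```python
import torch

@torch.no_grad()
def denoise_with_cfg(transformer, scheduler, latents, timesteps,
                     prompt_embeds, neg_embeds, cfg_scale=3.5):
    # True CFG: two forward passes per step, one conditional and one
    # unconditional, combined with the usual guidance formula.
    for t in timesteps:
        cond = transformer(latents, t, prompt_embeds)
        uncond = transformer(latents, t, neg_embeds)
        pred = uncond + cfg_scale * (cond - uncond)
        latents = scheduler.step(pred, t, latents).prev_sample
    return latents
```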

These are two images, one with non-CFG sampling and one with CFG sampling, both with the LoRA applied.

Appendix A

Oh no! The non-CFG image is a mess, and the CFG image is really blurry. How can we fix this? Just turn off classifier-free guidance at the beginning of sampling. Below is an image where CFG is turned off for an increasing number of initial timesteps.

Appendix B

Now we can see that we should turn off CFG for at least the first step, if not a few more.
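
That schedule is easy to sketch (again with placeholder names, not the gist's code): skip the unconditional pass and the CFG mix for the first step or few.

```python
import torch

@torch.no_grad()
def denoise_skip_early_cfg(transformer, scheduler, latents, timesteps,
                           prompt_embeds, neg_embeds, cfg_scale=3.5,
                           no_cfg_steps=1):
    for i, t in enumerate(timesteps):
        cond = transformer(latents, t, prompt_embeds)
        if i < no_cfg_steps:
            # CFG disabled for the earliest step(s): use the conditional
            # prediction on its own.
            pred = cond
        else:
            uncond = transformer(latents, t, neg_embeds)
            pred = uncond + cfg_scale * (cond - uncond)
        latents = scheduler.step(pred, t, latents).prev_sample
    return latents
```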

Finally, let's sample with our LoRA versus without:

Appendix C

Brilliant! It might still be a bit undertrained, but the influence of the LoRA is clear.

Thanks to @bghira for an A100 to train on, and everyone on the Terminus Discord server for helping figure this one out!

August 11th Update

I'm told that if you set the fake (distilled) classifier-free guidance value to 1.0 during training, you can still preserve the original distillation and don't need to use CFG! Setting it to any other value causes the model to unlearn the CFG distillation and re-introduces CFG, which might be beneficial in that the model learns to make more diverse outputs, but it makes inference less efficient.

I have started a new run here with the following config:

This sets --flux_guidance_value=1.0, which fixes the distilled guidance value to 1.0 for the whole run, and exports VALIDATION_GUIDANCE_REAL=1.0 to disable CFG during validation sampling.
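
The reason a real guidance value of 1.0 amounts to "no CFG" is that the guidance formula collapses to the conditional prediction at that scale, so only one forward pass per step is needed. A tiny illustrative check (hypothetical helper, not SimpleTuner code):

```python
import torch

def cfg_combine(uncond, cond, scale):
    # Standard classifier-free guidance combination.
    return uncond + scale * (cond - uncond)

cond, uncond = torch.randn(4), torch.randn(4)
# With scale=1.0 the guided prediction is exactly the conditional one.
assert torch.allclose(cfg_combine(uncond, cond, 1.0), cond)
```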

The new run seemed to converge after just 3,000 steps!

I am going to put this down as a success!

AmericanPresidentJimmyCarter commented Aug 10, 2024

Appendix A

[image: loona_blurred]

Appendix B

[image: concatenated_image]


AmericanPresidentJimmyCarter commented Aug 10, 2024

Appendix C

[image: concatenated_image2]

@sayakpaul

Thanks for the write-up!

@AmericanPresidentJimmyCarter

[image: test_flux_loona_grid_next_lora_sm]
