This table provides optimal training settings for LoRA fine-tuning based on the number of training images.
# of Images | Network Rank (`network_dim`) | Repeats | Epochs | Batch Size | Network Alpha (`alpha`)
---|---|---|---|---|---
≤ 50 | 8~16 | 7~10 | 15~20 | 2~4 | Rank / 2 (e.g., 16/8)
50~100 | 16 | 6~8 | 12~18 | 4 | Rank / 2 (e.g., 16/8)
100~200 | 16~32 | 5~7 | 12~15 | 4~8 | Rank / 2 (e.g., 16/8 or 32/16)
200~300 | 16~32 | 5 | 10~12 | 4~8 | Rank / 2 (e.g., 16/8 or 32/16)
300~500 | 32 | 4~5 | 8~12 | 8 | Rank / 2 (e.g., 32/16)
500+ | 32~64 | 3~4 | 6~10 | 8~16 | Rank / 2 (e.g., 32/16 or 64/32)
- `network_dim` (Network Rank) → Higher values capture more detail but may overfit.
- Repeats → Smaller datasets need more repeats to reinforce learning.
- Epochs → Smaller datasets need more epochs; larger datasets need fewer to prevent overfitting.
- Batch Size → Lower for small datasets; higher for large datasets for faster training.
- Network Alpha (`alpha`) → Rank / 2 keeps the LoRA balanced, preventing excessive influence.
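The "Rank / 2" rule can be checked numerically: the LoRA update is scaled by `alpha / rank`, so setting alpha to half the rank pins that multiplier at 0.5 no matter which `network_dim` you pick. A minimal sketch (the helper name is mine, not from any library):

```python
def lora_scale(alpha: float, rank: int) -> float:
    """Effective multiplier applied to the LoRA weight update (alpha / rank).

    With the "Rank / 2" rule, this stays at 0.5 as the rank changes,
    so raising network_dim does not also raise the LoRA's influence.
    """
    return alpha / rank

# 16/8, 32/16, and 64/32 from the table all give the same 0.5 scale:
print(lora_scale(8, 16), lora_scale(16, 32), lora_scale(32, 64))
```

This is why the table pairs every rank with alpha = rank / 2: you can tune capacity (rank) without simultaneously changing the strength of the adaptation.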
- If you have 50~100 images, use: `network_dim` = 16, Repeats = 6~8, Epochs = 12~18, Batch Size = 4
- If you have 200~300 images, use: `network_dim` = 16~32, Repeats = 5, Epochs = 10~12, Batch Size = 4~8
- If you have 500+ images, use: `network_dim` = 32~64, Repeats = 3~4, Epochs = 6~10, Batch Size = 8~16
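Repeats, epochs, and batch size together determine how many optimization steps a run takes, since each epoch processes every image `repeats` times in batches. A rough calculator (the helper name is hypothetical; real trainers may round slightly differently):

```python
import math

def total_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Approximate total optimization steps for a LoRA training run.

    Each epoch sees num_images * repeats samples; dividing by the batch
    size (rounded up) gives steps per epoch.
    """
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

# e.g., 80 images with the 50~100 row (repeats=8, epochs=15, batch=4):
print(total_steps(80, 8, 15, 4))  # 160 steps/epoch * 15 epochs = 2400 steps
```

Comparing rows this way shows the table's intent: smaller datasets compensate with more repeats and epochs so total steps stay in a similar range.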
- These values are guidelines; if results aren’t optimal, adjust accordingly.
- If the LoRA is too weak, increase Repeats or Epochs.
- If the LoRA overfits (generates only training images), reduce Repeats or increase dataset size.
- Testing different Network Ranks (`network_dim`) can improve results based on dataset diversity.
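For reference, here is how one row of the table might translate into a training command, assuming the kohya-ss `sd-scripts` trainer (flag names follow its `train_network.py`; the paths and model file are placeholders, and repeats are set through the dataset folder name rather than a flag):

```shell
# Hedged sketch for the 50~100 image row: dim 16, alpha 8, batch 4, 15 epochs.
# Repeats are encoded in the dataset subfolder name, e.g. "./dataset/7_mystyle"
# means 7 repeats per image.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="model.safetensors" \
  --train_data_dir="./dataset" \
  --output_dir="./output" \
  --network_module=networks.lora \
  --network_dim=16 \
  --network_alpha=8 \
  --train_batch_size=4 \
  --max_train_epochs=15
```

If your trainer or UI exposes these as fields instead of flags, map them by name; the values themselves come straight from the table above.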