This table provides recommended training settings for LoRA fine-tuning based on the number of training images.
| # of Images | Network Rank (`network_dim`) | Repeats | Epochs | Batch Size | Network Alpha (`network_alpha`) |
|---|---|---|---|---|---|
| ≤ 50 | 8~16 | 7~10 | 15~20 | 2~4 | Rank / 2 (e.g., 16/8) |
| 50~100 | 16 | 6~8 | 12~18 | 4 | Rank / 2 (e.g., 16/8) |
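The table's recommendations can be expressed as a small helper. This is an illustrative sketch, not part of any training library: the function name `lora_settings` and the specific values picked from each range are assumptions, and counts above ~100 images are left uncovered because the table does not address them.

```python
def lora_settings(num_images: int) -> dict:
    """Suggest LoRA hyperparameters per the table above.

    Concrete values are picked from within each recommended range;
    the alpha follows the "Rank / 2" rule (e.g., dim 16 -> alpha 8).
    """
    if num_images <= 50:
        rank = 16  # upper end of the 8~16 range
        return {
            "network_dim": rank,
            "network_alpha": rank // 2,  # Rank / 2 rule
            "repeats": 10,               # within 7~10
            "epochs": 20,                # within 15~20
            "batch_size": 2,             # within 2~4
        }
    if num_images <= 100:
        rank = 16
        return {
            "network_dim": rank,
            "network_alpha": rank // 2,  # Rank / 2 rule
            "repeats": 8,                # within 6~8
            "epochs": 18,                # within 12~18
            "batch_size": 4,
        }
    raise ValueError("table only covers datasets up to ~100 images")
```

Note that in trainers such as kohya-ss sd-scripts, repeats are typically set through the dataset folder name or dataset config rather than a CLI flag, so treat the `repeats` value here as a per-image repeat count to apply in your dataset definition.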