@appliedintelligencelab
Created August 22, 2024 18:20
Kohya SS (sd-scripts) FLUX.1 LoRA training command, using flux_train_network.py
accelerate launch --mixed_precision bf16 --num_cpu_threads_per_process 1 flux_train_network.py \
  --pretrained_model_name_or_path /home/hal/assets/models/unet/flux1-dev.sft \
  --clip_l /home/hal/assets/models/clip/clip_l.safetensors \
  --t5xxl /home/hal/assets/models/clip/t5xxl_fp16.safetensors \
  --ae /home/hal/assets/models/vae/ae.sft \
  --cache_latents_to_disk --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk \
  --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 \
  --seed 42 --gradient_checkpointing --mixed_precision bf16 --save_precision bf16 \
  --network_module networks.lora_flux --network_dim 4 --network_train_unet_only \
  --learning_rate 1e-4 --optimizer_type adafactor \
  --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" \
  --fp8_base --highvram \
  --max_train_epochs 16 --save_every_n_epochs 4 \
  --dataset_config dataset.toml \
  --output_dir /home/hal/assets/training-output/hnia --output_name flux-lora-hnia \
  --timestep_sampling sigmoid --model_prediction_type raw --guidance_scale 1.0 --loss_type l2

Note: the original command passed --optimizer_type twice (adamw8bit, then adafactor); since argparse keeps only the last occurrence and the --optimizer_args values pair with Adafactor, the redundant adamw8bit flag is dropped here with no change in behavior.
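The command expects a dataset definition in the dataset.toml passed to --dataset_config. A minimal sketch in sd-scripts' dataset config format is below; the image directory, resolution, and repeat count are illustrative assumptions, since the gist does not include the file:

  [general]
  caption_extension = ".txt"  # per-image captions in sidecar .txt files
  shuffle_caption = false     # kept off: the command caches text encoder outputs, which requires fixed captions

  [[datasets]]
  resolution = 1024  # base training resolution, a common choice for FLUX.1 LoRAs
  batch_size = 1

    [[datasets.subsets]]
    image_dir = "/home/hal/assets/datasets/hnia"  # placeholder path, not from the gist
    num_repeats = 10                              # placeholder repeat count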