
Distributional Alignment with LoRA to Larger Models

CS182 {joshua.liao, bplate, carolinewu01, patrickgu}@berkeley.edu

Code adapted from https://github.com/cloneofsimo/lora.

For results and analysis, please see the PDF file "Project Report".

Installation

pip install git+https://github.com/cloneofsimo/lora.git
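To verify the installation, the package can be imported directly. This is a minimal sketch; lora_diffusion is the Python module installed by the cloneofsimo/lora repository, which also provides the lora_pti command used below.

python -c "import lora_diffusion; print(lora_diffusion.__file__)"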

Training

export MODEL_NAME="stable-diffusion-v1-5"
export INSTANCE_DIR="./data/real_images"
export OUTPUT_DIR="./output/model"

lora_pti \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --scale_lr \
  --learning_rate_unet=1e-4 \
  --learning_rate_text=1e-5 \
  --learning_rate_ti=5e-4 \
  --color_jitter \
  --lr_scheduler="linear" \
  --lr_warmup_steps=0 \
  --placeholder_tokens="<s1>|<s2>" \
  --use_template="style" \
  --save_steps=100 \
  --max_train_steps_ti=1000 \
  --max_train_steps_tuning=1000 \
  --perform_inversion=True \
  --clip_ti_decay \
  --weight_decay_ti=0.000 \
  --weight_decay_lora=0.001 \
  --continue_inversion \
  --continue_inversion_lr=1e-4 \
  --device="cuda:0" \
  --lora_rank=1

Training data can be found in the real_images folder; the images in the fake_images folder were generated by the base Stable Diffusion model.
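Inference

After training, the learned LoRA and textual-inversion weights can be loaded back into a diffusers pipeline for generation. The sketch below uses the patch_pipe and tune_lora_scale helpers from the lora_diffusion package; the checkpoint filename (final_lora.safetensors), the base model path, and the prompt with the learned placeholder tokens are assumptions and may need adjusting to your setup.

# Minimal inference sketch. Assumptions: the trained weights were written to
# ./output/model/final_lora.safetensors and the base model is the same
# Stable Diffusion v1.5 checkpoint used for training.
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import patch_pipe, tune_lora_scale

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Inject the trained LoRA weights and the learned placeholder tokens (<s1>, <s2>)
# into the UNet and text encoder.
patch_pipe(
    pipe,
    "./output/model/final_lora.safetensors",
    patch_text=True,
    patch_ti=True,
    patch_unet=True,
)

# Scale how strongly the LoRA weights influence generation (1.0 = full strength).
tune_lora_scale(pipe.unet, 1.0)
tune_lora_scale(pipe.text_encoder, 1.0)

# Example prompt using the learned style tokens (hypothetical prompt).
image = pipe(
    "a photo in the style of <s1> <s2>",
    num_inference_steps=50,
    guidance_scale=7,
).images[0]
image.save("sample.png")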
