⚡ LitGPT

Pretrain, finetune, evaluate, and deploy 20+ LLMs on your own data

Uses the latest state-of-the-art techniques:

✅ flash attention     ✅ fp4/8/16/32     ✅ LoRA, QLoRA, Adapter (v1, v2)     ✅ FSDP     ✅ 1-1000+ GPUs/TPUs



Lightning AI • Models • Quick start • Inference • Finetune • Pretrain • Deploy • Features • Training recipes (YAML)


Finetune, pretrain, and deploy LLMs Lightning fast ⚡⚡

LitGPT is a command-line tool designed to easily finetune, pretrain, evaluate, and deploy 20+ LLMs on your own data. It features highly optimized training recipes for the world's most powerful open-source large language models (LLMs).

We reimplemented all model architectures and training recipes from scratch for 4 reasons:

  1. Remove all abstraction layers and keep single-file implementations.
  2. Guarantee Apache 2.0 compliance to enable enterprise use without limits.
  3. Optimize each model's architectural details to maximize performance, reduce costs, and speed up training.
  4. Provide highly optimized recipe configs that we have tested at enterprise scale.

 

Choose from 20+ LLMs

LitGPT has 🤯 custom, from-scratch implementations of 20+ LLMs without layers of abstraction:

| Model | Model size | Author | Reference |
|---|---|---|---|
| Llama 3 | 8B, 70B | Meta AI | Meta AI 2024 |
| Llama 2 | 7B, 13B, 70B | Meta AI | Touvron et al. 2023 |
| Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023 |
| Mixtral MoE | 8x7B | Mistral AI | Mistral AI 2023 |
| Mistral | 7B | Mistral AI | Mistral AI 2023 |
| CodeGemma | 7B | Google | Google Team, Google Deepmind |
| ... | ... | ... | ... |
See full list of 20+ LLMs

 

All models

| Model | Model size | Author | Reference |
|---|---|---|---|
| CodeGemma | 7B | Google | Google Team, Google Deepmind |
| Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023 |
| Danube2 | 1.8B | H2O.ai | H2O.ai |
| Dolly | 3B, 7B, 12B | Databricks | Conover et al. 2023 |
| Falcon | 7B, 40B, 180B | TII UAE | TII 2023 |
| FreeWilly2 (Stable Beluga 2) | 70B | Stability AI | Stability AI 2023 |
| Function Calling Llama 2 | 7B | Trelis | Trelis et al. 2023 |
| Gemma | 2B, 7B | Google | Google Team, Google Deepmind |
| Llama 2 | 7B, 13B, 70B | Meta AI | Touvron et al. 2023 |
| Llama 3 | 8B, 70B | Meta AI | Meta AI 2024 |
| LongChat | 7B, 13B | LMSYS | LongChat Team 2023 |
| Mixtral MoE | 8x7B | Mistral AI | Mistral AI 2023 |
| Mistral | 7B | Mistral AI | Mistral AI 2023 |
| Nous-Hermes | 7B, 13B, 70B | NousResearch | Org page |
| OpenLLaMA | 3B, 7B, 13B | OpenLM Research | Geng & Liu 2023 |
| Phi | 1.3B, 2.7B | Microsoft Research | Li et al. 2023 |
| Platypus | 7B, 13B, 70B | Lee et al. | Lee, Hunter, and Ruiz 2023 |
| Pythia | {14,31,70,160,410}M, {1,1.4,2.8,6.9,12}B | EleutherAI | Biderman et al. 2023 |
| RedPajama-INCITE | 3B, 7B | Together | Together 2023 |
| StableCode | 3B | Stability AI | Stability AI 2023 |
| StableLM | 3B, 7B | Stability AI | Stability AI 2023 |
| StableLM Zephyr | 3B | Stability AI | Stability AI 2023 |
| TinyLlama | 1.1B | Zhang et al. | Zhang et al. 2023 |
| Vicuna | 7B, 13B, 33B | LMSYS | Li et al. 2023 |

 

Install LitGPT

Install LitGPT with all dependencies (including CLI, quantization, tokenizers for all models, etc.):

pip install 'litgpt[all]'
Advanced install options

 

Install from source:

git clone https://github.com/Lightning-AI/litgpt
cd litgpt
pip install -e '.[all]'

 

Quick start

After installing LitGPT, select the model and the action you want to take on that model (finetune, pretrain, evaluate, deploy, etc.):

# litgpt [action] [model]
litgpt  download  meta-llama/Meta-Llama-3-8B-Instruct
litgpt  chat      meta-llama/Meta-Llama-3-8B-Instruct
litgpt  finetune  meta-llama/Meta-Llama-3-8B-Instruct
litgpt  pretrain  meta-llama/Meta-Llama-3-8B-Instruct
litgpt  serve     meta-llama/Meta-Llama-3-8B-Instruct

 

Use an LLM for inference

Use an LLM for inference to test its chat capabilities, run evaluations, extract embeddings, and more. Here's an example showing how to use the Phi-2 LLM.


 

# 1) Download a pretrained model
litgpt download --repo_id microsoft/phi-2

# 2) Chat with the model
litgpt chat \
  --checkpoint_dir checkpoints/microsoft/phi-2

>> Prompt: What do Llamas eat?

Downloading certain models requires an additional access token; you can read more about this in the download documentation. For more information on the different inference options, refer to the inference tutorial.

 

Finetune an LLM

Finetune a model to specialize it on your own custom dataset:


 

# 1) Download a pretrained model
litgpt download --repo_id microsoft/phi-2

# 2) Download a sample finetuning dataset
curl -L https://huggingface.co/datasets/ksaw008/finance_alpaca/resolve/main/finance_alpaca.json -o my_custom_dataset.json

# 3) Finetune the model
litgpt finetune \
  --checkpoint_dir checkpoints/microsoft/phi-2 \
  --data JSON \
  --data.json_path my_custom_dataset.json \
  --data.val_split_fraction 0.1 \
  --out_dir out/custom-model

# 4) Chat with the model
litgpt chat \
  --checkpoint_dir out/custom-model/final
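
The finetuning example above points --data JSON at my_custom_dataset.json. As a rough guide, here is a minimal sketch of writing such a file yourself, assuming Alpaca-style instruction/input/output records like those in the finance_alpaca dataset (the exact schema accepted by the JSON data module may differ; see the data tutorial):

# Hedged sketch: create a tiny Alpaca-style dataset for --data JSON.
# The instruction/input/output keys mirror the finance_alpaca file used
# above; they are an assumption, not a documented contract.
import json

records = [
    {
        "instruction": "Explain what a stock dividend is.",
        "input": "",
        "output": "A stock dividend is a payment made to shareholders in the form of additional shares.",
    },
    {
        "instruction": "Summarize the following sentence.",
        "input": "Interest rates rose by 25 basis points this quarter.",
        "output": "Rates went up 0.25% this quarter.",
    },
]

with open("my_custom_dataset.json", "w") as f:
    json.dump(records, f, indent=2)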

 

Pretrain an LLM

Train an LLM from scratch on your own data via pretraining:


 

mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt

# 1) Download a tokenizer
litgpt download \
  --repo_id EleutherAI/pythia-160m \
  --tokenizer_only True

# 2) Pretrain the model
litgpt pretrain \
  --model_name pythia-160m \
  --tokenizer_dir checkpoints/EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model

# 3) Chat with the model
litgpt chat \
  --checkpoint_dir out/custom-model/final
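
If your corpus is not already a folder of plain-text files like custom_texts/ above, here is a minimal sketch for producing one that --data TextFiles can point at (the one-file-per-document layout is an assumption, not a documented requirement):

# Hedged sketch: write an in-memory corpus out as .txt files under
# custom_texts/ so the TextFiles data module can pick them up.
from pathlib import Path

documents = {
    "book1": "Full text of the first document ...",
    "book2": "Full text of the second document ...",
}

out_dir = Path("custom_texts")
out_dir.mkdir(parents=True, exist_ok=True)
for name, text in documents.items():
    (out_dir / f"{name}.txt").write_text(text, encoding="utf-8")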

 

Continue pretraining an LLM

Continued pretraining is another way of finetuning: it specializes an already pretrained model by training it further on custom data:


 

mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt

# 1) Download a pretrained model
litgpt download --repo_id EleutherAI/pythia-160m

# 2) Continue pretraining the model
litgpt pretrain \
  --model_name pythia-160m \
  --tokenizer_dir checkpoints/EleutherAI/pythia-160m \
  --initial_checkpoint_dir checkpoints/EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model

# 3) Chat with the model
litgpt chat \
  --checkpoint_dir out/custom-model/final

 

Deploy an LLM

Once you're ready to deploy a finetuned LLM, run this command:


 

# Locate the checkpoint of your finetuned or pretrained model and call the `serve` command:
litgpt serve --checkpoint_dir path/to/your/checkpoint/microsoft/phi-2

# Alternative: if you haven't finetuned, download any checkpoint to deploy it:
litgpt download --repo_id microsoft/phi-2
litgpt serve --checkpoint_dir checkpoints/microsoft/phi-2

Test the server in a separate terminal and integrate the model API into your AI product:

# Use the server (in a separate session)
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"prompt": "Fix typos in the following sentence: Exampel input"}
)
print(response.json()["output"])
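
For repeated calls from application code, it can help to wrap the endpoint in a small helper. A minimal sketch, assuming the default host, port, and /predict route shown above:

# Hedged sketch: thin client around the endpoint started by `litgpt serve`.
# Host, port, and the /predict route are taken from the snippet above and
# may differ if you configured the server differently.
import requests

def query_llm(prompt: str, url: str = "http://127.0.0.1:8000/predict", timeout: float = 60.0) -> str:
    response = requests.post(url, json={"prompt": prompt}, timeout=timeout)
    response.raise_for_status()  # surface HTTP errors instead of failing silently
    return response.json()["output"]

if __name__ == "__main__":
    print(query_llm("Fix typos in the following sentence: Exampel input"))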

 

 


State-of-the-art features

✅  State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, optional CPU offloading, and TPU and XLA support.

✅  Pretrain, finetune, and deploy

✅  Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.

✅  Lower memory requirements with quantization: 4-bit floats, 8-bit integers, and double quantization.

✅  Configuration files for great out-of-the-box performance.

✅  Parameter-efficient finetuning: LoRA, QLoRA, Adapter, and Adapter v2.

✅  Exporting to other popular model weight formats.

✅  Many popular datasets for pretraining and finetuning, and support for custom datasets.

✅  Readable and easy-to-modify code to experiment with the latest research ideas.

 


Training recipes

LitGPT comes with validated recipes (YAML configs) for training models under different conditions. We generated these recipes from the parameters we found to perform best in each setting.

Browse all training recipes here.

Example

litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml

What is a config

Configs let you customize all granular training parameters, for example:

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

...
Example: LoRA finetuning config

 

# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf

# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b

# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true

# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4

# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1

# The LoRA rank. (type: int, default: 8)
lora_r: 32

# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16

# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05

# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true

# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false

# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true

# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false

# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false

# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false

# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k

# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:

  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200

  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1

  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8

  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2

  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10

  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4

  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:

  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:

  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512

  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:

  #   (type: float, default: 0.0003)
  learning_rate: 0.0002

  #   (type: float, default: 0.02)
  weight_decay: 0.0

  #   (type: float, default: 0.9)
  beta1: 0.9

  #   (type: float, default: 0.95)
  beta2: 0.95

  #   (type: Optional[float], default: null)
  max_norm:

  #   (type: float, default: 6e-05)
  min_lr: 6.0e-05

# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:

  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100

  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100

  # Number of iterations (type: int, default: 100)
  max_iters: 100

# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv

# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337
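
The batch-size fields above interact: each optimizer step accumulates gradients over several micro-batches. A small sketch of the assumed relationship (the exact computation inside litgpt.args.TrainArgs may differ):

# Hedged sketch: how global_batch_size, micro_batch_size, and devices are
# assumed to combine into gradient-accumulation iterations.
devices = 1             # from the config above
global_batch_size = 8   # samples per optimizer step across all ranks
micro_batch_size = 2    # samples per forward/backward pass on one rank

assert global_batch_size % (devices * micro_batch_size) == 0
accumulation_iters = global_batch_size // (devices * micro_batch_size)
print(accumulation_iters)  # 4 micro-batches accumulated per optimizer step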

Override config params via CLI

Override any config parameter directly from the CLI:

litgpt finetune \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
  --lora_r 4
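
If you prefer editing a local copy of a recipe over stacking CLI overrides, here is a quick sketch using PyYAML (installing pyyaml and requests is assumed; the field names mirror the example config above):

# Hedged sketch: download a recipe, tweak a few fields, and save a local
# copy to pass to `litgpt finetune --config my_lora.yaml`.
import requests
import yaml

URL = (
    "https://raw.githubusercontent.com/Lightning-AI/litgpt/main/"
    "config_hub/finetune/llama-2-7b/lora.yaml"
)

config = yaml.safe_load(requests.get(URL, timeout=30).text)
config["lora_r"] = 4                      # same change as the CLI override above
config["train"]["micro_batch_size"] = 1   # nested fields can be edited too

with open("my_lora.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)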

 

Community

Get involved!

We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the GitHub Issue tracker.

We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.

 

Tip

Unsure about contributing? Check out our How to Contribute to LitGPT guide.

If you have general questions about building with LitGPT, please join our Discord.

 

Tutorials, how-to guides, and docs

Note

If you are getting started with LitGPT, we recommend the Zero to LitGPT: Getting Started with Pretraining, Finetuning, and Using LLMs tutorial.

Tutorials and in-depth feature documentation can be found below:

 

XLA

Lightning AI has partnered with Google to add first-class support for Cloud TPUs in Lightning's frameworks and LitGPT, helping democratize AI for millions of developers and researchers worldwide.

Using TPUs with Lightning is as straightforward as changing one line of code.

We provide scripts fully optimized for TPUs in the XLA directory.

 

Acknowledgements

This implementation builds on Lit-LLaMA and nanoGPT, and it is powered by Lightning Fabric.

 

Community showcase

Check out the projects below that use and build on LitGPT. If you have a project you'd like to add to this section, please don't hesitate to open a pull request.

 

🏆 NeurIPS 2023 Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day

The LitGPT repository was the official starter kit for the NeurIPS 2023 LLM Efficiency Challenge, a competition focused on finetuning an existing non-instruction-tuned LLM for 24 hours on a single GPU.

 

🦙 TinyLlama: An Open-Source Small Language Model

LitGPT powered the TinyLlama project and the TinyLlama: An Open-Source Small Language Model research paper.

 

🍪 MicroLlama: MicroLlama-300M

MicroLlama is a 300M Llama model pretrained on 50B tokens, powered by TinyLlama and LitGPT.

 

🔬 Pre-training Small Base LMs with Fewer Tokens

The research paper "Pre-training Small Base LMs with Fewer Tokens", which utilizes LitGPT, develops smaller base language models by inheriting a few transformer blocks from larger models and training on a tiny fraction of the data used by the larger models. It demonstrates that these smaller models can perform comparably to larger models despite using significantly less training data and resources.

 

Citation

If you use LitGPT in your research, please cite the following work:

@misc{litgpt-2023,
  author       = {Lightning AI},
  title        = {LitGPT},
  howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
  year         = {2023},
}

 

License

LitGPT is released under the Apache 2.0 license.