# Run LLM models in Colab using TextGen-webui

This repository contains Colab notebooks that let you run Large Language Models (LLMs) with just one click.

## Available notebooks

- Run GGUF LLM models in TextGen-webui: Open In Colab
- Run GPTQ and EXL2 LLM models in TextGen-webui: Open In Colab

## Quantized model sources

Check these 🤗 Hugging Face repos:

## Good models to try

You can try these:

## Some tips

On the free Colab GPU (a T4 with 15 GB VRAM) you can use:

- a 22B model quantized up to Q3_K_M (context up to 8K)
- a 12B model quantized up to Q5_K_M (context up to 16K)
- an 8B/7B model quantized up to Q8_0 (context up to 16K if the model supports it)
- a 7B/8B model EXL2-quantized at 6 bpw (context up to 16K if the model supports it)
- a 12B model EXL2-quantized at 4 bpw

Most older models default to an 8K context length; if you want to use a longer context, make sure the model supports it.

If you want to run models larger than 20B (such as 20B or 4x7B models) on Colab, you may need to reduce the number of layers offloaded to the GPU so that usage is split between GPU VRAM and system RAM. It is slower, but it works 😉 (see the sketch below).
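As a minimal sketch (assuming the llama.cpp loader flags of text-generation-webui and a hypothetical model filename), a Colab cell like this keeps only part of the model on the GPU; lowering `--n-gpu-layers` moves more layers into system RAM:

```python
# Colab cell, run from the text-generation-webui folder.
# The GGUF filename is a placeholder; --n-gpu-layers controls how many
# layers live in GPU VRAM, and the remaining layers stay in system RAM.
!python server.py --model my-20b-model.Q3_K_M.gguf --loader llama.cpp \
    --n-gpu-layers 30 --n_ctx 8192 --share
```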

If you don't have a quantized version, you can use full-precision 7B models with the GPTQ notebook, but make sure to pass the --load-in-4bit or --load-in-8bit flag. This is slower than a quantized version but works well, so prefer a quantized version when one is available (a sketch follows).
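As a hedged example (the repo id below is a placeholder), a Colab cell could download a full-precision model and launch it through the transformers loader with on-the-fly 4-bit quantization:

```python
# Colab cell: fetch an unquantized 7B model, then launch the webui loading
# it in 4-bit so it fits in the T4's 15 GB of VRAM. download-model.py
# stores the weights under models/ with "/" replaced by "_".
!python download-model.py some-org/some-7b-model
!python server.py --model some-org_some-7b-model --loader transformers \
    --load-in-4bit --share
```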

In the case of EXL2 you can use --cache_4bit to save some VRAM.

If you want creative answers, increase the temperature (0.9 ~ 1.25) and decrease min_p (0.05 ~ 0.1); if you want strict and accurate answers, decrease the temperature (0.3 ~ 0.5) and increase min_p (0.15 ~ 0.25). The sketch below shows the same knobs set programmatically.
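You can also set these samplers per request if you start server.py with the --api flag, which exposes an OpenAI-compatible endpoint (default port 5000; min_p is an extra sampling parameter the webui's API accepts, assuming a model is already loaded):

```python
import requests

# "Creative" preset from the tip above: high temperature, low min_p.
payload = {
    "prompt": "Write a short story about a lighthouse keeper.",
    "max_tokens": 200,
    "temperature": 1.1,  # 0.9 ~ 1.25 for creative answers
    "min_p": 0.05,       # 0.05 ~ 0.1 for creative answers
}

# Default local address when server.py is started with --api.
resp = requests.post("http://127.0.0.1:5000/v1/completions", json=payload)
print(resp.json()["choices"][0]["text"])
```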

## Getting Started

To get started with the LLM Model Runner, follow these steps:

1. Open the notebook in Google Colab by clicking the "Open in Colab" button at the top of the notebook. Open In Colab
2. Choose the model that you want from the list.
3. Choose the quantization type.
4. Run the cell, visit the generated link (https://***.gradio.live), and start your conversation with your favorite model!

## Requirements

- No requirements; just open the notebook in Colab with a GPU runtime.

All the necessary dependencies will be automatically installed when you run the Colab notebook.

Thanks <3