# A very CPU-friendly RAG implementation

A simple RAG tool for asking questions about a PDF file (in this project, the text of Don Quixote).

This project demonstrates how a RAG pipeline can be built with a CPU-friendly vector database and models. It uses (a short sketch of how the pieces fit together follows this list):

- FAISS as the vector database
- WordLlama as the text embedding model
- local SmolLMs were the intended language model, but since I haven't set them up on my Windows 10 machine, the OpenAI API is used instead (running SmolLM locally is entirely possible!)
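As a rough illustration of how the components above fit together (the sentences, variable names, and parameters below are made up for illustration, not taken from the repository's scripts), embedding with WordLlama and searching with FAISS looks roughly like this:

```python
import faiss
import numpy as np
from wordllama import WordLlama

# Load the default WordLlama embedding model (runs on CPU).
wl = WordLlama.load()

# Toy corpus standing in for chunks of the Don Quixote PDF.
chunks = [
    "In a village of La Mancha, there lived a gentleman.",
    "He decided to become a knight-errant and roam the world.",
    "Sancho Panza agreed to serve as his squire.",
]

# Embed the chunks; FAISS expects float32 vectors.
embeddings = np.asarray(wl.embed(chunks), dtype="float32")

# Build a flat L2 index over the embeddings.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

# Embed a question and retrieve the 2 closest chunks.
query = np.asarray(wl.embed(["Who is Don Quixote's squire?"]), dtype="float32")
distances, ids = index.search(query, 2)
print([chunks[i] for i in ids[0]])
```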

## Setup

Python 3.10

Install the required packages; see requirements.txt for details:

```
pip install -r requirements.txt
```

The OpenAI API key is stored in a .env file:

```
OPENAI_API_KEY=<key_here>
```
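One common way to make that key available to the scripts (an assumption here, not necessarily how this repository's code does it) is to load the .env file with python-dotenv before creating the OpenAI client:

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

# Read OPENAI_API_KEY from the .env file into the process environment.
load_dotenv()

# The client picks up OPENAI_API_KEY automatically; passing it
# explicitly just makes the dependency visible.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```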

## Run

It is strongly advised to read each .py file before running any command; doing so helps you understand the project better.

From the project root, set up the vector database:

```
python setup_db.py
```
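The actual logic lives in setup_db.py itself; purely as a hypothetical sketch of what such a script might do (the PDF filename, chunking scheme, and output paths below are assumptions):

```python
import json

import faiss
import numpy as np
from pypdf import PdfReader
from wordllama import WordLlama

# Extract raw text from the PDF (filename is hypothetical).
reader = PdfReader("don_quixote.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Naive fixed-size chunking; the real script may chunk differently.
chunk_size = 500
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# Embed the chunks on CPU with WordLlama.
wl = WordLlama.load()
embeddings = np.asarray(wl.embed(chunks), dtype="float32")

# Build a flat FAISS index and persist it, along with the chunk texts
# so the query script can map FAISS ids back to text.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)
faiss.write_index(index, "don_quixote.index")
with open("chunks.json", "w", encoding="utf-8") as f:
    json.dump(chunks, f)
```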

From the project root, ask a question and have the language model answer it:

```
python rag.py
```
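Again only as a hedged sketch (the model name, file paths, and prompt format are assumptions, not necessarily what rag.py does), the query side typically embeds the question, retrieves the nearest chunks from FAISS, and passes them to the OpenAI API:

```python
import json

import faiss
import numpy as np
from openai import OpenAI
from wordllama import WordLlama

# Load the embedding model, the persisted index, and the chunk texts.
wl = WordLlama.load()
index = faiss.read_index("don_quixote.index")
with open("chunks.json", encoding="utf-8") as f:
    chunks = json.load(f)

question = "Why does Don Quixote attack the windmills?"

# Retrieve the 3 chunks closest to the question embedding.
query = np.asarray(wl.embed([question]), dtype="float32")
_, ids = index.search(query, 3)
context = "\n\n".join(chunks[i] for i in ids[0])

# Ask the language model to answer using only the retrieved context.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```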

## Great researcher/developer-friendly RAG frameworks