
A RAG Chatbot with Next.js, Together.ai and Llama Index


Open source AI RAG Chatbot

This is a LlamaIndex and Together.ai RAG chatbot built with Next.js and bootstrapped with create-llama.

It's powered by LlamaIndex, Mixtral (via Together AI Inference), and Together Embeddings. It embeds the PDF file in the data folder, stores the generated embeddings locally, and gives you a RAG chatbot that answers questions using that content.
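
At a high level, the flow resembles the LlamaIndex.TS sketch below. This is only an illustration: the class names (TogetherLLM, TogetherEmbedding, SimpleDirectoryReader, VectorStoreIndex), option shapes, and model id are assumptions based on the LlamaIndex.TS documentation and may not match the code that create-llama generated for this repository.

```typescript
import {
  Settings,
  SimpleDirectoryReader,
  TogetherEmbedding,
  TogetherLLM,
  VectorStoreIndex,
} from "llamaindex";

async function main() {
  // Use Mixtral via Together AI Inference for generation and Together Embeddings
  // for the vectors (constructor options and model id are assumptions).
  Settings.llm = new TogetherLLM({
    apiKey: process.env.TOGETHER_API_KEY,
    model: "mistralai/Mixtral-8x7B-Instruct-v0.1",
  });
  Settings.embedModel = new TogetherEmbedding({
    apiKey: process.env.TOGETHER_API_KEY,
  });

  // Load the PDF(s) from ./data and index their chunks. The real app persists
  // the index to the cache folder; this sketch just builds it in memory.
  const documents = await new SimpleDirectoryReader().loadData({
    directoryPath: "./data",
  });
  const index = await VectorStoreIndex.fromDocuments(documents);

  // Retrieve the most relevant chunks and ground the answer in them.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What is the PDF in the data folder about?",
  });
  console.log(response.toString());
}

main().catch(console.error);
```

In the actual app, the npm run generate step described below persists the embeddings to the cache folder so they don't have to be rebuilt every time the chatbot starts.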

Getting Started

Copy the .example.env file to .env and replace TOGETHER_API_KEY with your API key from together.ai.
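
For example, from the project root (assuming a POSIX shell):

```bash
cp .example.env .env
```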

  1. Install the dependencies:
npm install
  2. Generate the embeddings and store them locally in the cache folder. You can also provide your own PDF in the data folder instead of the default one:
npm run generate
  3. Run the app and send messages to your chatbot; it will use context from the embeddings to answer your questions (a sample request is shown after this list):
npm run dev
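
Once the dev server is up, you can also hit the chat endpoint directly. The route (/api/chat) and the { messages: [...] } payload below follow the usual create-llama Next.js template and are assumptions, not confirmed against this repository; adjust them to match your app.

```typescript
// Minimal smoke test against the local dev server.
// Save as e.g. chat-test.ts (hypothetical name) and run with: npx tsx chat-test.ts
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "What does the PDF in the data folder cover?" }],
  }),
});
console.log(await res.text());
```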

Common Issues

  • Ensure your environment file is called .env
  • Specify a dummy OPENAI_API_KEY value in this .env so the app runs (a temporary hack; LlamaIndex is patching this). See the example below.
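
With both points applied, the .env would look something like this (both values are placeholders; the OpenAI one is the dummy value mentioned above, not a real key):

```
TOGETHER_API_KEY=your_together_api_key
OPENAI_API_KEY=dummy
```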

Learn More

To learn more about LlamaIndex and Together AI, take a look at the LlamaIndex and Together AI documentation.
