Given a user query, the app conducts a web search, downloads the top N resulting web pages, then analyzes those pages with an LLM.
The LLM can be any smaller, consumer-grade model with a context window of at least 5k tokens (assuming each web page is ~1k tokens).
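For orientation, here is a minimal sketch of that pipeline, assuming the Google Custom Search JSON API for the search step and Groq's OpenAI-compatible chat completions endpoint for the analysis step. The function names, the 4-page budget, the HTML-to-text handling, and the model name are illustrative assumptions, not the app's actual code:

```python
# Minimal sketch of the query -> search -> fetch -> LLM pipeline (illustrative only).
import requests
from bs4 import BeautifulSoup  # assumption: used here just to strip HTML to text

GOOGLE_SEARCH_API_KEY = "..."    # from config.json
GOOGLE_SEARCH_ENGINE_ID = "..."  # from config.json
GROQ_API_KEY = "..."             # from config.json

def search(query: str, n: int = 4) -> list[str]:
    """Return URLs of the top-n results from the Google Custom Search JSON API."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": GOOGLE_SEARCH_API_KEY, "cx": GOOGLE_SEARCH_ENGINE_ID,
                "q": query, "num": n},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]

def fetch_text(url: str, max_chars: int = 4000) -> str:
    """Download a page and strip it to plain text (~1k tokens is roughly 4k chars)."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    text = BeautifulSoup(resp.text, "html.parser").get_text(" ", strip=True)
    return text[:max_chars]

def analyze(query: str, pages: list[str]) -> str:
    """Ask the LLM to answer the query using the downloaded pages as context."""
    context = "\n\n".join(pages)
    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {GROQ_API_KEY}"},
        json={
            "model": "llama-3.1-8b-instant",  # any small model with a >=5k context works
            "messages": [
                {"role": "system", "content": "Answer using only the provided pages."},
                {"role": "user", "content": f"{query}\n\nPages:\n{context}"},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    q = "example query"
    urls = search(q)
    print(analyze(q, [fetch_text(u) for u in urls]))
```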
- `cd backend` and create your config from the sample:

  ```sh
  cd backend
  cp config.json.sample config.json
  ```
- In `config.json`, fill in the `GOOGLE_SEARCH_API_KEY` and `GOOGLE_SEARCH_ENGINE_ID` credentials from the Google Custom Search API (a sample filled-in `config.json` is shown after these backend steps).
- Fill in the `GROQ_API_KEY` credential from Groq.
- Set up the virtual environment, install the packages, and deploy the server:

  ```sh
  virtualenv venv
  . venv/bin/activate
  pip install -r requirements.txt
  python app.py
  ```
This is fine for dev testing. In production, you will probably also want to use gunicorn and nginx in conjunction with your Python server (utility scripts linked).
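For the production setup, a typical gunicorn invocation (assuming the WSGI application object in `app.py` is named `app`) would be `gunicorn -w 4 -b 127.0.0.1:5000 app:app`, with nginx proxying to that port.

For reference, the filled-in `config.json` presumably ends up as a flat JSON object with the three keys named above; the exact layout may differ from `config.json.sample`, and the values below are placeholders:

```json
{
  "GOOGLE_SEARCH_API_KEY": "your-google-api-key",
  "GOOGLE_SEARCH_ENGINE_ID": "your-search-engine-id",
  "GROQ_API_KEY": "your-groq-api-key"
}
```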
- `cd frontend`
- Update `API_URL` in `constants.js` to point to your server (see the example at the end of this section).
- Install dependencies and build:

  ```sh
  npm install
  npm run build
  ```
- In dev testing, to start the server:

  ```sh
  npm run start
  ```

- In production, to build and serve the app:

  ```sh
  npm i -g npm@latest
  rm -rf node_modules
  rm -rf package-lock.json
  npm cache clean --force
  npm i --no-optional --omit=optional
  npm run build
  npm install -g serve
  serve -s build
  ```
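As an illustration of the `API_URL` step above, `constants.js` might look something like the following; the export style and the URL are assumptions, so match whatever the file already contains:

```js
// constants.js (illustrative): point the frontend at your backend server.
export const API_URL = "http://localhost:5000"; // replace with your server's address
```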