# DiscordLLM

DiscordLLM is a powerful Discord bot that brings the capabilities of large language models right to your server, running entirely on your local machine. No cloud services, no API costs – just pure, home-brewed local AI at your fingertips.
## Features

- Run state-of-the-art language models locally
- Interact with the AI using simple Discord slash commands
- Zero cloud dependency, complete privacy
- Customizable and expandable
- Supports multiple LLM models (currently showcasing Qwen2 1.5B)
## Prerequisites

- Python 3.8 or higher
- A Discord developer account
- Ollama installed on your local machine
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/boshyxd/DiscordLLM.git
   cd DiscordLLM
   ```

2. Install the required dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Set up your Discord bot:
   - Create a new application in the Discord Developer Portal
   - Add a bot to your application
   - Copy the bot token

4. Configure the bot:
   - Replace `YOUR_BOT_TOKEN` with your actual bot token
   - Replace

5. Install and run Ollama:

   ```bash
   ollama run qwen2:1.5b
   ```

6. Start the bot:

   ```bash
   python bot.py
   ```
## Usage

Once the bot is running and invited to your server, you can interact with it using slash commands:

- `/ask <your question>`: Ask the AI a question

Example:

```
/ask What is the capital of France?
```
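Under the hood, a command like `/ask` can forward the question to Ollama's local REST API, which listens on `http://localhost:11434` by default. The sketch below uses only the Python standard library to show one way this could work; the function names are illustrative, not taken from the repository's code.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-chat) generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a non-streaming Ollama generate request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "qwen2:1.5b") -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # A non-streaming /api/generate response carries the text in "response"
        return json.loads(resp.read())["response"]
```

A slash-command handler would then call `ask_ollama(question)` and send the returned text back to the Discord channel.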
## Using Different Models

Currently, DiscordLLM is configured to use the Qwen2 1.5B model, but it can be easily adapted to use any model supported by Ollama. To use a different model:

1. Download the model using Ollama:

   ```bash
   ollama pull model_name
   ```

2. Update the `default_model` value in `config.json`
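For reference, a `config.json` along these lines would support the `default_model` setting. The exact keys in the repository's file may differ, so treat this as an illustrative sketch:

```json
{
  "bot_token": "YOUR_BOT_TOKEN",
  "default_model": "qwen2:1.5b"
}
```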
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License

Distributed under the MIT License. See `LICENSE` for more information.
## Contact

Angus Bailey - [email protected]

Project Link: https://github.com/boshyxd/DiscordLLM