Murmur

A sophisticated multi-agent system that orchestrates different specialized AI agents using local LLM models to process and respond to user queries. The system implements a pipeline of Interpreter, Reasoner, Generator, and Critic agents to provide well-thought-out and refined responses.

🌟 Features

  • Multi-Agent Architecture: Four specialized agents working in concert:

    • Interpreter: Analyzes user intent and context
    • Reasoner: Develops logical approach to the problem
    • Generator: Creates initial responses
    • Critic: Reviews and refines generated content
  • Local LLM Integration: Works with locally hosted language models through a REST API (see the request sketch after this list)

  • Asynchronous Processing: Built with asyncio for efficient concurrent operations

  • Robust Error Handling: Comprehensive error management with retries and graceful fallbacks

  • Conversation Memory: Maintains context through conversation history

  • Confidence Scoring: Evaluates response quality with multiple metrics
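
As an illustration of the REST integration, the sketch below sends a single prompt to a locally hosted, Ollama-compatible endpoint with aiohttp. It is a minimal sketch: the /api/generate route, the payload shape, and the "response" field are assumptions based on the Ollama API, not code taken from this repository.

import asyncio
import aiohttp

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local server address

async def ask_local_llm(prompt: str, model: str = "gemma2:9b") -> str:
    # Send one prompt to a local Ollama-compatible server and return the reply text.
    payload = {"model": model, "prompt": prompt, "stream": False}
    timeout = aiohttp.ClientTimeout(total=120)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.post(OLLAMA_URL, json=payload) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return data.get("response", "")

if __name__ == "__main__":
    print(asyncio.run(ask_local_llm("What is the capital of France?")))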

πŸ”§ Prerequisites

  • Python 3.7+
  • Local LLM server (compatible with Ollama API)
  • Required Python packages:
    aiohttp (asyncio ships with the Python 3.7+ standard library and needs no separate install)

πŸš€ Installation

  1. Clone the repository:
git clone https://github.com/KazKozDev/murmur.git
cd murmur
  2. Install dependencies:
pip install -r requirements.txt
  3. Ensure your local LLM server is running (default: http://localhost:11434); a quick connectivity check is sketched below
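
A quick way to confirm the server is reachable before starting Murmur. This assumes an Ollama-style /api/tags endpoint that lists pulled models; adjust the URL if your server differs.

import asyncio
import aiohttp

async def check_server(base_url: str = "http://localhost:11434") -> None:
    # List the models the local server has pulled; raises if the server is unreachable.
    async with aiohttp.ClientSession() as session:
        async with session.get(f"{base_url}/api/tags") as resp:
            resp.raise_for_status()
            data = await resp.json()
            names = [m["name"] for m in data.get("models", [])]
            print("Server is up. Models:", ", ".join(names) or "(none pulled yet)")

asyncio.run(check_server())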

πŸ’» Usage

  1. Navigate to the project directory and run:
cd murmur
python src/main.py

Or navigate directly to the source directory:

cd murmur/src
python main.py
  2. Enter your queries when prompted. Type 'quit' to exit.

Example interaction:

Enter your message: What is the capital of France?

Response: The capital of France is Paris.
Confidence: 0.95

πŸ—οΈ Architecture

The system follows a pipeline architecture (a minimal code sketch follows the list):

  1. User Input β†’ Interpreter Agent

    • Analyzes core intent and context
    • Identifies implicit requirements
  2. Interpreted Message β†’ Reasoner Agent

    • Breaks down the problem
    • Develops logical approach
  3. Reasoning β†’ Generator Agent

    • Creates initial response
    • Structures content clearly
  4. Generated Content β†’ Critic Agent

    • Reviews for accuracy and completeness
    • Suggests improvements
    • Produces final version
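
A minimal sketch of this hand-off, assuming each agent exposes an async process method. The class and method names below are illustrative, not the repository's actual interfaces.

class MurmurPipeline:
    # Illustrative four-stage pipeline: each agent's output feeds the next agent.
    def __init__(self, interpreter, reasoner, generator, critic):
        self.interpreter = interpreter
        self.reasoner = reasoner
        self.generator = generator
        self.critic = critic

    async def run(self, user_message: str) -> str:
        interpreted = await self.interpreter.process(user_message)  # intent and context
        reasoning = await self.reasoner.process(interpreted)        # logical approach
        draft = await self.generator.process(reasoning)             # initial response
        return await self.critic.process(draft)                     # review and refine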

βš™οΈ Configuration

The system uses the following default models:

  • Interpreter: mistral-nemo:latest
  • Reasoner: llama3.2-vision:11b
  • Generator: gemma2:9b
  • Critic: llama3.2-vision:11b

Models can be configured by modifying the AgentOrchestrator initialization.
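
For example, the orchestrator might be created with a different model per role. The keyword arguments below are assumptions for illustration; check the AgentOrchestrator constructor in src/main.py for the actual parameter names.

orchestrator = AgentOrchestrator(
    interpreter_model="mistral-nemo:latest",
    reasoner_model="llama3.2-vision:11b",
    generator_model="gemma2:9b",
    critic_model="llama3.2-vision:11b",
    base_url="http://localhost:11434",  # local LLM server
)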

πŸ” Error Handling

The system implements multiple layers of error handling (illustrated in the sketch below):

  • Connection retries (max 3 attempts)
  • Timeout management
  • Graceful degradation
  • Comprehensive error logging
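
A minimal sketch of the retry behaviour described above, assuming an Ollama-style /api/generate endpoint. The helper is illustrative and not the project's actual error-handling code.

import asyncio
import logging
import aiohttp

async def generate_with_retries(session: aiohttp.ClientSession, payload: dict,
                                url: str = "http://localhost:11434/api/generate",
                                max_attempts: int = 3) -> dict:
    # POST to the local LLM server, retrying up to max_attempts with exponential backoff.
    for attempt in range(1, max_attempts + 1):
        try:
            async with session.post(url, json=payload,
                                    timeout=aiohttp.ClientTimeout(total=60)) as resp:
                resp.raise_for_status()
                return await resp.json()
        except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
            logging.warning("Attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise  # let the caller fall back gracefully
            await asyncio.sleep(2 ** attempt)  # back off before the next attempt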

🀝 Contributing

  1. Fork the repository (https://github.com/KazKozDev/murmur/fork)
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸͺ² Known Issues

  • High CPU usage with multiple concurrent requests
  • Memory consumption may increase with long conversations
  • Some LLM models may require significant local resources

πŸ”œ Future Improvements

  • Add support for streaming responses
  • Implement agent personality customization
  • Add websocket support for real-time communication
  • Enhance conversation memory management
  • Add support for more LLM providers
  • Implement response caching

πŸ“ž Support

For support, please open an issue in the GitHub repository or contact the maintainers.

πŸ™ Acknowledgments

  • Thanks to the Ollama team for their local LLM server
  • Inspired by multi-agent architectures in AI systems
