diff --git a/AgenticAGI_Python/AgenticAGI.egg-info/PKG-INFO b/AgenticAGI_Python/AgenticAGI.egg-info/PKG-INFO
deleted file mode 100644
index 7d67c48..0000000
--- a/AgenticAGI_Python/AgenticAGI.egg-info/PKG-INFO
+++ /dev/null
@@ -1,199 +0,0 @@
-Metadata-Version: 2.1
-Name: AgenticAGI
-Version: 0.1.3
-Summary: Python wrapper for AGI executable
-Home-page: https://github.com/simulanics/AgenticAGI
-Author: Simulanics Technologies
-Author-email: mcombatti@simulanics.org
-Classifier: Programming Language :: Python :: 3
-Classifier: Operating System :: OS Independent
-Requires-Python: >=3.6
-Description-Content-Type: text/markdown
-License-File: LICENSE
-
-# AgenticAGI - by [Simulanics Technologies](https://www.simulanics.com)
-
-AgenticAGI is Strawberry Logic for ALL LLMs - the AGI system allows all LLMs to operate like OpenAI's o1-preview/o1-mini models - only more useful.
-
-**Welcome to AgenticAGI – The Future of Autonomous AI Reasoning** 🤖🚀
-
-AgenticAGI is a cutting-edge AI system that turns any large language model (LLM) into a powerful, self-correcting, deeply reasoning agent. By implementing a transparent and iterative thought process, AgenticAGI delivers more accurate, flexible, and efficient interactions, providing continuous improvement with every task. Whether you're solving complex problems, running code, or generating data visualizations, AgenticAGI is your comprehensive problem-solving partner.
-
-![alt text](https://www.simulanics.com/wp-content/uploads/2024/09/Screenshot-2024-09-15-164938.png)
-
-## Features
-
-- ✅ **Deep Chain-of-Thought Reasoning** - Unlike traditional AI systems that generate quick responses based on surface-level understanding, AgenticAGI dives deeper to ensure that every answer is logical and highly accurate.
-
-- ✅ **Self-Correction & Confidence Scoring** - AgenticAGI doesn't just provide answers; it self-corrects and attaches confidence scores, ensuring reliable outcomes for any task.
-
-- ✅ **Transparency & Full Auditability** - Gain full control and insight into AgenticAGI's reasoning process. Every decision can be reviewed, audited, and trusted.
-
-- ✅ **Multi-Model Compatibility** - AgenticAGI works with various LLM providers, including OpenAI, Ollama, Groq, and more, offering flexibility without being tied to a single ecosystem.
-
-- ✅ **Reinforcement Learning** - Continuous learning ensures that the system becomes more efficient and accurate over time, adapting to user needs.
-
-- ✅ **Real-Time Actions** - AgenticAGI is capable of running web searches 🌐, executing Python scripts 🐍, interacting with hardware 🛠️, and more, offering solutions beyond simple responses.
-
-![alt text](https://www.simulanics.com/wp-content/uploads/2024/09/xx21.png)
-
-![alt text](https://www.simulanics.com/wp-content/uploads/2024/09/wifi-learning.png)
-
-## Installation
-
-### 1. Download 💾
-
-Choose your preferred OS from the releases page:
-
-- [macOS 64 bit](#)
-- [macOS ARM 64 bit](#)
-- [Windows 32 bit](#)
-- [Windows 64 bit](#)
-- [Linux 32 bit](#)
-- [Linux 64 bit](#)
-- [Raspberry Pi ARM 32/64 bit](#)
-
-### 2. Extract 📂
-
-Once downloaded, extract the contents to a directory of your choice. **AgenticAGI** can even run directly from a USB drive for ultimate portability! 🖥️💡
-
-### 3. Run ▶️
-
-Navigate to the directory and execute the appropriate file for your platform (including required command-line flags):
-
-- For Windows: `agi.exe`
-- For Linux/macOS: `./agi`
-
-You're ready to unlock the full potential of AGI reasoning! 🔓🤖
-
-**macOS users will need to run the included `codesign.sh` script in order to sign and run the AGI locally.**
-
-## Usage
-
-AgenticAGI is a command-line tool that can be customized to fit your needs.
-Here's a basic example of how to get started:
-
-### Interactive Mode 💬
-
-Launch AgenticAGI in interactive mode, where it awaits your commands:
-
-```bash
-agi --interactive true --cooldown 3 --apikey YOUR_API_KEY --model llama-3.1-70b-versatile
-```
-
-### Fully Autonomous Mode 🤖⚙️
-
-Run a task autonomously, such as retrieving a WiFi password, with no human in the middle:
-
-```bash
-agi --cooldown 3 --apikey YOUR_API_KEY --model llama-3.1-70b-versatile --task "Retrieve WiFi password for 'TheNet'"
-```
-
-### Available Flags 🏁
-
-- `--apiendpoint` : Completions URL endpoint (DEFAULT = `https://api.groq.com/openai/v1/chat/completions`).
-- `--apikey` : LLM API key.
-- `--confidence` : (Confidence/Truthfulness/Satisfaction/Validity) score (0-100%) of the final answer, provided in JSON format (DEFAULT = True).
-- `--contextlimit` : Maximum number of memories used to make decisions (DEFAULT = 50).
-- `--cooldown` : Duration in seconds between LLM requests (DEFAULT = 10).
-- `--fao` : Final Answer Only. If set to true, see the `--schema` flag for structured data output.
-- `--hitm` : Human-in-the-middle; allows the AGI to ask the user for information during task completion (DEFAULT = True).
-- `--inputprice` : Set the INPUT price of tokens per million. See the `--showprice` flag.
-- `--interactive` : Start AgenticAGI in interactive mode. Each task run is a new session (DEFAULT = False).
-- `--maxcorrections` : Sets the self-correction limit (DEFAULT = 3).
-- `--model` : LLM model to use.
-- `--newlines` : Outputs newlines as `\n` (DEFAULT = False).
-- `--outputprice` : Set the OUTPUT price of tokens per million. See the `--showprice` flag.
-- `--pytimeout` : Python script timeout period (in seconds) before Python is terminated and control is returned to the AGI (DEFAULT = 60).
-- `--schema` : A defined custom tag (`{query}`) or JSON structure (`{ "ans": {{final_response}} }`) for the final answer/response. See the `--fao` flag.
-- `--selfcorrect` : Allow the model to self-correct when confidence/truthfulness < 92%, satisfaction < 85%, or inaccurate/invalid perception > 3% (DEFAULT = False). When enabled, confidence is auto-enabled. See the `--maxcorrections` flag.
-- `--showprice` : Show token usage and input/output pricing per task completion. See the `--inputprice` and `--outputprice` flags (DEFAULT = False).
-- `--steplimit` : The maximum number of steps the AGI can perform before answering or giving up (DEFAULT = 25).
-- `--task` : The task to be solved or completed.
-- `--urlencode` : Encodes output using URL encoding (DEFAULT = False).
-
-**Note**: Groq is the default LLM service if an `--apiendpoint` is not specified.
-
-For a full list of commands, run:
-
-```bash
-agi --help
-```
-
-## Examples 📖
-
-### Example 1: Solving Advanced Problems 🐍
-
-```bash
-agi --task "What is the integral of x^2?" --model llama-3.1-70b-versatile --apikey YOUR_API_KEY
-```
-
-AgenticAGI will not only give you the answer but also walk through the steps to reach that conclusion using Python! ✨📝
-
-### Example 2: Automating a Web Search and Content Creation 🌐
-
-```bash
-agi --task "Search the web for the latest AI trends in 2024 and write a report" --model llama-3.1-70b-versatile --apikey YOUR_API_KEY
-```
-
-AgenticAGI can browse the web, synthesize information, and deliver actionable insights. 📊
-
-### Example 3: Hardware Interaction 🛠️
-
-```bash
-agi --task "Install a new Python package and run a script" --model llama-3.1-70b-versatile --apikey YOUR_API_KEY
-```
-
-AgenticAGI can autonomously install tools, manage system resources, and run code. ⚙️💻
-
-## Why AgenticAGI?
-
-AgenticAGI stands out because it doesn't just provide answers; it **reasons** through problems.
-Its adaptive learning algorithms allow it to get better with every use, making it the perfect tool for developers, data analysts, and businesses looking to leverage the next level of AI technology. 🌟
-
-### Key Benefits 🔑
-
-- **Increased Accuracy**: By thinking through problems, AgenticAGI reduces errors and provides more reliable outcomes.
-- **Reduced Token Usage**: Efficiency in solving multi-step problems reduces the number of tokens used, cutting down costs 💰.
-- **Flexible Deployment**: Use AgenticAGI with any LLM provider, whether proprietary or open-source.
-- **Future-Proof**: Decentralized learning ensures that AgenticAGI stays on the cutting edge of AI technology without losing past abilities.
-
-## Future Updates 🚀
-
-We are continually improving AgenticAGI, with upcoming features such as:
-
-- **Cloud-Learning Sync**: Share learned abilities with other AGI systems to enhance collective intelligence.
-- **Deep Memory**: Enable the AGI to retain and apply knowledge across long-term tasks, improving efficiency and learning rates.
-
-## Testimonials 🗣️
-
-> "AgenticAGI has been a game-changer for our data analysis team. It not only provides accurate results but shows us the reasoning behind each step."
-> *– Samantha T., Data Scientist*
-
-> "Its ability to reason like a human sets AgenticAGI apart. I've used it to write and debug code, and it performs with precision and accuracy."
-> *– Carlos M., Senior Developer*
-
-> "The transparency and self-correcting features have saved us countless hours. AgenticAGI doesn't just give answers; it verifies them, ensuring we get the best results."
-> *– Michael D., Data Analyst*
-
-## Contributing 🤝
-
-We welcome contributions! Feel free to submit issues or pull requests as we continue to grow this project. Check out our [contribution guidelines](CONTRIBUTING.md) to get started.
-
-## License 📜
-
-This project is licensed under the MIT License - see the [LICENSE](LICENSE.md) file for details.
-
-## Get in Touch 📬
-
-For any questions, feedback, or business inquiries, contact
-us via [Simulanics Technologies](https://www.simulanics.com/contact).
-
----
-
-**Unlock the potential of AgenticAGI** – your evolving, reasoning AI system that adapts to your needs. Start today and experience the future of AI! 🌐🤖
diff --git a/AgenticAGI_Python/AgenticAGI.egg-info/SOURCES.txt b/AgenticAGI_Python/AgenticAGI.egg-info/SOURCES.txt
deleted file mode 100644
index b4f59d6..0000000
--- a/AgenticAGI_Python/AgenticAGI.egg-info/SOURCES.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-LICENSE
-MANIFEST.in
-README.md
-setup.py
-AgenticAGI.egg-info/PKG-INFO
-AgenticAGI.egg-info/SOURCES.txt
-AgenticAGI.egg-info/dependency_links.txt
-AgenticAGI.egg-info/top_level.txt
-agenticagi/__init__.py
-agenticagi/agi_wrapper.py
\ No newline at end of file
diff --git a/AgenticAGI_Python/AgenticAGI.egg-info/dependency_links.txt b/AgenticAGI_Python/AgenticAGI.egg-info/dependency_links.txt
deleted file mode 100644
index 8b13789..0000000
--- a/AgenticAGI_Python/AgenticAGI.egg-info/dependency_links.txt
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/AgenticAGI_Python/AgenticAGI.egg-info/top_level.txt b/AgenticAGI_Python/AgenticAGI.egg-info/top_level.txt
deleted file mode 100644
index f8dafa5..0000000
--- a/AgenticAGI_Python/AgenticAGI.egg-info/top_level.txt
+++ /dev/null
@@ -1 +0,0 @@
-agenticagi
diff --git a/AgenticAGI_Python/build/lib/agenticagi/__init__.py b/AgenticAGI_Python/build/lib/agenticagi/__init__.py
deleted file mode 100644
index e69de29..0000000
diff --git a/AgenticAGI_Python/build/lib/agenticagi/agi_wrapper.py b/AgenticAGI_Python/build/lib/agenticagi/agi_wrapper.py
deleted file mode 100644
index ef267f1..0000000
--- a/AgenticAGI_Python/build/lib/agenticagi/agi_wrapper.py
+++ /dev/null
@@ -1,206 +0,0 @@
-import os
-import platform
-import subprocess
-import threading
-import json
-import asyncio  # asyncio for running async callbacks
-import inspect
-import urllib.parse  # for URL-decoding process output
-
-class AGIWrapper:
-    def __init__(self, api_key, task, exe_path, apiendpoint="https://api.groq.com/openai/v1/chat/completions",
-                 confidence=None, contextlimit=50, cooldown=10, fao=False, hitm=True, inputprice=None,
-                 interactive=False, maxcorrections=3, model=None, outputprice=None, pytimeout=60, schema=None,
-                 selfcorrect=False, showprice=False, steplimit=25, colormode=False):
-        """
-        Initialize the AGIWrapper class.
-
-        :param api_key: API key for authentication.
-        :param task: Task for the AGI to execute.
-        :param exe_path: Path to the AGI executable file.
-        :param apiendpoint: API endpoint URL.
-        :param confidence: Optional confidence parameter.
-        :param contextlimit: Limit for the context size.
-        :param cooldown: Cooldown time (seconds) between LLM requests.
-        :param fao: Flag to enable or disable Final-Answer-Only mode.
-        :param hitm: Flag to enable or disable human-in-the-middle mode.
-        :param inputprice: Optional price of input tokens per million.
-        :param interactive: Flag to enable or disable interactive mode.
-        :param maxcorrections: Maximum number of self-corrections allowed.
-        :param model: Model identifier.
-        :param outputprice: Optional price of output tokens per million.
-        :param pytimeout: Timeout (seconds) for spawned Python scripts.
-        :param schema: Optional schema for the final answer.
-        :param selfcorrect: Flag to enable or disable self-correction mode.
-        :param showprice: Flag to show or hide token pricing.
-        :param steplimit: Step limit for execution.
-        :param colormode: Flag to enable or disable colored output.
- """ - self.api_key = api_key - self.task = task - self.apiendpoint = apiendpoint - self.confidence = confidence - self.contextlimit = contextlimit - self.cooldown = cooldown - self.fao = fao - self.hitm = hitm - self.inputprice = inputprice - self.interactive = interactive - self.maxcorrections = maxcorrections - self.model = model - self.outputprice = outputprice - self.pytimeout = pytimeout - self.schema = schema - self.selfcorrect = selfcorrect - self.showprice = showprice - self.steplimit = steplimit - self.colormode = colormode - - # Ensure the executable path is provided - if not exe_path: - raise ValueError("AGI executable path must be specified.") - self.exe_path = exe_path - - # Define callbacks - self.on_thought = None - self.on_action = None - self.on_observation = None - self.on_final_answer = None - self.on_ctsi_score = None # Callback for CTSI score - - def set_callbacks(self, on_thought=None, on_action=None, on_observation=None, on_final_answer=None, on_ctsi_score=None): - """Set callbacks for various events.""" - self.on_thought = on_thought - self.on_action = on_action - self.on_observation = on_observation - self.on_final_answer = on_final_answer - self.on_ctsi_score = on_ctsi_score # Set the CTSI score callback - - async def _handle_output(self, line): - """ - This method processes the output from the AGI and calls the appropriate callbacks. 
- """ - if "Thought:" in line and self.on_thought: - thought_data = line.split("Thought:", 1)[1].strip() - if inspect.iscoroutinefunction(self.on_thought): - await self.on_thought(thought_data) - else: - self.on_thought(thought_data) # Non-async callback - - elif "Action:" in line and self.on_action: - action_data = line.split("Action:", 1)[1].strip() - if inspect.iscoroutinefunction(self.on_action): - await self.on_action(action_data) - else: - self.on_action(action_data) # Non-async callback - - elif "Observation:" in line and self.on_observation: - observation_data = line.split("Observation:", 1)[1].strip() - if inspect.iscoroutinefunction(self.on_observation): - await self.on_observation(observation_data) - else: - self.on_observation(observation_data) # Non-async callback - - elif "FINAL ANSWER:" in line and self.on_final_answer: - answer_data = line.split("FINAL ANSWER:", 1)[1].strip() - - if "CTSI Score:" in answer_data: - final_answer_data = answer_data.split("CTSI Score:", 1)[0].strip() - CTSILine = line.split("CTSI Score:", 1)[1].strip() - - if inspect.iscoroutinefunction(self.on_final_answer): - await self.on_final_answer(final_answer_data) - else: - self.on_final_answer(final_answer_data) # Non-async callback - - - try: - ctsi_scores = json.loads(CTSILine) - if inspect.iscoroutinefunction(self.on_ctsi_score): - await self.on_ctsi_score(ctsi_scores) - else: - self.on_ctsi_score(ctsi_scores) # Non-async callback - except json.JSONDecodeError: - pass # Ignore lines that are not valid JSON - else: - - if inspect.iscoroutinefunction(self.on_final_answer): - await self.on_final_answer(answer_data) - else: - self.on_final_answer(answer_data) # Non-async callback - - - elif line.startswith('{') and self.on_ctsi_score: - try: - ctsi_scores = json.loads(line) - if inspect.iscoroutinefunction(self.on_ctsi_score): - await self.on_ctsi_score(ctsi_scores) - else: - self.on_ctsi_score(ctsi_scores) # Non-async callback - except json.JSONDecodeError: - pass # Ignore 
lines that are not valid JSON - - def _read_process_output(self, process): - """Reads the output from the process and handles it via callbacks.""" - async def read_output(): - output_buffer = "" - for line in iter(process.stdout.readline, ''): - # Decode the URL-encoded line - decoded_line = urllib.parse.unquote(line) - output_buffer += decoded_line # Accumulate decoded lines - - # Check if there's a complete thought/action/observation in the buffer - if "Thought:" in output_buffer: - await self._handle_output(output_buffer) - output_buffer = "" # Clear the buffer after handling - elif "Action:" in output_buffer: - await self._handle_output(output_buffer) - output_buffer = "" - elif "Observation:" in output_buffer: - await self._handle_output(output_buffer) - output_buffer = "" - elif "Final Answer:" in output_buffer: - await self._handle_output(output_buffer) - output_buffer = "" - - # Process any remaining content in the buffer after the loop - if output_buffer: - await self._handle_output(output_buffer) - - asyncio.run(read_output()) - - def execute(self): - """Run the executable with the provided arguments.""" - cmd = [ - self.exe_path, "--apikey", self.api_key, "--task", self.task, "--apiendpoint", self.apiendpoint, - "--contextlimit", str(self.contextlimit), "--cooldown", str(self.cooldown), "--fao", str(self.fao), - "--hitm", str(self.hitm), "--interactive", str(self.interactive), "--maxcorrections", str(self.maxcorrections), - "--pytimeout", str(self.pytimeout), "--selfcorrect", str(self.selfcorrect), "--showprice", str(self.showprice), - "--steplimit", str(self.steplimit), "--urlencode", "True", "--colormode", str(self.colormode) - ] - - # Optional arguments - if self.confidence is not None: - cmd.extend(["--confidence", str(self.confidence)]) - if self.inputprice is not None: - cmd.extend(["--inputprice", str(self.inputprice)]) - if self.outputprice is not None: - cmd.extend(["--outputprice", str(self.outputprice)]) - if self.model is not None: - 
cmd.extend(["--model", self.model]) - if self.schema is not None: - cmd.extend(["--schema", self.schema]) - - # Print the command for debugging purposes - print("Command being executed:", " ".join(cmd)) - - # Suppress output by redirecting stdout and stderr to subprocess.PIPE - process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=1, text=True) - - # Run the process output reader in a separate thread - output_thread = threading.Thread(target=self._read_process_output, args=(process,)) - output_thread.start() - - process.wait() # Wait for the process to finish - output_thread.join() - - return process.returncode
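Reviewer note: the deleted `agi_wrapper.py` above dispatches URL-encoded output lines tagged `Thought:`, `Action:`, `Observation:`, and `FINAL ANSWER:` (with an optional JSON `CTSI Score:` payload) to user callbacks. A minimal standalone sketch of that parsing protocol, useful when checking this removal doesn't lose behavior anyone depends on. `parse_agi_line` and the sample lines are hypothetical illustrations, not part of the removed package:

```python
import json
import urllib.parse

# Tags the deleted AGIWrapper._handle_output recognized, in dispatch order.
TAGS = ("Thought:", "Action:", "Observation:", "FINAL ANSWER:")

def parse_agi_line(line):
    """URL-decode one AGI output line and return a (tag, payload) tuple.

    For a final answer carrying a CTSI score, payload is a
    (answer_text, score_dict_or_None) pair; untagged lines return (None, text).
    """
    decoded = urllib.parse.unquote(line)
    for tag in TAGS:
        if tag in decoded:
            payload = decoded.split(tag, 1)[1].strip()
            if tag == "FINAL ANSWER:" and "CTSI Score:" in payload:
                answer, _, score = payload.partition("CTSI Score:")
                try:
                    return tag, (answer.strip(), json.loads(score.strip()))
                except json.JSONDecodeError:
                    return tag, (answer.strip(), None)  # score was not valid JSON
            return tag, payload
    return None, decoded

tag, payload = parse_agi_line("Thought%3A%20Check%20the%20WiFi%20settings")
print(tag, payload)  # Thought: Check the WiFi settings
```

The wrapper ran this dispatch inside an `asyncio` coroutine so callbacks could be either plain functions or coroutines; the sketch omits that layer to isolate the line format itself.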