Enhanced Task Processing #333

Open
CodingKoalaGeneral opened this issue Sep 30, 2024 · 1 comment
Labels
enhancement New feature or request


CodingKoalaGeneral commented Sep 30, 2024

Action Item Processing System Using Ollama

Self-thinking Thesis with Code Implementation

Introduction

Optimizing task processing is crucial in artificial intelligence and task automation for achieving efficiency and accuracy. This code example combines self-querying, context optimization, action-item integration, adjustable thinking depth, and summarization that preserves essential details. Using LLMs served by Ollama, the system dynamically enhances task management by decomposing tasks into action items, optimizing context, and integrating the action items into coherent, concise outputs.

The system also highlights interconnections between action items and could be extended to include auto-debugging and error-response back-checking, further improving efficiency and reliability.


System Components and Implementation

1. Self-Querying for Task Decomposition into Action Items

The system intelligently decomposes complex tasks into manageable action items by self-querying the language model.

Implementation:

def get_action_items(description, context):
    prompt = f"""
Using only the following context:

{context}

Determine if the task can be broken down into action items. If it can, list them as a numbered list under "Action Items:". If the task is simple and cannot be broken down further, reply with "No Action Items".

Task:
{description}

Response:
"""
    response = run_ollama_prompt(prompt)
    action_items = parse_action_items(response)
    return action_items
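The `parse_action_items` helper referenced above (listed in full in the appendix) pulls the numbered entries out of the model's reply. A minimal, self-contained sketch of that parsing step, with a canned sample response:

```python
import re

def parse_action_items(response):
    # Return an empty list when the model signals the task is atomic
    if "No Action Items" in response:
        return []
    # Keep only lines that look like "1. ..." and strip the numbering
    return [re.sub(r'^\d+\.\s+', '', line.strip())
            for line in response.strip().split('\n')
            if re.match(r'^\d+\.\s+', line.strip())]

sample = """Action Items:
1. Implement user authentication
2. Encrypt sensitive data"""
print(parse_action_items(sample))
# → ['Implement user authentication', 'Encrypt and store sensitive data'] minus edits:
# → ['Implement user authentication', 'Encrypt sensitive data']
```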

2. Context Optimization and Cleaning

By focusing solely on the information essential to each action item, the system keeps the context optimized and free of irrelevant data.

Implementation:

def get_required_context(description):
    prompt = f"""
Identify the essential context required to accomplish the following task. Provide only the necessary information without omitting important details.

Task:
{description}

Necessary Context:
"""
    context = run_ollama_prompt(prompt)
    return context

3. Highlighting Interconnections Between Action Items

The system identifies dependencies or relationships between action items, ensuring their integration leads to coherent and reasonable overall solutions. Interconnections are highlighted to emphasize their importance.

Implementation:

def highlight_interconnections(action_items, context):
    prompt = f"""
Given the following action items and context, identify and highlight any interconnections or dependencies between them. Explain how they are related in detail.

Action Items:
{chr(10).join(['- ' + action_item for action_item in action_items])}

Context:
{context}

Highlighted Interconnections:
"""
    interconnections = run_ollama_prompt(prompt)
    return interconnections

4. Adjustable Thinking Depth with Automatic Simplification

Users can set the depth of recursive processing to balance detail and efficiency. The system automatically stops decomposing tasks if they are simple enough.

Implementation:

MAX_DEPTH = 3  # Adjustable thinking depth

def set_max_depth(depth):
    global MAX_DEPTH
    MAX_DEPTH = depth
def process_task(task_node):
    if task_node.depth >= MAX_DEPTH:
        task_node.context = get_required_context(task_node.description)
        task_node.result = execute_task(task_node.description, task_node.context)
        return

    task_node.context = get_required_context(task_node.description)
    action_items = get_action_items(task_node.description, task_node.context)

    if action_items:
        interconnections = highlight_interconnections(action_items, task_node.context)
        task_node.interconnections = interconnections

        for action_item_desc in action_items:
            sub_task_node = TaskNode(action_item_desc, depth=task_node.depth + 1)
            task_node.sub_tasks.append(sub_task_node)
            sub_task_node.context = get_required_context(sub_task_node.description)
            process_task(sub_task_node)
        task_node.result = integrate_action_items(task_node)
    else:
        task_node.result = execute_task(task_node.description, task_node.context)

5. Comprehensive Final Responses with Summarization

The system summarizes final outputs to be concise while retaining all important information, ensuring comprehensive responses without losing essential details.

Implementation:

def integrate_action_items(task_node):
    combined_results = "\n\n".join([
        f"Action Item: {sub_task.description}\nResult:\n{sub_task.result}"
        for sub_task in task_node.sub_tasks
    ])

    prompt = f"""
Given the following combined results of action items, verify if they are coherent and reasonable together. Identify inconsistencies if any.

Combined Results:
{combined_results}

Assessment:
"""
    assessment = run_ollama_prompt(prompt)
    task_node.assessment = assessment

    summarized_result = generate_summary(combined_results)

    final_result = f"{summarized_result}\n\nHighlighted Interconnections:\n{task_node.interconnections}\n\nAssessment:\n{assessment}"
    return final_result
def execute_task(description, context):
    prompt = f"""
Using the context:

{context}

Provide a detailed plan to accomplish the following task, including all necessary steps and important details.

Task:
{description}

Detailed Plan:
"""
    detailed_plan = run_ollama_prompt(prompt)
    summarized_plan = generate_summary(detailed_plan)
    return summarized_plan
def generate_summary(detailed_result):
    prompt = f"""
Summarize the following detailed plan into a concise version, ensuring that all important details are retained and no essential information is omitted.

Detailed Plan:
{detailed_result}

Concise Summary:
"""
    summary = run_ollama_prompt(prompt)
    return summary

6. Task Representation with TaskNode Class

The TaskNode class represents each task or action item, holding relevant data for processing.

Implementation:

class TaskNode:
    def __init__(self, description, depth=0):
        self.description = description
        self.sub_tasks = []
        self.result = ""
        self.context = ""
        self.depth = depth
        self.interconnections = ""
        self.assessment = ""
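To make the structure concrete, here is a small offline demonstration of building and walking a TaskNode tree by hand (no LLM calls involved; the sub-task descriptions are illustrative):

```python
class TaskNode:
    def __init__(self, description, depth=0):
        self.description = description
        self.sub_tasks = []
        self.result = ""
        self.context = ""
        self.depth = depth
        self.interconnections = ""
        self.assessment = ""

# Build a two-level tree by hand
root = TaskNode("Develop a secure web application")
for item in ["user authentication", "data encryption"]:
    root.sub_tasks.append(TaskNode(item, depth=root.depth + 1))

# Walk the tree depth-first, printing each node's depth and description
def walk(node):
    print("  " * node.depth + f"[depth {node.depth}] {node.description}")
    for sub in node.sub_tasks:
        walk(sub)

walk(root)
```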

7. Auto Debugging and Error-Response Backchecking

To enhance reliability, the system integrates auto-debugging by capturing errors that occur during the execution of code suggested by the LLM. When a task produces errors, the system captures the debug output and feeds it back into the LLM for analysis and suggestions on how to correct the issue. This iterative process provides automatic error-handling, feedback, and correction.

Implementation:

import subprocess

def run_code_with_auto_debugging(code):
    try:
        # Run the suggested code in a subprocess
        result = subprocess.run(
            ['python', '-c', code],  # Assuming Python code is being executed
            capture_output=True, text=True
        )
        
        if result.returncode == 0:
            # No errors, return the standard output
            return result.stdout.strip()
        else:
            # Capture and log the error
            error_message = result.stderr.strip()
            print(f"Error encountered: {error_message}")
            
            # Feed the error back to the LLM for debugging suggestions
            debug_prompt = f"""
The following code produced an error:

Code:
{code}

Error:
{error_message}

Please suggest corrections to fix the error.
"""
            corrected_code = run_ollama_prompt(debug_prompt)
            
            # Optionally, try rerunning the corrected code automatically
            print("Attempting to rerun with corrected code...")
            return run_code_with_auto_debugging(corrected_code)
    
    except Exception as e:
        print(f"An error occurred: {e}")
        raise

def run_ollama_prompt(prompt):
    try:
        # Interact with the LLM; `ollama run` takes the prompt as a positional argument
        result = subprocess.run(
            ['ollama', 'run', 'llama2', prompt.strip()],
            capture_output=True, text=True
        )
        if result.returncode == 0:
            return result.stdout.strip()
        else:
            raise Exception(f"LLM encountered an error: {result.stderr.strip()}")
    
    except Exception as e:
        print(f"An error occurred with the LLM: {e}")
        raise

How It Works:

  1. Capturing Code Execution Errors: The system runs the LLM-suggested code (e.g., Python code) and captures any runtime or compilation errors.

  2. Feeding Errors Back to the LLM: When an error occurs, the system constructs a new prompt that provides the LLM with the code and the associated error message, asking for suggestions to fix the issue.

  3. Iterative Debugging: After receiving corrections from the LLM, the system attempts to rerun the code with the suggested fixes. This creates an automated feedback loop that continuously refines the code until it executes successfully or reaches a terminal state.

Considerations:

  • Flexibility: This implementation assumes Python code execution, but you can generalize this to other languages by modifying the command in the subprocess.run call.
  • Recursion Limit: You might want to set a limit on the number of retries to avoid an infinite loop if the LLM continuously produces incorrect code.
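Following the recursion-limit consideration above, one way to bound the retry loop is an explicit max_retries parameter. This is a hedged sketch, not part of the implementation above; the names run_code_with_retry_limit, llm_fix, and max_retries are introduced here for illustration, and the demo uses a canned "LLM" callback instead of a real model call:

```python
import subprocess
import sys

def run_code_with_retry_limit(code, llm_fix, max_retries=3):
    """Run `code`, asking `llm_fix(code, error)` for a correction on failure,
    giving up after `max_retries` attempts instead of recursing forever."""
    for attempt in range(max_retries):
        result = subprocess.run([sys.executable, '-c', code],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout.strip()
        print(f"Attempt {attempt + 1} failed: {result.stderr.strip().splitlines()[-1]}")
        code = llm_fix(code, result.stderr)
    raise RuntimeError(f"Code still failing after {max_retries} attempts")

# Demo: the canned "LLM" repairs a NameError on the first retry
fixed = run_code_with_retry_limit(
    "print(undefined_name)",
    llm_fix=lambda code, err: "print('fixed')",
)
print(fixed)  # → fixed
```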

Example Usage

To illustrate the use case of this system and demonstrate how it operates in practice, let's consider a comprehensive coding example.

Scenario:

A software engineer wants to develop a secure web application that includes user authentication, data encryption, and a RESTful API for database interactions. The engineer inputs the following task description:

user_input = """
Develop a secure web application that includes user authentication, data encryption, and a RESTful API for database interactions.
"""

Example Usage Code:

if __name__ == "__main__":
    # Set the adjustable thinking depth to allow for sufficient task decomposition
    set_max_depth(3)

    # User input representing a complex coding task
    user_input = """
    Develop a secure web application that includes user authentication, data encryption, and a RESTful API for database interactions.
    """

    # Analyze the input to create primary task nodes
    task_nodes = analyze_input(user_input)

    # Process each task node
    for task_node in task_nodes:
        process_task(task_node)

    # Output the results for each task
    for task_node in task_nodes:
        print(f"\nTask: {task_node.description}")
        print(f"Summary:\n{task_node.result}")

Expected Output:

Task: Develop a secure web application

Summary:
- Designed a web application with a secure architecture using industry best practices.
- Implemented user authentication with secure password hashing and account management.
- Integrated data encryption for sensitive data both in transit and at rest.
- Developed a RESTful API for database interactions, following proper authentication and authorization protocols.

Highlighted Interconnections:
- User authentication is essential for securing the RESTful API, ensuring only authorized users can access database interactions.
- Data encryption protects sensitive information handled during authentication and API transactions.
- The secure web architecture underpins all components, integrating authentication, encryption, and API securely.

Assessment:
The action items are coherent and interrelated, ensuring the web application is secure and functional. No inconsistencies detected.

Task: includes user authentication

Summary:
- Chose a reliable authentication framework (e.g., JWT, OAuth 2.0).
- Implemented user registration and login functionalities with input validation.
- Used bcrypt for password hashing to securely store user credentials.
- Set up session management and token-based authentication for API access.

Assessment:
User authentication is securely implemented, following best practices to protect user data.

Task: data encryption

Summary:
- Implemented SSL/TLS for encrypting data in transit.
- Used AES encryption for sensitive data stored in the database.
- Managed encryption keys securely using a key management service.
- Ensured compliance with data protection regulations.

Assessment:
Data encryption is comprehensively addressed, safeguarding data both in transit and at rest.

Task: a RESTful API for database interactions

Summary:
- Designed RESTful endpoints for CRUD operations.
- Ensured API endpoints are secured with proper authentication and authorization checks.
- Used JSON Web Tokens (JWT) for secure API communication.
- Implemented input sanitization to prevent SQL injection and other attacks.

Assessment:
The RESTful API is well-designed and secure, enabling safe database interactions.

Explanation:

  • Action Items Decomposition: The system has broken down the complex primary task into manageable action items, such as implementing user authentication, data encryption, and developing a RESTful API.

  • Context Optimization: For each action item, the system has retrieved the necessary context to ensure relevant information is used during processing.

  • Interconnections Highlighted: The system has identified and highlighted interconnections between action items, emphasizing how they are related and dependent on each other.

  • Sub-Task Processing: Each action item is processed recursively, respecting the adjustable thinking depth, and further broken down if necessary.

  • Integration and Summarization: The results of the action items are integrated and summarized, providing concise yet comprehensive outputs.

  • Assessment: The system assesses the coherence and reasonability of the action items, ensuring that the final plan is consistent and covers all important aspects.

  • Auto Debugging and Error Handling: The implementation includes error checking during the execution of code, with the capability to automatically capture errors, feed them back to the LLM for debugging suggestions, and attempt corrections to resolve any issues that arise.

Demonstrated Use Case:

This example showcases how the system can handle complex coding tasks typical in software development. By automatically decomposing tasks into action items, optimizing context, highlighting interconnections, and integrating results, the system helps developers generate detailed implementation plans that are both actionable and aligned with best practices. The inclusion of auto-debugging and error-response back-checking improves reliability, reducing the number of user inputs needed to reach the same optimized coding result.


Appendix: Full Code Listing

import re
import subprocess

MAX_DEPTH = 3  # Adjustable thinking depth

def set_max_depth(depth):
    global MAX_DEPTH
    MAX_DEPTH = depth

class TaskNode:
    def __init__(self, description, depth=0):
        self.description = description
        self.sub_tasks = []
        self.result = ""
        self.context = ""
        self.depth = depth
        self.interconnections = ""
        self.assessment = ""

def analyze_input(user_input):
    tasks = re.split(r'\band\b|,', user_input)
    return [TaskNode(task.strip()) for task in tasks if task.strip()]
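Note that analyze_input performs a naive split on commas and the standalone word "and". A quick offline check of that split for the example input (returning plain strings here instead of TaskNode objects, for brevity):

```python
import re

def split_tasks(user_input):
    # Same split rule as analyze_input, minus the TaskNode wrapping
    tasks = re.split(r'\band\b|,', user_input)
    return [t.strip() for t in tasks if t.strip()]

parts = split_tasks(
    "Develop a secure web application that includes user authentication, "
    "data encryption, and a RESTful API for database interactions."
)
print(parts)
# → ['Develop a secure web application that includes user authentication',
#    'data encryption', 'a RESTful API for database interactions.']
```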

def process_task(task_node):
    if task_node.depth >= MAX_DEPTH:
        task_node.context = get_required_context(task_node.description)
        task_node.result = execute_task(task_node.description, task_node.context)
        return

    task_node.context = get_required_context(task_node.description)
    action_items = get_action_items(task_node.description, task_node.context)

    if action_items:
        interconnections = highlight_interconnections(action_items, task_node.context)
        task_node.interconnections = interconnections

        for action_item_desc in action_items:
            sub_task_node = TaskNode(action_item_desc, depth=task_node.depth + 1)
            task_node.sub_tasks.append(sub_task_node)
            sub_task_node.context = get_required_context(sub_task_node.description)
            process_task(sub_task_node)
        task_node.result = integrate_action_items(task_node)
    else:
        task_node.result = execute_task(task_node.description, task_node.context)

def get_required_context(description):
    prompt = f"""
Identify the essential context required to accomplish the following task. Provide only the necessary information without omitting important details.

Task:
{description}

Necessary Context:
"""
    context = run_ollama_prompt(prompt)
    return context

def get_action_items(description, context):
    prompt = f"""
Using only the following context:

{context}

Determine if the task can be broken down into action items. If it can, list them as a numbered list under "Action Items:". If the task is simple and cannot be broken down further, reply with "No Action Items".

Task:
{description}

Response:
"""
    response = run_ollama_prompt(prompt)
    action_items = parse_action_items(response)
    return action_items

def parse_action_items(response):
    action_items = []
    if "No Action Items" in response:
        return action_items

    lines = response.strip().split('\n')
    for line in lines:
        line = line.strip()
        if re.match(r'^\d+\.\s+.*', line):
            action_item = re.sub(r'^\d+\.\s+', '', line)
            action_items.append(action_item)
    return action_items

def highlight_interconnections(action_items, context):
    prompt = f"""
Given the following action items and context, identify and highlight any interconnections or dependencies between them. Explain how they are related in detail.

Action Items:
{chr(10).join(['- ' + action_item for action_item in action_items])}

Context:
{context}

Highlighted Interconnections:
"""
    interconnections = run_ollama_prompt(prompt)
    return interconnections

def integrate_action_items(task_node):
    combined_results = "\n\n".join([
        f"Action Item: {sub_task.description}\nResult:\n{sub_task.result}"
        for sub_task in task_node.sub_tasks
    ])

    prompt = f"""
Given the following combined results of action items, verify if they are coherent and reasonable together. Identify inconsistencies if any.

Combined Results:
{combined_results}

Assessment:
"""
    assessment = run_ollama_prompt(prompt)
    task_node.assessment = assessment

    summarized_result = generate_summary(combined_results)

    final_result = f"{summarized_result}\n\nHighlighted Interconnections:\n{task_node.interconnections}\n\nAssessment:\n{assessment}"
    return final_result

def execute_task(description, context):
    prompt = f"""
Using the context:

{context}

Provide a detailed plan to accomplish the following task, including all necessary steps and important details.

Task:
{description}

Detailed Plan:
"""
    detailed_plan = run_ollama_prompt(prompt)
    summarized_plan = generate_summary(detailed_plan)
    return summarized_plan

def generate_summary(detailed_result):
    prompt = f"""
Summarize the following detailed plan into a concise version, ensuring that all important details are retained and no essential information is omitted.

Detailed Plan:
{detailed_result}

Concise Summary:
"""
    summary = run_ollama_prompt(prompt)
    return summary

def run_ollama_prompt(prompt):
    try:
        # Interact with the LLM; `ollama run` takes the prompt as a positional argument
        result = subprocess.run(
            ['ollama', 'run', 'llama2', prompt.strip()],
            capture_output=True, text=True
        )
        if result.returncode == 0:
            return result.stdout.strip()
        else:
            # If there's an error with the LLM itself
            error_message = result.stderr.strip()
            raise Exception(f"LLM encountered an error: {error_message}")
    
    except Exception as e:
        # Handle LLM-specific errors
        print(f"An error occurred with the LLM: {e}")
        raise

def run_code_with_auto_debugging(code):
    try:
        # Run the suggested code in a subprocess
        result = subprocess.run(
            ['python', '-c', code],  # Modify this to fit your desired language/execution environment
            capture_output=True, text=True
        )
        
        if result.returncode == 0:
            # No errors, return the standard output
            return result.stdout.strip()
        else:
            # Capture and log the error from code execution
            error_message = result.stderr.strip()
            print(f"Error encountered: {error_message}")
            
            # Feed the error back to the LLM for debugging suggestions
            debug_prompt = f"""
The following code produced an error:

Code:
{code}

Error:
{error_message}

Please suggest corrections to fix the error.
"""
            corrected_code = run_ollama_prompt(debug_prompt)
            
            # Optionally, try rerunning the corrected code automatically
            print("Attempting to rerun with corrected code...")
            return run_code_with_auto_debugging(corrected_code)
    
    except Exception as e:
        print(f"An error occurred: {e}")
        raise

# Example usage
if __name__ == "__main__":
    set_max_depth(3)

    user_input = """
    Develop a secure web application that includes user authentication, data encryption, and a RESTful API for database interactions.
    """
    task_nodes = analyze_input(user_input)

    for task_node in task_nodes:
        process_task(task_node)

    for task_node in task_nodes:
        print(f"\nTask: {task_node.description}")
        print(f"Summary:\n{task_node.result}")

CodingKoalaGeneral added the enhancement label Sep 30, 2024

CodingKoalaGeneral commented Oct 1, 2024

Integration of Decision Waypoints for Enhanced Task Evaluation and User Interaction

To further enhance the system's capability in managing complex and high-stakes tasks, decision waypoints are integrated into the task processing workflow. These waypoints enable the system to assess the complexity and importance of each sub-action item based on the specific use case and relevant best practices. When sub-action items involve critical design choices with significant consequences or when instructions are ambiguous, the system proactively engages the user to determine the appropriate course of action. This ensures that the system maintains both autonomy and alignment with user intentions, especially in scenarios requiring informed decision-making.

Purpose and Advantages

  • Dynamic Complexity and Importance Assessment: Evaluates each sub-action item's complexity and importance within the context of the specific use case, informed by industry best practices.
  • Risk Mitigation: Identifies sub-action items with substantial impact or potential risks, prompting user intervention to prevent unintended outcomes.
  • User Empowerment: In situations of high complexity or ambiguity, the system seeks user input to guide decision-making, ensuring that outcomes align with user goals and preferences.
  • Best Practices Integration: Automatically references industry best practices to inform sub-action item execution, maintaining high standards of quality and reliability.
  • Adaptive Sensitivity: The system dynamically determines its sensitivity in deciding when to seek user clarification, balancing efficiency and precision based on contextual analysis.

Implementation Framework

Default Querying of Best Practices

By default, the system queries the LLM for industry best practices relevant to each sub-action item. This ensures that task execution adheres to established standards, promoting consistency and excellence.

Implementation:

def get_best_practices(description):
    prompt = f"""
Identify and provide industry best practices for performing the following task effectively and efficiently.

Task:
{description}

Best Practices:
"""
    best_practices = run_ollama_prompt(prompt)
    return best_practices

Complexity and Importance Evaluation Based on Best Practices

The system evaluates each sub-action item's complexity and importance by analyzing both the task description and the associated best practices. This comprehensive assessment ensures a thorough understanding of each sub-action item's significance and potential impact.

Implementation:

COMPLEXITY_THRESHOLD = 7  # Scale of 1 to 10
IMPORTANCE_THRESHOLD = 7  # Scale of 1 to 10

def evaluate_complexity_and_importance(description, best_practices):
    prompt = f"""
Based on the following task and its best practices, evaluate the task's complexity and importance on a scale of 1 to 10.

Task:
{description}

Best Practices:
{best_practices}

Response:
- Complexity (1-10):
- Importance (1-10):
"""
    response = run_ollama_prompt(prompt)
    complexity = extract_value(response, "Complexity")
    importance = extract_value(response, "Importance")
    return complexity, importance

def extract_value(response, label):
    match = re.search(f"{label} \\(1-10\\):\\s*(\\d+)", response)
    if match:
        return int(match.group(1))
    else:
        return 0  # Default value if not found
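A quick check of extract_value against a typical model reply (self-contained copy of the helper, with a canned reply string):

```python
import re

def extract_value(response, label):
    # Pull the integer after e.g. "Complexity (1-10):", defaulting to 0
    match = re.search(f"{label} \\(1-10\\):\\s*(\\d+)", response)
    return int(match.group(1)) if match else 0

reply = """Response:
- Complexity (1-10): 8
- Importance (1-10): 6"""
print(extract_value(reply, "Complexity"))  # → 8
print(extract_value(reply, "Importance"))  # → 6
print(extract_value(reply, "Risk"))        # → 0 (label absent, default)
```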

Decision Waypoints and User Consultation

When a sub-action item's complexity or importance exceeds predefined thresholds, the system assesses whether to proceed autonomously or seek user input. In cases where best practices indicate potential risks or critical design choices, the system prompts the user to decide the course of action, ensuring informed and deliberate outcomes.

Implementation:

def should_consider_user_input(complexity, importance, best_practices):
    risk_factors = analyze_risks(best_practices)
    return (complexity >= COMPLEXITY_THRESHOLD or importance >= IMPORTANCE_THRESHOLD or risk_factors)

def analyze_risks(best_practices):
    prompt = f"""
Analyze the following best practices and identify any potential risks or critical design choices that may have significant consequences if not properly addressed.

Best Practices:
{best_practices}

Risks and Critical Design Choices:
"""
    risks = run_ollama_prompt(prompt)
    return bool(risks.strip())  # Returns True if risks are identified

def seek_user_decision(description, best_practices):
    prompt = f"""
The task below has been evaluated for complexity and importance based on the provided best practices. It may involve significant risks or critical design choices with substantial consequences.

Task:
{description}

Best Practices:
{best_practices}

Please choose how to proceed:
1. Provide additional instructions or preferences to guide the task execution.
2. Allow the system to proceed based on the current best practices.
3. Abort the task due to identified risks.

Your Decision:
"""
    user_decision = get_user_input(prompt)
    return user_decision

def get_user_input(prompt):
    # Placeholder for user interaction mechanism
    print(prompt)
    decision = input("Enter your choice (1/2/3): ").strip()
    return decision

Handling User Decisions

Based on the user's input at decision waypoints, the system adapts its processing strategy to align with the user's preferences and the sub-action item's requirements.

Implementation:

def handle_user_decision(decision, task_node):
    if decision == "1":
        clarification = get_user_clarification(task_node.description)
        task_node.description += "\n" + clarification
        task_node.context = get_required_context(task_node.description)
    elif decision == "2":
        # Proceed with best practices without additional input
        task_node.context = get_best_practices(task_node.description)
    elif decision == "3":
        # Abort the task processing
        task_node.result = "Task aborted by user due to identified risks."
    else:
        # Handle invalid input
        print("Invalid choice. Aborting task for safety.")
        task_node.result = "Task aborted due to invalid user input."

Dynamic Picky Level Determination

Instead of using an adjustable picky variable set by the user, the system dynamically determines its sensitivity in seeking user input based on the context and analysis of each task. This adaptive approach leverages the LLM's capabilities to assess when user intervention is most beneficial, ensuring a balance between system autonomy and necessary oversight.

Implementation:

def determine_picky_level(complexity, importance, best_practices):
    prompt = f"""
Given the task's complexity of {complexity} and importance of {importance}, along with the following best practices, determine the appropriate picky level on a scale of 1 to 10. A higher picky level means the system is more inclined to seek user intervention.

Task Complexity: {complexity}
Task Importance: {importance}

Best Practices:
{best_practices}

Determine the Picky Level (1-10):
"""
    picky_level = run_ollama_prompt(prompt).strip()
    try:
        picky_level = int(picky_level)
        picky_level = max(1, min(picky_level, 10))  # Ensures level is between 1 and 10
    except ValueError:
        picky_level = 5  # Default value if parsing fails
    return picky_level

def should_seek_user_intervention(complexity, importance, best_practices):
    base_condition = should_consider_user_input(complexity, importance, best_practices)
    if not base_condition:
        return False

    picky_level = determine_picky_level(complexity, importance, best_practices)

    # Query the LLM to determine the necessity based on picky level
    if 4 <= picky_level < 7:
        prompt = f"""
Given the task's complexity of {complexity} and importance of {importance}, and the determined picky level of {picky_level}, should the system seek user intervention? Respond with "Yes" or "No".

Task Complexity: {complexity}
Task Importance: {importance}
Picky Level: {picky_level}

Response:
"""
        decision = run_ollama_prompt(prompt).strip().lower()
        return decision == "yes"
    elif picky_level >= 7:
        return True
    else:
        return False

Integration of Decision Waypoints into Task Processing

The core process_task function is augmented to incorporate decision waypoints, ensuring that each sub-action item undergoes complexity and importance evaluation, followed by appropriate user interaction when necessary.

Implementation:

def process_task(task_node):
    if task_node.depth == 0:
        # For the main task, retrieve action items without evaluating complexity and importance
        task_node.context = get_required_context(task_node.description)
        action_items = get_action_items(task_node.description, task_node.context)
    else:
        # For sub-action items, retrieve best practices and evaluate complexity and importance
        best_practices = get_best_practices(task_node.description)
        complexity, importance = evaluate_complexity_and_importance(task_node.description, best_practices)
        
        if should_seek_user_intervention(complexity, importance, best_practices):
            decision = seek_user_decision(task_node.description, best_practices)
            handle_user_decision(decision, task_node)
            if task_node.result.startswith("Task aborted"):
                return  # Abort further processing for this sub-action item
        else:
            task_node.context = get_best_practices(task_node.description)
    
    if task_node.result == "":
        if task_node.depth >= MAX_DEPTH:
            task_node.result = execute_task(task_node.description, task_node.context)
            return

        action_items = get_action_items(task_node.description, task_node.context)

        if action_items:
            interconnections = highlight_interconnections(action_items, task_node.context)
            task_node.interconnections = interconnections

            for action_item_desc in action_items:
                sub_task_node = TaskNode(action_item_desc, depth=task_node.depth + 1)
                task_node.sub_tasks.append(sub_task_node)
                process_task(sub_task_node)
            task_node.result = integrate_action_items(task_node)
        else:
            task_node.result = execute_task(task_node.description, task_node.context)
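`process_task` builds a tree of `TaskNode` objects. A minimal sketch of such a node follows; the field names are inferred from the attribute accesses in the code above (`description`, `depth`, `context`, `result`, `interconnections`, `sub_tasks`), not taken from a published implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    """Node in the decomposition tree built by process_task (inferred fields)."""
    description: str            # task or sub-action item text
    depth: int = 0              # 0 for the main task, +1 per decomposition level
    context: str = ""           # retrieved context or best practices
    result: str = ""            # execution or integration output
    interconnections: str = ""  # LLM-highlighted links between action items
    sub_tasks: list = field(default_factory=list)  # child TaskNode instances

root = TaskNode("Develop a scalable e-commerce platform.")
root.sub_tasks.append(TaskNode("Implement secure payment gateways.", depth=1))
```

Defaulting `result` to the empty string is what lets the `if task_node.result == "":` guard in `process_task` distinguish untouched nodes from aborted ones.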

Workflow Summary

  1. Best Practices Retrieval: For each sub-action item, the system queries the LLM for relevant industry best practices to inform task execution.
  2. Complexity and Importance Evaluation: The system assesses each sub-action item's complexity and importance based on its description and the retrieved best practices.
  3. Risk Analysis: Evaluates potential risks or critical design choices derived from best practices that could have significant consequences.
  4. Dynamic Decision Waypoint Triggering: If thresholds for complexity or importance are exceeded, or if significant risks are identified, the system dynamically determines whether to seek user intervention based on contextual analysis.
  5. User Consultation: Depending on the dynamically determined criteria, the system may prompt the user to provide additional instructions, proceed autonomously, or abort the sub-action item.
  6. Adaptive Processing: Based on user decisions, the system adjusts its processing strategy to align with user preferences and sub-action item requirements.
  7. Continued Task Processing: If not aborted, the system continues to decompose and process further sub-action items, integrating results and maintaining coherence.
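Steps 5 and 6 rely on `seek_user_decision` and `handle_user_decision`, which `process_task` calls but which are not shown. A minimal sketch of the decision handler, assuming the three menu options from the user prompt and a dict-shaped decision (both the dict shape and the exact abort message are illustrative assumptions):

```python
from types import SimpleNamespace

def handle_user_decision(decision, task_node):
    """Apply one of the three menu options to the task node.

    `decision` is assumed to be a dict like {"option": 1, "instructions": "..."}
    produced by seek_user_decision; the exact shape is illustrative.
    """
    option = decision.get("option")
    if option == 1:
        # Fold the user's instructions into the description so later
        # context retrieval and execution see them.
        task_node.description += "\n\nUser instructions:\n" + decision.get("instructions", "")
    elif option == 3:
        # Mark the node so process_task's startswith("Task aborted") check fires.
        task_node.result = "Task aborted by user due to identified risks."
    # Option 2: proceed autonomously, nothing to change.

node = SimpleNamespace(description="Design scalable architecture.", result="")
handle_user_decision({"option": 3}, node)
```

After the call above, `node.result` starts with "Task aborted", which is exactly the prefix `process_task` tests before returning early.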

Example Scenario with Enhanced Decision Waypoints

User Input:

user_input = """
Develop a scalable e-commerce platform with integrated payment and inventory management systems.
"""

Processing Steps:

  1. Main Task Processing:

    • Description: "Develop a scalable e-commerce platform with integrated payment and inventory management systems."
    • Context Retrieval: The system retrieves the necessary context to understand the scope and requirements of the main task.
    • Action Items Generation: The system decomposes the main task into the following action items:
      1. Implement secure payment gateways.
      2. Design scalable architecture.
      3. Ensure data consistency in inventory management.
      4. Comply with financial regulations.
      5. Incorporate automated testing and continuous integration.
  2. Sub-Action Item Processing:

    For each sub-action item, the system performs the following:

    • Sub-Action Item 1: "Implement secure payment gateways."

      • Best Practices Retrieval: The system queries the LLM for best practices related to implementing secure payment gateways.
      • Complexity and Importance Evaluation:
        • Complexity: 8
        • Importance: 9
      • Risk Analysis: Identifies potential risks such as security vulnerabilities and compliance with PCI DSS.
      • Dynamic Decision Waypoint Triggered: Since both complexity and importance exceed the thresholds, and significant risks are identified, the system dynamically determines whether to seek user intervention.
        • Picky Level Determination:

          • Prompt Sent to LLM:

            def determine_picky_level(complexity, importance, best_practices):
                prompt = f"""
            Given the task's complexity of {complexity} and importance of {importance}, along with the following best practices, determine the appropriate picky level on a scale of 1 to 10. A higher picky level means the system is more inclined to seek user intervention.
            
            Task Complexity: {complexity}
            Task Importance: {importance}
            
            Best Practices:
            {best_practices}
            
            Determine the Picky Level (1-10):
            """
                picky_level = run_ollama_prompt(prompt).strip()
                try:
                    picky_level = int(picky_level)
                    picky_level = max(1, min(picky_level, 10))  # Ensures level is between 1 and 10
                except ValueError:
                    picky_level = 5  # Default value if parsing fails
                return picky_level
          • LLM Response: Suppose the LLM determines a picky level of 8.

        • User Intervention Decision:

          • Picky Level: 8

          • Prompt Sent to LLM:

            prompt = f"""
            Given the task's complexity of {complexity} and importance of {importance}, and the determined picky level of {picky_level}, should the system seek user intervention? Respond with "Yes" or "No".

            Task Complexity: {complexity}
            Task Importance: {importance}
            Picky Level: {picky_level}

            Response:
            """
            decision = run_ollama_prompt(prompt).strip().lower()

          • LLM Decision: The LLM responds with "Yes," indicating that user intervention is necessary.
        • System Prompt to User:

          The task below has been evaluated for complexity and importance based on the provided best practices. It may involve significant risks or critical design choices with substantial consequences.
          
          Task:
          Implement secure payment gateways.
          
          Best Practices:
          - Utilize encryption protocols like SSL/TLS to protect data in transit.
          - Implement fraud detection mechanisms to identify and prevent fraudulent transactions.
          - Ensure compliance with PCI DSS standards for handling payment information.
          - Regularly update and patch payment gateway software to address security vulnerabilities.
          
          Please choose how to proceed:
          1. Provide additional instructions or preferences to guide the task execution.
          2. Allow the system to proceed based on the current best practices.
          3. Abort the task due to identified risks.
          
        • User Decision: Suppose the user selects option 2, allowing the system to proceed based on the current best practices.

        • Processing Based on Decision: The system proceeds to implement secure payment gateways following the retrieved best practices.

    • Sub-Action Item 2: "Design scalable architecture."

      • Best Practices Retrieval: The system queries the LLM for best practices related to designing scalable architectures.
      • Complexity and Importance Evaluation:
        • Complexity: 9
        • Importance: 8
      • Risk Analysis: Identifies potential scalability challenges and the need for efficient load balancing.
      • Dynamic Decision Waypoint Triggered: Both complexity and importance exceed the thresholds, prompting user consultation.
        • Picky Level Determination:

          • Prompt Sent to LLM:

            picky_level = determine_picky_level(complexity, importance, best_practices)
          • LLM Response: Suppose the LLM determines a picky level of 7.

        • User Intervention Decision:

          • Picky Level: 7

          • Prompt Sent to LLM:

            prompt = f"""
            Given the task's complexity of {complexity} and importance of {importance}, and the determined picky level of {picky_level}, should the system seek user intervention? Respond with "Yes" or "No".

            Task Complexity: {complexity}
            Task Importance: {importance}
            Picky Level: {picky_level}

            Response:
            """
            decision = run_ollama_prompt(prompt).strip().lower()

          • LLM Decision: The LLM responds with "Yes," indicating that user intervention is necessary.
        • System Prompt to User:

          The task below has been evaluated for complexity and importance based on the provided best practices. It may involve significant risks or critical design choices with substantial consequences.
          
          Task:
          Design scalable architecture.
          
          Best Practices:
          - Utilize microservices to handle different components independently.
          - Implement load balancers to distribute incoming traffic efficiently.
          - Use containerization technologies like Docker and orchestration tools like Kubernetes.
          - Ensure horizontal scalability to handle increased loads without significant downtime.
          
          Please choose how to proceed:
          1. Provide additional instructions or preferences to guide the task execution.
          2. Allow the system to proceed based on the current best practices.
          3. Abort the task due to identified risks.
          
        • User Decision: Suppose the user selects option 1 and provides further instructions:

          Focus on integrating the architecture with cloud services to leverage auto-scaling features and ensure high availability.
          
        • Updated Task Description: The system appends the clarification to the original sub-action item description and retrieves the necessary context.

        • Processing Based on Clarification: The system designs a scalable architecture that integrates with cloud services, leveraging auto-scaling and ensuring high availability.

    • Sub-Action Item 3: "Ensure data consistency in inventory management."

      • Best Practices Retrieval: The system queries the LLM for best practices related to ensuring data consistency in inventory management.
      • Complexity and Importance Evaluation:
        • Complexity: 6
        • Importance: 7
      • Risk Analysis: Identifies potential data synchronization issues and the need for reliable database solutions.
      • Dynamic Decision Waypoint Triggered: Since complexity is moderate and importance meets the threshold, the system assesses whether to seek user input based on the dynamically determined picky level.
        • Picky Level Determination:

          • Prompt Sent to LLM:

            picky_level = determine_picky_level(complexity, importance, best_practices)
          • LLM Response: Suppose the LLM determines a picky level of 5.

        • User Intervention Decision:

          • Picky Level: 5

          • Prompt Sent to LLM:

            prompt = f"""
            Given the task's complexity of {complexity} and importance of {importance}, and the determined picky level of {picky_level}, should the system seek user intervention? Respond with "Yes" or "No".

            Task Complexity: {complexity}
            Task Importance: {importance}
            Picky Level: {picky_level}

            Response:
            """
            decision = run_ollama_prompt(prompt).strip().lower()

          • LLM Decision: The LLM responds with "No," indicating that user intervention is not necessary.
        • Processing Based on LLM Decision: The system proceeds to ensure data consistency in inventory management following the best practices without seeking further user input.

    • Sub-Action Items 4 and 5: Similar processing occurs for the remaining sub-action items, with decision waypoints evaluating their complexity and importance, and user consultation as needed based on the dynamically determined picky level.
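
Throughout the walkthrough, sub-action items are extracted from a numbered list in the model's reply. A minimal parser for that reply format, assuming the "Action Items:" / "No Action Items" convention from the `get_action_items` prompt (the function name `parse_action_items` is illustrative):

```python
import re

def parse_action_items(response):
    """Extract numbered action items from an LLM reply, or [] if none.

    Assumes the reply either contains "No Action Items" or a numbered
    list under an "Action Items:" header, per the get_action_items prompt.
    """
    if "no action items" in response.lower():
        return []
    items = []
    for line in response.splitlines():
        m = re.match(r"\s*\d+[.)]\s+(.*\S)", line)
        if m:
            items.append(m.group(1))
    return items

reply = """Action Items:
1. Implement secure payment gateways.
2. Design scalable architecture."""
```

Here `parse_action_items(reply)` yields the two item strings, and `parse_action_items("No Action Items")` yields an empty list, so a simple truthiness check on the result drives the decompose-versus-execute branch in `process_task`.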

Conclusion:

By allowing the LLM to dynamically determine the picky level, the system enhances its adaptability and responsiveness to varying task complexities and importance levels. This approach ensures that user intervention is sought judiciously, maintaining an optimal balance between system autonomy and necessary oversight. The integration of dynamically determined decision waypoints facilitates more intelligent and context-aware task processing, aligning execution strategies with both best practices and user-specific requirements.
