
Update default_settings.toml with lag set to 5
mraniki committed Sep 26, 2023
1 parent 553e6fd commit 737acb4
Showing 2 changed files with 2 additions and 3 deletions.
3 changes: 1 addition & 2 deletions myllm/default_settings.toml
@@ -26,15 +26,14 @@ llm_model = "gpt-3.5-turbo"
# Refer to https://github.com/xtekky/gpt4free
# for the list of supported provider
llm_provider = "g4f.Provider.Bing"
-lag = 0
+lag = 5
# Number of conversation history
# between user and ai
max_memory = 100

# help message listing the commands
# available
llm_commands = """
🦾 /qq\n
💬 /ai\n
➰ /aimode\n
🧽 /clearai\n
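A minimal sketch of how a `lag` setting like the one bumped above might be consumed: sleeping `lag` seconds before each provider call to throttle requests. The names `throttled_call` and `fake_provider` are illustrative assumptions, not myllm's actual API, which this diff does not show.

```python
import asyncio
import time

# Mirrors lag = 5 from default_settings.toml; a small value is used in
# the demo below so it runs quickly.
SETTINGS = {"lag": 5}

async def throttled_call(provider, prompt, lag=SETTINGS["lag"]):
    """Hypothetical helper: wait `lag` seconds before each provider call."""
    await asyncio.sleep(lag)
    return await provider(prompt)

async def fake_provider(prompt):
    # Stand-in for a g4f provider call.
    return f"echo: {prompt}"

async def main():
    start = time.monotonic()
    reply = await throttled_call(fake_provider, "hi", lag=0.01)
    elapsed = time.monotonic() - start
    return reply, elapsed

reply, elapsed = asyncio.run(main())
print(reply)  # echo: hi
```

A larger `lag` trades responsiveness for fewer rate-limit errors from free providers.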
2 changes: 1 addition & 1 deletion myllm/main.py
@@ -98,7 +98,7 @@ async def chat(self, prompt):
return f"{settings.llm_prefix} {response}"
except Exception as error:
logger.error("No response from the model {}", error)
-return "No response from the model"
+return f"{settings.llm_prefix} No response from the model"

Check warning (Codecov / codecov/patch) on line 101 in myllm/main.py: Added line #L101 was not covered by tests.

async def clear_chat_history(self):
"""
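The main.py change above makes the error path return the same `llm_prefix` as the success path. A hedged sketch of that behaviour, with an assumed `Settings` object and provider callable standing in for code the diff does not show:

```python
import asyncio

class Settings:
    # Assumption: a prefix string like the one read from settings in main.py.
    llm_prefix = "AI:"

settings = Settings()

async def chat(prompt, provider):
    """Both the success and the failure path now carry settings.llm_prefix."""
    try:
        response = await provider(prompt)
        return f"{settings.llm_prefix} {response}"
    except Exception as error:
        # The real code logs via loguru: logger.error("No response ...", error)
        print(f"No response from the model {error}")
        return f"{settings.llm_prefix} No response from the model"

async def broken_provider(prompt):
    # Simulate a provider outage to exercise the error path.
    raise RuntimeError("provider down")

result = asyncio.run(chat("hello", broken_provider))
print(result)  # AI: No response from the model
```

This keeps downstream consumers (e.g. chat front-ends that strip or match on the prefix) working even when the model fails to respond.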
