cache #636
Conversation
```diff
-export function getChatCompletionCache(
-    name?: string
-): ChatCompletationRequestCache {
+export function getChatCompletionCache(): ChatCompletationRequestCache {
```
The function `getChatCompletionCache` has been modified to no longer accept an argument. This could break any existing calls to this function that pass an argument.

generated by pr-review-commit `missing_argument`
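If breaking existing callers is a concern, one backward-compatible option is to keep the parameter optional and default it to the shared cache. A minimal sketch, where `CHAT_CACHE` and `createCache` are placeholders for whatever the repository actually uses:

```ts
// Placeholders standing in for the repository's real cache machinery.
declare const CHAT_CACHE: string
interface ChatCompletationRequestCache {}
declare function createCache(name: string): ChatCompletationRequestCache

// Sketch: `name` stays optional, so call sites that still pass an
// argument keep compiling, while argument-less calls get CHAT_CACHE.
export function getChatCompletionCache(
    name?: string
): ChatCompletationRequestCache {
    return createCache(name ?? CHAT_CACHE)
}
```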
```diff
@@ -68,7 +67,7 @@ export const OpenAIChatCompletion: ChatCompletionHandler = async (
     const { model } = parseModelIdentifier(req.model)
     const encoder = await resolveTokenEncoder(model)

-    const cache = getChatCompletionCache(cacheName)
+    const cache = getChatCompletionCache()
```
The function `getChatCompletionCache` is called without an argument. This could break functionality if the function was expected to handle different cache names.

generated by pr-review-commit `missing_argument`
```diff
@@ -327,6 +327,7 @@ export async function runTemplate(
             if (oannotations) annotations = oannotations.slice(0)
         }
     } catch (e) {
+        logError(e)
```
An error is logged but not handled, which could lead to unexpected behavior or crashes. Consider throwing the error after logging, or handling it appropriately.

generated by pr-review-commit `error_handling`
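A minimal sketch of the rethrow suggestion, assuming callers of `runTemplate` are prepared to receive the error:

```ts
try {
    // ... template expansion and annotation collection ...
} catch (e) {
    logError(e)
    // Rethrow so the failure is not silently swallowed; alternatively,
    // translate it into an error result if runTemplate reports failures
    // through its return value instead.
    throw e
}
```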
```diff
@@ -68,7 +67,7 @@
     const { model } = parseModelIdentifier(req.model)
     const encoder = await resolveTokenEncoder(model)

-    const cache = getChatCompletionCache(cacheName)
+    const cache = getChatCompletionCache()
```
The call `getChatCompletionCache()` has been modified to no longer pass any arguments. This is consistent with the change to the `getChatCompletionCache` function itself, but it could affect the behavior of the `OpenAIChatCompletion` function.

generated by pr-review-commit `changed_function_call`
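The practical consequence, presumably, is that every call site now shares a single store; a short illustration:

```ts
// With the zero-argument signature there is no way to isolate caches
// per name: both handles below presumably point at the same
// CHAT_CACHE-backed store.
const a = getChatCompletionCache()
const b = getChatCompletionCache()
```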
```diff
@@ -618,7 +618,7 @@ interface ExpansionVariables {

 type MakeOptional<T, P extends keyof T> = Partial<Pick<T, P>> & Omit<T, P>

-type PromptArgs = Omit<PromptScript, "text" | "id" | "jsSource" | "activation">
+type PromptArgs = Omit<PromptScript, "text" | "id" | "jsSource" | "activation", "cacheName">
```
The type `PromptArgs` has been modified to omit the `cacheName` property from `PromptScript`. This could break any existing code that relies on `cacheName` being a property of `PromptArgs`.

generated by pr-review-commit `changed_type`
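As an aside, `Omit` accepts exactly two type parameters (the source type and a union of keys), so the new key belongs in the union rather than after a comma; a corrected sketch of the intended type:

```ts
// Omit<T, K> removes the keys in the union K from T; extra keys are
// joined with `|`, not passed as a third type argument.
type PromptArgs = Omit<
    PromptScript,
    "text" | "id" | "jsSource" | "activation" | "cacheName"
>
```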
The pull request primarily involves the following changes:

1. Removed the `name` parameter from `getChatCompletionCache()`. Now it always uses `CHAT_CACHE`. The public API is simplified! 🧪 (TO CHECK: does this change affect any users?)
2. Modified the `OpenAIChatCompletion()` function to adapt to the change in `getChatCompletionCache()` and eliminated `cacheName` from its body. It has fewer parameters now, which may improve usage! 🪄
3. Added `logError(e)` in `runTemplate()`. If an error occurs, it no longer remains silent; instead it is logged! 🔍
4. Simplified the `PromptArgs` type in the public API by omitting `cacheName` from `PromptScript`. This makes the API less cluttered! 😺 (TO CHECK: does this change affect any users?)

These code changes streamline function calls, enhance error logging, and simplify the public API. All these modifications lead to more maintainable code and potentially improve the user experience with the API.

Overall, these changes aim to simplify the usage of the chat cache and improve error handling in the template runner. One potential functional issue: different parts of the code may have relied on creating separate chat caches by passing different names, which is no longer possible. In summary, LGTM 🚀, unless separate chat caches were needed in different parts of the code. If that's the case, I would suggest reconsidering the change to `getChatCompletionCache`.