cache #636

Closed
wants to merge 1 commit into from
Conversation

@pelikhan (Member) commented Aug 20, 2024

  • 🎯 Removed the name parameter from getChatCompletionCache(). It now always uses CHAT_CACHE, simplifying the public API! 🧪 (TO CHECK: does this change affect any users?)
  • 🔄 Updated the OpenAIChatCompletion() function to match the new getChatCompletionCache() signature and removed cacheName from its body. Fewer parameters may make it easier to use! 🪄
  • 📝 Added a call to logError(e) in runTemplate(). Errors no longer fail silently; they are logged! 🔍
  • 🛠 Adjusted the PromptArgs type in the public API to also omit cacheName from PromptScript. This makes the API less cluttered! 😺 (TO CHECK: does this change affect any users?)

Together, these changes streamline function calls, improve error logging, and simplify the public API, which should make the code more maintainable and the API easier to use.

generated by pr-describe
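For context, here is a minimal, self-contained sketch of the shape of this change. Only getChatCompletionCache and CHAT_CACHE appear in the PR; the Map-based cache and the getCache helper are illustrative assumptions, not the repository's implementation.

// Hypothetical stand-ins for the real cache machinery.
const CHAT_CACHE = "chat" // assumed name of the default cache
type Cache = Map<string, string>
const caches = new Map<string, Cache>()

function getCache(name: string): Cache {
    let c = caches.get(name)
    if (!c) caches.set(name, (c = new Map()))
    return c
}

// Before: an optional name selected among several caches.
// export function getChatCompletionCache(name?: string): Cache {
//     return getCache(name || CHAT_CACHE)
// }

// After: the parameter is gone; every caller shares CHAT_CACHE.
export function getChatCompletionCache(): Cache {
    return getCache(CHAT_CACHE)
}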

-export function getChatCompletionCache(
-    name?: string
-): ChatCompletationRequestCache {
+export function getChatCompletionCache(): ChatCompletationRequestCache {


The getChatCompletionCache function no longer accepts an argument. This could break any existing calls that pass one.

generated by pr-review-commit missing_argument
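For example, a hypothetical call site like the following would stop compiling (the cache name here is invented for illustration):

// Before the change this selected a dedicated cache:
const cache = getChatCompletionCache("my-agent-cache")
// After the change it is error TS2554: Expected 0 arguments, but got 1.
// Only the argument-free form remains valid:
const defaultCache = getChatCompletionCache()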

@@ -68,7 +67,7 @@ export const OpenAIChatCompletion: ChatCompletionHandler = async (
     const { model } = parseModelIdentifier(req.model)
     const encoder = await resolveTokenEncoder(model)

-    const cache = getChatCompletionCache(cacheName)
+    const cache = getChatCompletionCache()


getChatCompletionCache is now called without an argument. This could break functionality if callers relied on selecting different cache names.

generated by pr-review-commit missing_argument
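If preserving backward compatibility were a goal, one option (not taken in this PR, purely a sketch building on the hypothetical helpers above) would be to keep the old signature as a deprecated shim that ignores the name:

/** @deprecated name is ignored; the shared CHAT_CACHE is always used */
export function getChatCompletionCache(name?: string): Cache {
    if (name !== undefined && name !== CHAT_CACHE)
        console.warn(`cache name "${name}" is ignored; using the default cache`)
    return getCache(CHAT_CACHE)
}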

@@ -327,6 +327,7 @@ export async function runTemplate(
         if (oannotations) annotations = oannotations.slice(0)
     }
 } catch (e) {
+    logError(e)


An error is logged but not handled, which could lead to unexpected behavior or crashes. Consider throwing the error after logging it, or handling it appropriately.

generated by pr-review-commit error_handling
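A minimal sketch of the log-then-rethrow pattern the review suggests; whether rethrowing is right here depends on how runTemplate's callers treat failures:

try {
    // ... run the output processor over the generated output ...
} catch (e) {
    logError(e) // record the failure in the trace
    throw e     // and still propagate it so callers can react
}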

@pelikhan pelikhan closed this Aug 21, 2024
@pelikhan pelikhan reopened this Aug 21, 2024
@pelikhan pelikhan changed the title from "write trace to file as it comes" to "cache" Aug 21, 2024
@@ -68,7 +67,7 @@
     const { model } = parseModelIdentifier(req.model)
     const encoder = await resolveTokenEncoder(model)

-    const cache = getChatCompletionCache(cacheName)
+    const cache = getChatCompletionCache()


The call to getChatCompletionCache() no longer passes an argument. This is consistent with the new getChatCompletionCache signature, but it may change the behavior of OpenAIChatCompletion for requests that previously supplied a cacheName.

generated by pr-review-commit changed_function_call

@@ -618,7 +618,7 @@ interface ExpansionVariables {

 type MakeOptional<T, P extends keyof T> = Partial<Pick<T, P>> & Omit<T, P>

-type PromptArgs = Omit<PromptScript, "text" | "id" | "jsSource" | "activation">
+type PromptArgs = Omit<PromptScript, "text" | "id" | "jsSource" | "activation" | "cacheName">


The PromptArgs type now also omits the cacheName property from PromptScript. This could break existing code that relies on cacheName being a property of PromptArgs.

generated by pr-review-commit changed_type
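A small self-contained example of the updated type. The omitted keys match the PR; the PromptScript fields shown here are invented so the snippet compiles on its own:

interface PromptScript {
    text: string
    id: string
    jsSource: string
    activation?: boolean
    cacheName?: string
    model?: string
}

type PromptArgs = Omit<
    PromptScript,
    "text" | "id" | "jsSource" | "activation" | "cacheName"
>

const ok: PromptArgs = { model: "gpt-4" }
// const bad: PromptArgs = { cacheName: "x" } // error TS2353: unknown property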


The pull request primarily involves the following changes:

  1. The getChatCompletionCache function no longer takes any arguments. Instead of optionally using a name argument to select the cache, it always uses CHAT_CACHE.
  2. Correspondingly, calls to getChatCompletionCache in openai.ts and promptrunner.ts have been updated to not pass any arguments.
  3. The logError function is now used to log an error in runTemplate when the output processor fails.
  4. In prompt_template.d.ts, the type PromptArgs now also omits "cacheName".

These changes seem to be aimed at simplifying the usage of the chat cache and improving error handling in the template runner. As long as the CHAT_CACHE is correctly defined and accessible in the scope where getChatCompletionCache is defined, these changes are acceptable.

One potential functional issue: if different parts of the code relied on creating separate chat caches by calling getChatCompletionCache with different name arguments, they will now all share the same cache. If that sharing is undesired, it could lead to conflicts or bugs.

In summary, LGTM 🚀, unless separate chat caches were needed in different parts of the code. If that's the case, I would suggest reconsidering the change to getChatCompletionCache.

generated by pr-review
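To make the shared-cache concern concrete, here is a toy scenario built on the hypothetical Map-based sketch above (keys and values are invented): two templates that once wrote to differently named caches now share one store, so identical keys collide.

const shared = getChatCompletionCache()

// Template A caches a completion under a hash of its request.
shared.set("hash(messages)", "completion cached by template A")

// Template B computes the same hash and now receives A's entry
// instead of a miss against its own (formerly separate) cache.
const hit = shared.get("hash(messages)") // "completion cached by template A"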

@pelikhan pelikhan closed this Aug 21, 2024
@pelikhan pelikhan deleted the cache-tests branch September 5, 2024 12:32