
store model to vscode ml mappings #596

Merged
merged 8 commits into from
Jul 31, 2024
Conversation

pelikhan
Member

@pelikhan pelikhan commented Jul 30, 2024

The GIT_DIFF includes the following significant changes in the codebase:

  • Renaming: The term "Visual Studio Language Models" is renamed to "Visual Studio Language Chat Models", both in the token.md documentation and in several code files. The intention appears to be to clarify which kind of models are involved: chat models.

  • Cleanup: An unnecessary logVerbose call is removed from the NodeHost file.

  • Logging: The server.ts file now emits a more verbose log string that includes chat model details, providing better insight at runtime.

  • Enhancement: The settings.json file under the sample package gains a new property, "genaiscript.languageChatModels", which maps GenAIScript model identifiers (such as "openai:gpt-4") to Visual Studio Code Language Chat Models.

  • Configuration: The package.json for the vscode package declares the same new property, "genaiscript.languageChatModels".

  • The lmaccess file, which handles language model access, is substantially refactored, including changes to model selection and configuration.

  • The ExtensionState class in state.ts is updated to handle language chat models; the public API appears to remain unchanged.

generated by pr-describe
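Based on the description above, the new setting presumably looks something like the fragment below. The property name and the "openai:gpt-4" key are taken from the PR description; the mapped chat model identifier on the right-hand side is illustrative, not from the PR:

```json
{
  "genaiscript.languageChatModels": {
    "openai:gpt-4": "copilot-gpt-4"
  }
}
```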

@@ -157,7 +157,6 @@ export class NodeHost implements RuntimeHost {
tok.token = "Bearer " + this._azureToken
}
if (!tok && this.clientLanguageModel) {
logVerbose(`model: using client language model`)
return <LanguageModelConfiguration>{
model: modelId,
provider: this.clientLanguageModel.id,


The log statement logVerbose('model: using client language model') has been removed. This could lead to lack of debugging information when troubleshooting issues related to the client language model. Consider adding it back or replacing it with a more appropriate log statement. 😊

generated by pr-review-commit missing_log


The changes look good overall with improvements such as better log messages and updates to the configuration handling. However, I have one concern:

  1. In the file packages/vscode/src/state.ts, the method updateLanguageChatModels() updates the configuration directly. This might not be desirable depending on how often this method is called, as it can cause performance issues if the config is written to disk too frequently.

Suggested improvement:

+    private chatModelUpdates: Record<string, string> = {};

     async updateLanguageChatModels(model: string, chatModel: string) {
+        this.chatModelUpdates[model] = chatModel;
+    }

+    async flushChatModelUpdates() {
+        const config = vscode.workspace.getConfiguration(TOOL_ID);
+        const res = await this.languageChatModels();
+        Object.assign(res, this.chatModelUpdates);
+        await config.update("languageChatModels", res);
+        this.chatModelUpdates = {};
+    }

With this change, the config is not constantly written to disk. Instead, changes are stored in chatModelUpdates and only written to disk when flushChatModelUpdates() is called. Depending on your application's needs, you could call flushChatModelUpdates() at regular intervals or after a set number of updates to chatModelUpdates.
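The deferred-write idea in the suggestion can be sketched independently of the vscode API. The names below (ChatModelBuffer, load, save) are illustrative helpers invented for this sketch, not code from the PR:

```typescript
type ModelMap = Record<string, string>

// Minimal sketch of the batch-and-flush pattern from the review:
// updates accumulate in memory and are only persisted when flush()
// is called, instead of writing on every update.
class ChatModelBuffer {
    private pending: ModelMap = {}

    constructor(
        private readonly load: () => Promise<ModelMap>,
        private readonly save: (m: ModelMap) => Promise<void>
    ) {}

    // Record a mapping; later updates to the same model win.
    update(model: string, chatModel: string): void {
        this.pending[model] = chatModel
    }

    // Merge pending updates into the stored map and persist once.
    async flush(): Promise<void> {
        const current = await this.load()
        Object.assign(current, this.pending)
        await this.save(current)
        this.pending = {}
    }
}
```

In the extension, load/save would wrap `vscode.workspace.getConfiguration(TOOL_ID)` and `config.update("languageChatModels", …)`; here they are plain callbacks so the pattern stands alone.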

Other than this concern, LGTM 🚀

generated by pr-review

@@ -157,7 +157,6 @@ export class NodeHost implements RuntimeHost {
tok.token = "Bearer " + this._azureToken


The variable tok is assigned but its value is never used. Consider removing it if it's not needed. 🧹

generated by pr-review-commit unused_variable

GenAIScript will be able to leverage those [language models](https://code.visualstudio.com/api/extension-guides/language-model) as well.

This mode is useful to run your scripts without having a separate LLM provider or local LLMs. However, those models are not available from the command line
and have additional limitations and rate limiting defined by the GitHub Copilot platform.


The word "authorizion" should be corrected to "authorization".

generated by pr-docs-review-commit typo

If you have access to **GitHub Copilot in Visual Studio Code**,
GenAIScript will be able to leverage those [language models](https://code.visualstudio.com/api/extension-guides/language-model) as well.

This mode is useful to run your scripts without having a separate LLM provider or local LLMs. However, those models are not available from the command line


The phrase "those models are not available from the command line" should be corrected to "those models are not available from the command line".

generated by pr-docs-review-commit grammar


<ol>

<li>run your script</li>


The phrase "run your script" should be in imperative mood, suggesting "Run your script".

generated by pr-docs-review-commit grammar

@@ -10,7 +10,7 @@ GenAIScript will try to find the connection token from various sources:

- a `.env` file in the root of your project (VSCode and CLI)
- environment variables, typically within your CI/CD environment (CLI only)
- Visual Studio Language Models (VSCode only)
- Visual Studio Language Chat Models (VSCode only)


The phrase "Visual Studio Language Chat Models" should be corrected to "Visual Studio Code Language Models" for consistency and accuracy.

generated by pr-docs-review-commit typo

else res[model] = chatModel
const config = vscode.workspace.getConfiguration(TOOL_ID)
await config.update("languageChatModels", res)
}


There is no error handling for the async function updateLanguageChatModels. If an error occurs during the execution of this function, it might lead to unexpected behavior. Consider adding a try-catch block. 🐞

generated by pr-review-commit missing_error_handling
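One generic way to address the missing error handling, sketched here with a hypothetical helper that is not part of the PR, is to wrap the awaited update so failures are logged instead of escaping as unhandled rejections:

```typescript
// Hypothetical helper: runs an async operation and routes any
// failure to an error handler, returning undefined on failure
// rather than rethrowing.
async function tryUpdate<T>(
    op: () => Promise<T>,
    onError: (e: unknown) => void
): Promise<T | undefined> {
    try {
        return await op()
    } catch (e) {
        onError(e)
        return undefined
    }
}
```

updateLanguageChatModels could then call something like `tryUpdate(() => config.update("languageChatModels", res), logError)` so a failed configuration write is reported rather than silently breaking the caller.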

@pelikhan pelikhan merged commit 0a7ddf4 into main Jul 31, 2024
9 checks passed
@pelikhan pelikhan deleted the lmconfig branch July 31, 2024 06:59