store model to vscode ml mappings #596
Conversation
```diff
@@ -157,7 +157,6 @@ export class NodeHost implements RuntimeHost {
             tok.token = "Bearer " + this._azureToken
         }
         if (!tok && this.clientLanguageModel) {
-            logVerbose(`model: using client language model`)
             return <LanguageModelConfiguration>{
                 model: modelId,
                 provider: this.clientLanguageModel.id,
```
The log statement `logVerbose(`model: using client language model`)` has been removed. This could lead to a lack of debugging information when troubleshooting issues related to the client language model. Consider adding it back or replacing it with a more appropriate log statement. 😊

generated by pr-review-commit
missing_log
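For reference, a minimal sketch of what restoring the call could look like, based only on the lines visible in the diff above (the `LanguageModelConfiguration` object is truncated in the diff, so its remaining fields are elided here):

```ts
if (!tok && this.clientLanguageModel) {
    // restore the verbose trace before falling back to the client language model
    logVerbose(`model: using client language model`)
    return <LanguageModelConfiguration>{
        model: modelId,
        provider: this.clientLanguageModel.id,
        // ...remaining fields as in the original source
    }
}
```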
The changes look good overall, with improvements such as better log messages and updates to the configuration handling. However, I have one concern:

Suggested improvement:

```diff
+    private chatModelUpdates: Record<string, string> = {};

     async updateLanguageChatModels(model: string, chatModel: string) {
+        this.chatModelUpdates[model] = chatModel;
+    }

+    async flushChatModelUpdates() {
+        const config = vscode.workspace.getConfiguration(TOOL_ID);
+        const res = await this.languageChatModels();
+        Object.assign(res, this.chatModelUpdates);
+        await config.update("languageChatModels", res);
+        this.chatModelUpdates = {};
+    }
```

With this change, the config is not constantly written to disk. Instead, changes are stored in `chatModelUpdates` and only persisted when `flushChatModelUpdates()` is called.

Other than this concern, LGTM 🚀
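A hypothetical way to trigger the flush, purely for illustration (the `deactivate` hook and the `host` reference below are assumptions, not part of this PR):

```ts
// Hypothetical call site: write any batched model mappings once, when the
// extension shuts down, instead of on every individual update.
export async function deactivate() {
    await host.flushChatModelUpdates()
}
```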
```diff
@@ -157,7 +157,6 @@ export class NodeHost implements RuntimeHost {
             tok.token = "Bearer " + this._azureToken
```
The variable `tok` is assigned but its value is never used. Consider removing it if it's not needed. 🧹

generated by pr-review-commit
unused_variable
GenAIScript will be able to leverage those [language models](https://code.visualstudio.com/api/extension-guides/language-model) as well.

This mode is useful to run your scripts without having a separate LLM provider or local LLMs. However, those models are not available from the command line and have additional limitations and rate limiting defined by the GitHub Copilot platform.
The word "authorizion" should be corrected to "authorization".
generated by pr-docs-review-commit
typo
If you have access to **GitHub Copilot in Visual Studio Code**,
GenAIScript will be able to leverage those [language models](https://code.visualstudio.com/api/extension-guides/language-model) as well.

This mode is useful to run your scripts without having a separate LLM provider or local LLMs. However, those models are not available from the command line
The phrase "those models are not available from the command line" should be corrected to "those models are not available from the command line".
generated by pr-docs-review-commit
grammar
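For context, the docs excerpt above refers to the VS Code language model API. A minimal sketch of how an extension can reach a Copilot chat model through that API (the vendor filter and the prompt below are illustrative, not taken from this PR):

```ts
import * as vscode from "vscode"

// Minimal sketch: select a Copilot-provided chat model and stream a reply.
// Availability depends on the user's GitHub Copilot access and consent.
async function askCopilot(prompt: string, token: vscode.CancellationToken) {
    const [model] = await vscode.lm.selectChatModels({ vendor: "copilot" })
    if (!model) return undefined
    const response = await model.sendRequest(
        [vscode.LanguageModelChatMessage.User(prompt)],
        {},
        token
    )
    let text = ""
    for await (const chunk of response.text) text += chunk
    return text
}
```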
```html
<ol>

<li>run your script</li>
```
The phrase "run your script" should be in imperative mood, suggesting "Run your script".
generated by pr-docs-review-commit
grammar
```diff
@@ -10,7 +10,7 @@ GenAIScript will try to find the connection token from various sources:

 - a `.env` file in the root of your project (VSCode and CLI)
 - environment variables, typically within your CI/CD environment (CLI only)
-- Visual Studio Language Models (VSCode only)
+- Visual Studio Language Chat Models (VSCode only)
```
The phrase "Visual Studio Language Chat Models" should be corrected to "Visual Studio Code Language Models" for consistency and accuracy.
generated by pr-docs-review-commit
typo
```ts
        else res[model] = chatModel
        const config = vscode.workspace.getConfiguration(TOOL_ID)
        await config.update("languageChatModels", res)
    }
```
There is no error handling for the async function `updateLanguageChatModels`. If an error occurs during the execution of this function, it might lead to unexpected behavior. Consider adding a try-catch block. 🐞

generated by pr-review-commit
missing_error_handling
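A minimal sketch of such a guard, reconstructed from the fragment shown above. Only the `else` branch of the update logic is visible in the diff, so the body here is partly assumed, and `logVerbose` is borrowed from the core package as an assumed logging helper:

```ts
async updateLanguageChatModels(model: string, chatModel: string) {
    try {
        const res = await this.languageChatModels()
        // ...update logic as in the diff, ending with:
        res[model] = chatModel
        const config = vscode.workspace.getConfiguration(TOOL_ID)
        await config.update("languageChatModels", res)
    } catch (e) {
        // surface the failure instead of letting the promise reject unhandled
        logVerbose(`failed to update languageChatModels: ${e}`)
    }
}
```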
The GIT_DIFF includes the following significant changes in the codebase:

- Renaming: The term "Visual Studio Language Models" is changed to "Visual Studio Language Chat Models", both in the token.md documentation and in several code files, apparently to clarify which kind of models are involved: chat models.
- Cleanup: An unnecessary logVerbose call is removed from the NodeHost file, trimming its verbose logging.
- Logging: A more verbose log string is added in server.ts; it now outputs chat model details to provide better insight whenever it runs.
- Enhancement: A major update to settings.json under the sample package adds a new property, "genaiscript.languageChatModels", which maps GenAIScript models (like "openai:gpt-4") to Visual Studio Code Language Chat Models; see the sketch after this list.
- Configuration: The package.json of the vscode package declares the same new property, "genaiscript.languageChatModels".
- Refactoring: The lmaccess file, which deals with language models, has been substantially refactored, including changes to model selection and configuration.
- State: The ExtensionState class in state.ts is updated to handle language chat models; however, the public API appears to remain untouched.
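For illustration, a sketch of how the extension side might read that mapping. The configuration section name is inferred from the "genaiscript.languageChatModels" property above, and the example model ids in the comment are assumptions, not values taken from this PR:

```ts
import * as vscode from "vscode"

// Hypothetical settings.json entry the mapping could hold, e.g.:
//   "genaiscript.languageChatModels": { "openai:gpt-4": "gpt-4o" }
// where the right-hand side names a VS Code language chat model (illustrative).
const TOOL_ID = "genaiscript"

function readLanguageChatModels(): Record<string, string> {
    const config = vscode.workspace.getConfiguration(TOOL_ID)
    return config.get<Record<string, string>>("languageChatModels") ?? {}
}
```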