store model to vscode ml mappings #596

Merged: merged 8 commits on Jul 31, 2024
Changes from 7 commits
Binary file added docs/src/assets/vscode-language-models.png
28 changes: 28 additions & 0 deletions docs/src/content/docs/getting-started/configuration.mdx
@@ -269,6 +269,34 @@ script({

</Steps>

## GitHub Copilot Models

If you have access to **GitHub Copilot in Visual Studio Code**,
GenAIScript will be able to leverage those [language models](https://code.visualstudio.com/api/extension-guides/language-model) as well.

This mode is useful to run your scripts without having a separate LLM provider or local LLMs. However, those models are not available from the command line
and have additional limitations and rate limiting defined by the GitHub Copilot platform.

The phrase "those models are not available from the command line" should be corrected to "those models are not available from the command line".
generated by pr-docs-review-commit grammar

The word "authorizion" should be corrected to "authorization".
generated by pr-docs-review-commit typo


There is no configuration needed as long as you have GitHub Copilot installed and configured in Visual Studio Code.

<Steps>

<ol>

<li>run your script</li>


The phrase "run your script" should be in imperative mood, suggesting "Run your script".

generated by pr-docs-review-commit grammar

<li>
select the **Visual Studio Code Language Chat Models** option when configuring the model
</li>
<li>
select the best chat model that matches the one you have in your script
</li>

</ol>

</Steps>

The mapping of GenAIScript model names to Visual Studio Code language chat models is stored in the settings.
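As an illustration of that sentence, the stored mapping might look like this in `settings.json`. This is only a sketch: the right-hand identifier is a hypothetical chat-model id, since the actual ids are assigned by the installed Copilot extension at runtime.

```json
{
  "genaiscript.languageChatModels": {
    "openai:gpt-4": "github.copilot-chat-gpt-4"
  }
}
```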

## Local Models

There are many projects that allow you to run models locally on your machine,
2 changes: 1 addition & 1 deletion docs/src/content/docs/reference/token.md
@@ -10,7 +10,7 @@ GenAIScript will try to find the connection token from various sources:

- a `.env` file in the root of your project (VSCode and CLI)
- environment variables, typically within your CI/CD environment (CLI only)
- Visual Studio Language Models (VSCode only)
- Visual Studio Language Chat Models (VSCode only)

The phrase "Visual Studio Language Chat Models" should be corrected to "Visual Studio Code Language Models" for consistency and accuracy.

generated by pr-docs-review-commit typo
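To make the first source in the list above concrete, a minimal `.env` file might contain a single provider key. This is a hedged sketch: the variable name follows the common OpenAI convention and the value is a placeholder.

```text
OPENAI_API_KEY=sk-your-key-here
```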


## .env file or process environment

1 change: 0 additions & 1 deletion packages/cli/src/nodehost.ts
@@ -157,7 +157,6 @@ export class NodeHost implements RuntimeHost {
tok.token = "Bearer " + this._azureToken


The variable tok is assigned but its value is never used. Consider removing it if it's not needed. 🧹

generated by pr-review-commit unused_variable

}
if (!tok && this.clientLanguageModel) {
logVerbose(`model: using client language model`)
return <LanguageModelConfiguration>{
model: modelId,
provider: this.clientLanguageModel.id,


The log statement logVerbose('model: using client language model') has been removed. This could lead to lack of debugging information when troubleshooting issues related to the client language model. Consider adding it back or replacing it with a more appropriate log statement. 😊

generated by pr-review-commit missing_log

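The control flow in this hunk — falling back to the client language model only when no token was resolved — can be sketched as a standalone function. The type names and the `"env"` provider label below are simplified stand-ins, not the actual GenAIScript definitions:

```typescript
// Simplified stand-ins for the real GenAIScript types.
interface LanguageModelConfiguration {
  model: string;
  provider: string;
  token?: string;
}

interface ClientLanguageModel {
  id: string;
}

// Resolve a connection: prefer an explicit token, otherwise fall back
// to the client (VS Code) language model when one is registered.
function resolveConfiguration(
  modelId: string,
  token: string | undefined,
  clientLanguageModel: ClientLanguageModel | undefined
): LanguageModelConfiguration | undefined {
  if (token) return { model: modelId, provider: "env", token };
  if (clientLanguageModel)
    return { model: modelId, provider: clientLanguageModel.id };
  return undefined; // no connection information available
}
```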
5 changes: 3 additions & 2 deletions packages/cli/src/server.ts
@@ -107,8 +107,9 @@ export async function startServer(options: { port: string }) {
// add handler
const chatId = randomHex(6)
chats[chatId] = async (chunk) => {
if (!responseSoFar) {
trace.itemValue("model", chunk.model)
if (!responseSoFar && chunk.model) {
logVerbose(`visual studio: chat model ${chunk.model}`)
trace.itemValue("chat model", chunk.model)
trace.appendContent("\n\n")
}
trace.appendToken(chunk.chunk)
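The edit above tightens the first-chunk guard so the model name is only traced when the chunk actually carries one. The pattern in isolation — with `Chunk` and the trace sink as simplified stand-ins for the real server types — might look like:

```typescript
interface Chunk {
  model?: string;
  chunk: string;
}

// Returns a chat handler that traces the model name at most once,
// and only when the first chunk actually reports a model.
function makeChatHandler(log: string[]) {
  let responseSoFar = "";
  return (chunk: Chunk) => {
    if (!responseSoFar && chunk.model)
      log.push(`chat model ${chunk.model}`);
    responseSoFar += chunk.chunk; // accumulate the streamed response
  };
}
```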
4 changes: 3 additions & 1 deletion packages/sample/.vscode/settings.json
@@ -4,5 +4,7 @@
"openai",
"outputfilename"
],
"genaiscript.cli.path": "../cli/built/genaiscript.cjs"
"genaiscript.cli.path": "../cli/built/genaiscript.cjs",
"genaiscript.languageChatModels": {
}
}
4 changes: 4 additions & 0 deletions packages/vscode/package.json
@@ -234,6 +234,10 @@
{
"title": "GenAIScript",
"properties": {
"genaiscript.languageChatModels": {
"type": "object",
"description": "Mapping from GenAIScript model (openai:gpt-4) to Visual Studio Code Language Chat Model (github...)"
},
"genaiscript.diagnostics": {
"type": "boolean",
"default": false,
19 changes: 9 additions & 10 deletions packages/vscode/src/lmaccess.ts
@@ -1,8 +1,6 @@
/* eslint-disable @typescript-eslint/naming-convention */
import * as vscode from "vscode"
import { AIRequestOptions, ExtensionState } from "./state"
import { isApiProposalEnabled } from "./proposals"
import { LanguageModel } from "../../core/src/chat"
import { ExtensionState } from "./state"
import {
MODEL_PROVIDER_OLLAMA,
MODEL_PROVIDER_LLAMAFILE,
@@ -34,7 +32,8 @@ async function generateLanguageModelConfiguration(
return { provider }
}

if (Object.keys(state.languageChatModels).length)
const languageChatModels = await state.languageChatModels()
if (Object.keys(languageChatModels).length)
return { provider: MODEL_PROVIDER_CLIENT, model: "*" }

const items: (vscode.QuickPickItem & {
@@ -46,8 +45,8 @@ async function generateLanguageModelConfiguration(
const models = await vscode.lm.selectChatModels()
if (models.length)
items.push({
label: "Visual Studio Language Models",
detail: `Use a registered Language Model (e.g. GitHub Copilot).`,
label: "Visual Studio Language Chat Models",
detail: `Use a registered LLM such as GitHub Copilot.`,
model: "*",
provider: MODEL_PROVIDER_CLIENT,
})
@@ -104,8 +103,8 @@ async function pickChatModel(
model: string
): Promise<vscode.LanguageModelChat> {
const chatModels = await vscode.lm.selectChatModels()

const chatModelId = state.languageChatModels[model]
const languageChatModels = await state.languageChatModels()
const chatModelId = languageChatModels[model]
let chatModel = chatModelId && chatModels.find((m) => m.id === chatModelId)
if (!chatModel) {
const items: (vscode.QuickPickItem & {
@@ -117,10 +116,10 @@
chatModel,
}))
const res = await vscode.window.showQuickPick(items, {
title: `Pick a Chat Model for ${model}`,
title: `Pick a Language Chat Model for ${model}`,
})
chatModel = res?.chatModel
if (chatModel) state.languageChatModels[model] = chatModel.id
if (chatModel) await state.updateLanguageChatModels(model, chatModel.id)
}
return chatModel
}
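The lookup in `pickChatModel` — reuse the persisted mapping when it still points at an available model, otherwise ask the user and remember the answer — can be sketched without the `vscode` API by injecting the picker. The names here are simplified assumptions, not the extension's real signatures:

```typescript
interface ChatModel {
  id: string;
}

// Resolve a chat model for `model`: use the persisted mapping when it
// still matches an available model, otherwise ask `pick` and persist
// the answer back into the mapping.
async function resolveChatModel(
  model: string,
  available: ChatModel[],
  mapping: Record<string, string>,
  pick: (candidates: ChatModel[]) => Promise<ChatModel | undefined>
): Promise<ChatModel | undefined> {
  const mappedId = mapping[model];
  let chatModel = mappedId
    ? available.find((m) => m.id === mappedId)
    : undefined;
  if (!chatModel) {
    chatModel = await pick(available);
    if (chatModel) mapping[model] = chatModel.id; // remember the choice
  }
  return chatModel;
}
```

Injecting `pick` keeps the quick-pick UI out of the decision logic, which is what makes the mapping behavior testable.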
2 changes: 1 addition & 1 deletion packages/vscode/src/servermanager.ts
@@ -39,7 +39,7 @@ export class TerminalServerManager implements ServerManager {
)
subscriptions.push(
vscode.workspace.onDidChangeConfiguration((e) => {
if (e.affectsConfiguration(TOOL_ID)) this.close()
if (e.affectsConfiguration(TOOL_ID + ".cli")) this.close()
})
)

31 changes: 20 additions & 11 deletions packages/vscode/src/state.ts
@@ -107,8 +107,6 @@ export class ExtensionState extends EventTarget {
AIRequestSnapshot
> = undefined
readonly output: vscode.LogOutputChannel
// modelid -> vscode language mode id
languageChatModels: Record<string, string> = {}

constructor(public readonly context: ExtensionContext) {
super()
@@ -138,15 +136,26 @@ subscriptions
subscriptions
)
)
if (
typeof vscode.lm !== "undefined" &&
typeof vscode.lm.onDidChangeChatModels === "function"
)
subscriptions.push(
vscode.lm.onDidChangeChatModels(
() => (this.languageChatModels = {})
)
)
}

async updateLanguageChatModels(model: string, chatModel: string) {
const res = await this.languageChatModels()
if (res[model] !== chatModel) {
if (chatModel === undefined) delete res[model]
else res[model] = chatModel
const config = vscode.workspace.getConfiguration(TOOL_ID)
await config.update("languageChatModels", res)
}


There is no error handling for the async function updateLanguageChatModels. If an error occurs during the execution of this function, it might lead to unexpected behavior. Consider adding a try-catch block. 🐞

generated by pr-review-commit missing_error_handling

}

async languageChatModels() {
const config = vscode.workspace.getConfiguration(TOOL_ID)
const res =
((await config.get("languageChatModels")) as Record<
string,
string
>) || {}
return res
}

private async saveScripts() {
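The two new `ExtensionState` methods are a read-modify-write over a single workspace setting: delete the entry when the new value is `undefined`, write it otherwise, and only persist when something actually changed. A minimal sketch against a stand-in for `vscode.workspace.getConfiguration` (the `ConfigStore` interface is an assumption, not the real VS Code type):

```typescript
// Minimal stand-in for the slice of WorkspaceConfiguration we use.
interface ConfigStore {
  get(key: string): Promise<Record<string, string> | undefined>;
  update(key: string, value: Record<string, string>): Promise<void>;
}

// Read the persisted model -> chat-model mapping, defaulting to empty.
async function languageChatModels(config: ConfigStore) {
  return (await config.get("languageChatModels")) || {};
}

// Update one entry; undefined deletes it; skip the write when unchanged.
async function updateLanguageChatModels(
  config: ConfigStore,
  model: string,
  chatModel: string | undefined
) {
  const res = await languageChatModels(config);
  if (res[model] !== chatModel) {
    if (chatModel === undefined) delete res[model];
    else res[model] = chatModel;
    await config.update("languageChatModels", res);
  }
}
```

Persisting the mapping in settings — rather than the in-memory field this PR removes — is what lets the model choice survive window reloads.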