store model to vscode ml mappings (#596)
* store vscode language mapping in project configuration

* clean imports

* added docs

* add docs

* fix

* lazy update

* review

* more docs
pelikhan authored Jul 31, 2024
1 parent 9ede8b6 commit 0a7ddf4
Showing 18 changed files with 85 additions and 34 deletions.
Binary file removed docs/src/assets/vscode-insiders.png
Binary file not shown.
1 change: 0 additions & 1 deletion docs/src/assets/vscode-insiders.png.txt

This file was deleted.

Binary file added docs/src/assets/vscode-language-models-select.png
1 change: 1 addition & 0 deletions docs/src/assets/vscode-language-models-select.png.txt
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
A dropdown menu titled 'Pick a Language Chat Model for openai:gpt-4' with several options including 'GPT 3.5 Turbo', 'GPT 4', 'GPT 4 Turbo (2024-01-25 Preview)', and 'GPT 4o (2024-05-13)', with 'GPT 3.5 Turbo' currently highlighted.
Binary file added docs/src/assets/vscode-language-models.png
1 change: 1 addition & 0 deletions docs/src/assets/vscode-language-models.png.txt
@@ -0,0 +1 @@
Screenshot of a Visual Studio Code interface showing a dropdown menu with options to configure a language model for OpenAI's GPT-4, including an option for Visual Studio Language Chat Models and using a registered LLM such as GitHub Copilot.
Binary file removed docs/src/assets/vscode-select-llm.png
Binary file not shown.
1 change: 0 additions & 1 deletion docs/src/assets/vscode-select-llm.png.txt

This file was deleted.

44 changes: 40 additions & 4 deletions docs/src/content/docs/getting-started/configuration.mdx
@@ -11,11 +11,11 @@ import { Steps } from "@astrojs/starlight/components"
import { Tabs, TabItem } from "@astrojs/starlight/components"
import { Image } from "astro:assets"

import insidersSrc from "../../../assets/vscode-insiders.png"
import insidersAlt from "../../../assets/vscode-insiders.png.txt?raw"
import lmSrc from "../../../assets/vscode-language-models.png"
import lmAlt from "../../../assets/vscode-language-models.png.txt?raw"

import selectLLMSrc from "../../../assets/vscode-select-llm.png"
import selectLLMAlt from "../../../assets/vscode-select-llm.png.txt?raw"
import lmSelectSrc from "../../../assets/vscode-language-models-select.png"
import lmSelectAlt from "../../../assets/vscode-language-models-select.png.txt?raw"

You will need to configure the LLM connection and authorization secrets.

@@ -269,6 +269,42 @@ script({

</Steps>

## GitHub Copilot Models

If you have access to **GitHub Copilot in Visual Studio Code**,
GenAIScript will be able to leverage those [language models](https://code.visualstudio.com/api/extension-guides/language-model) as well.

This mode is useful for running your scripts without a separate LLM provider or local LLMs. However, those models are not available from the command line
and are subject to additional limitations and rate limits defined by the GitHub Copilot platform.

There is no configuration needed as long as you have GitHub Copilot installed and configured in Visual Studio Code.

<Steps>

<ol>

<li>run your script</li>
<li>
select the **Visual Studio Code Language Chat Models** option when configuring the model

<Image src={lmSrc} alt={lmAlt} loading="lazy" />

(This step is skipped if you already have mappings in your settings)

</li>
<li>
select the best chat model that matches the one you have in your script

<Image src={lmSelectSrc} alt={lmSelectAlt} loading="lazy" />

</li>

</ol>

</Steps>

The mapping of GenAIScript model names to Visual Studio Code Language Chat Models is stored in the Visual Studio Code settings.
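For reference, the stored mapping is a plain object in `settings.json`, keyed by GenAIScript model id; the entry below mirrors the one added to the sample workspace in this commit:

```json
{
    "genaiscript.languageChatModels": {
        "openai:gpt-4": "github.copilot-chat/4/gpt-4o-2024-05-13"
    }
}
```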

## Local Models

There are many projects that allow you to run models locally on your machine,
2 changes: 1 addition & 1 deletion docs/src/content/docs/reference/token.md
@@ -10,7 +10,7 @@ GenAIScript will try to find the connection token from various sources:

- a `.env` file in the root of your project (VSCode and CLI)
- environment variables, typically within your CI/CD environment (CLI only)
- Visual Studio Language Models (VSCode only)
- Visual Studio Language Chat Models (VSCode only)

## .env file or process environment

2 changes: 1 addition & 1 deletion package.json
@@ -59,7 +59,7 @@
"gen:licenses": "npx --yes generate-license-file --input ./package.json --output ./THIRD_PARTY_LICENSES.md --overwrite",
"genai:technical": "cd docs && yarn genai:technical",
"genai:frontmatter": "cd docs && yarn genai:frontmatter",
"genai:alt": "cd docs && yarn genai:alt",
"genai:alt": "cd docs && yarn genai:alt-text",
"genai:test": "node packages/cli/built/genaiscript.cjs run test-gen"
},
"release-it": {
1 change: 0 additions & 1 deletion packages/cli/src/nodehost.ts
@@ -157,7 +157,6 @@ export class NodeHost implements RuntimeHost {
tok.token = "Bearer " + this._azureToken
}
if (!tok && this.clientLanguageModel) {
logVerbose(`model: using client language model`)
return <LanguageModelConfiguration>{
model: modelId,
provider: this.clientLanguageModel.id,
5 changes: 3 additions & 2 deletions packages/cli/src/server.ts
@@ -107,8 +107,9 @@ export async function startServer(options: { port: string }) {
// add handler
const chatId = randomHex(6)
chats[chatId] = async (chunk) => {
if (!responseSoFar) {
trace.itemValue("model", chunk.model)
if (!responseSoFar && chunk.model) {
logVerbose(`visual studio: chat model ${chunk.model}`)
trace.itemValue("chat model", chunk.model)
trace.appendContent("\n\n")
}
trace.appendToken(chunk.chunk)
5 changes: 4 additions & 1 deletion packages/sample/.vscode/settings.json
@@ -4,5 +4,8 @@
"openai",
"outputfilename"
],
"genaiscript.cli.path": "../cli/built/genaiscript.cjs"
"genaiscript.cli.path": "../cli/built/genaiscript.cjs",
"genaiscript.languageChatModels": {
"openai:gpt-4": "github.copilot-chat/4/gpt-4o-2024-05-13"
}
}
4 changes: 4 additions & 0 deletions packages/vscode/package.json
@@ -234,6 +234,10 @@
{
"title": "GenAIScript",
"properties": {
"genaiscript.languageChatModels": {
"type": "object",
"description": "Mapping from GenAIScript model (openai:gpt-4) to Visual Studio Code Language Chat Model (github...)"
},
"genaiscript.diagnostics": {
"type": "boolean",
"default": false,
19 changes: 9 additions & 10 deletions packages/vscode/src/lmaccess.ts
@@ -1,8 +1,6 @@
/* eslint-disable @typescript-eslint/naming-convention */
import * as vscode from "vscode"
import { AIRequestOptions, ExtensionState } from "./state"
import { isApiProposalEnabled } from "./proposals"
import { LanguageModel } from "../../core/src/chat"
import { ExtensionState } from "./state"
import {
MODEL_PROVIDER_OLLAMA,
MODEL_PROVIDER_LLAMAFILE,
@@ -34,7 +32,8 @@ async function generateLanguageModelConfiguration(
return { provider }
}

if (Object.keys(state.languageChatModels).length)
const languageChatModels = await state.languageChatModels()
if (Object.keys(languageChatModels).length)
return { provider: MODEL_PROVIDER_CLIENT, model: "*" }

const items: (vscode.QuickPickItem & {
@@ -46,8 +45,8 @@
const models = await vscode.lm.selectChatModels()
if (models.length)
items.push({
label: "Visual Studio Language Models",
detail: `Use a registered Language Model (e.g. GitHub Copilot).`,
label: "Visual Studio Language Chat Models",
detail: `Use a registered LLM such as GitHub Copilot.`,
model: "*",
provider: MODEL_PROVIDER_CLIENT,
})
@@ -104,8 +103,8 @@ async function pickChatModel(
model: string
): Promise<vscode.LanguageModelChat> {
const chatModels = await vscode.lm.selectChatModels()

const chatModelId = state.languageChatModels[model]
const languageChatModels = await state.languageChatModels()
const chatModelId = languageChatModels[model]
let chatModel = chatModelId && chatModels.find((m) => m.id === chatModelId)
if (!chatModel) {
const items: (vscode.QuickPickItem & {
@@ -117,10 +116,10 @@
chatModel,
}))
const res = await vscode.window.showQuickPick(items, {
title: `Pick a Chat Model for ${model}`,
title: `Pick a Language Chat Model for ${model}`,
})
chatModel = res?.chatModel
if (chatModel) state.languageChatModels[model] = chatModel.id
if (chatModel) await state.updateLanguageChatModels(model, chatModel.id)
}
return chatModel
}
2 changes: 1 addition & 1 deletion packages/vscode/src/servermanager.ts
@@ -39,7 +39,7 @@ export class TerminalServerManager implements ServerManager {
)
subscriptions.push(
vscode.workspace.onDidChangeConfiguration((e) => {
if (e.affectsConfiguration(TOOL_ID)) this.close()
if (e.affectsConfiguration(TOOL_ID + ".cli")) this.close()
})
)

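The narrowed check above restarts the server only when a setting under `genaiscript.cli` changes, so edits to `genaiscript.languageChatModels` no longer force a restart. A minimal sketch of the intended semantics (the helper names and the standalone re-implementation of `affectsConfiguration` are hypothetical, for illustration only):

```typescript
const TOOL_ID = "genaiscript"

// Hypothetical stand-in for vscode's ConfigurationChangeEvent.affectsConfiguration:
// a change to setting "a.b.c" affects the sections "a", "a.b" and "a.b.c".
function affects(changedSetting: string, querySection: string): boolean {
    return (
        changedSetting === querySection ||
        changedSetting.startsWith(querySection + ".")
    )
}

// Only CLI-related settings should force a server restart.
function shouldRestartServer(changedSetting: string): boolean {
    return affects(changedSetting, TOOL_ID + ".cli")
}

console.log(shouldRestartServer("genaiscript.cli.path")) // true
console.log(shouldRestartServer("genaiscript.languageChatModels")) // false
```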
31 changes: 20 additions & 11 deletions packages/vscode/src/state.ts
@@ -107,8 +107,6 @@ export class ExtensionState extends EventTarget {
AIRequestSnapshot
> = undefined
readonly output: vscode.LogOutputChannel
// modelid -> vscode language mode id
languageChatModels: Record<string, string> = {}

constructor(public readonly context: ExtensionContext) {
super()
@@ -138,15 +136,26 @@
subscriptions
)
)
if (
typeof vscode.lm !== "undefined" &&
typeof vscode.lm.onDidChangeChatModels === "function"
)
subscriptions.push(
vscode.lm.onDidChangeChatModels(
() => (this.languageChatModels = {})
)
)
}

async updateLanguageChatModels(model: string, chatModel: string) {
const res = await this.languageChatModels()
if (res[model] !== chatModel) {
if (chatModel === undefined) delete res[model]
else res[model] = chatModel
const config = vscode.workspace.getConfiguration(TOOL_ID)
await config.update("languageChatModels", res)
}
}

async languageChatModels() {
const config = vscode.workspace.getConfiguration(TOOL_ID)
const res =
((await config.get("languageChatModels")) as Record<
string,
string
>) || {}
return res
}

private async saveScripts() {
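The commit replaces the in-memory `languageChatModels` map with a lazy read-modify-write against the workspace configuration. A self-contained sketch of that pattern (the `SettingsStore` class is a hypothetical stand-in; the real code reads and writes through `vscode.workspace.getConfiguration`):

```typescript
type ModelMap = Record<string, string>

// Hypothetical in-memory stand-in for the VS Code configuration store.
class SettingsStore {
    private data: ModelMap = {}
    async get(): Promise<ModelMap> {
        // Return a copy, like reading a fresh value from settings.
        return { ...this.data }
    }
    async update(next: ModelMap): Promise<void> {
        this.data = { ...next }
    }
}

// Read the mapping on demand and write back only when an entry actually
// changes, mirroring updateLanguageChatModels in the diff above.
async function updateLanguageChatModels(
    store: SettingsStore,
    model: string,
    chatModel: string | undefined
): Promise<void> {
    const map = await store.get()
    if (map[model] !== chatModel) {
        if (chatModel === undefined) delete map[model]
        else map[model] = chatModel
        await store.update(map)
    }
}
```

Passing `undefined` deletes an entry, and skipping no-op writes avoids rewriting the settings file on every model pick.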
