Add basic support for Google AI models #868

Merged 4 commits on Nov 17, 2024
Changes from 1 commit
69 changes: 61 additions & 8 deletions docs/src/content/docs/getting-started/configuration.mdx
@@ -20,7 +20,6 @@ import lmSelectAlt from "../../../assets/vscode-language-models-select.png.txt?r
import oaiModelsSrc from "../../../assets/openai-model-names.png"
import oaiModelsAlt from "../../../assets/openai-model-names.png.txt?raw"


You will need to configure the LLM connection and authorization secrets.

:::tip
@@ -148,26 +147,25 @@ envFile: ~/.env.genaiscript

### No .env file

If you do not want to use a `.env` file, make sure to populate the environment variables
If you do not want to use a `.env` file, make sure to populate the environment variables
of the genaiscript process with the configuration values.

Here are some common examples:

- Using bash syntax
- Using bash syntax

```sh
OPENAI_API_KEY="value" npx --yes genaiscript run ...
```

- GitHub Action configuration
- GitHub Action configuration

```yaml title=".github/workflows/genaiscript.yml"
run: npx --yes genaiscript run ...
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: npx --yes genaiscript run ...
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```


## OpenAI

This provider, `openai`, is the OpenAI chat model provider.
@@ -685,6 +683,61 @@ model3=key3
"
```

## Google AI

The `google` provider allows you to use Google AI models. It gives you access to the Gemini family of models.

:::note

GenAIScript uses the [OpenAI compatibility](https://ai.google.dev/gemini-api/docs/openai) layer of Google AI,
so some [limitations](https://ai.google.dev/gemini-api/docs/openai#current-limitations) apply.

:::

<Steps>

<ol>

<li>

Open [Google AI Studio](https://aistudio.google.com/app/apikey) and create a new API key.

</li>

<li>

Update the `.env` file with the API key.

```txt title=".env"
GOOGLE_API_KEY=...
```

</li>

<li>

Open [Google AI Studio](https://aistudio.google.com/prompts/new_chat) and select the model you want, then select **Get code** to extract the model identifier.

```py "gemini-1.5-pro-002"
...
const model = genAI.getGenerativeModel({
model: "gemini-1.5-pro-002",
});
...
```

then use the model identifier in your script.

```js "gemini-1.5-pro-002"
script({ model: "google:gemini-1.5-pro-002" })
```

</li>

</ol>

</Steps>
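The `provider:model` syntax used above splits on the first colon. A minimal sketch of that decomposition (the helper name is hypothetical — GenAIScript's actual `parseModelIdentifier` may differ in details):

```typescript
// Hypothetical helper illustrating how a "provider:model" identifier
// such as "google:gemini-1.5-pro-002" decomposes into its two parts.
function splitModelId(id: string): { provider: string; model: string } {
    const colon = id.indexOf(":")
    if (colon < 0) return { provider: id, model: "" }
    return { provider: id.slice(0, colon), model: id.slice(colon + 1) }
}

console.log(splitModelId("google:gemini-1.5-pro-002"))
// → { provider: "google", model: "gemini-1.5-pro-002" }
```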

## GitHub Copilot Chat Models <a id="github-copilot" href=""></a>

If you have access to **GitHub Copilot Chat in Visual Studio Code**,
1 change: 1 addition & 0 deletions docs/src/content/docs/reference/scripts/system.mdx
@@ -3401,6 +3401,7 @@ defTool(
},
{
model: "vision",
cache: "vision_ask_image",
system: [
"system",
"system.assistant",
2 changes: 1 addition & 1 deletion packages/core/src/chat.ts
@@ -818,7 +818,7 @@
topLogprobs,
} = genOptions
const top_logprobs = genOptions.topLogprobs > 0 ? topLogprobs : undefined
const logprobs = genOptions.logprobs || top_logprobs > 0
const logprobs = genOptions.logprobs || top_logprobs > 0 ? true : undefined

Check failure (GitHub Actions / build) on line 821 in packages/core/src/chat.ts: The expression `genOptions.logprobs || top_logprobs > 0 ? true : undefined` may not work as intended due to operator precedence. Consider using parentheses to clarify the logic.
traceLanguageModelConnection(trace, genOptions, connectionToken)
const tools: ChatCompletionTool[] = toolDefinitions?.length
? toolDefinitions.map(
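The precedence concern in the `logprobs` expression can be made concrete. A sketch (function name hypothetical, isolated from the surrounding chat code) showing the explicitly parenthesized form:

```typescript
// Hypothetical helper isolating the logprobs expression from chat.ts.
// Without parentheses, `a || b > 0 ? true : undefined` parses as
// `(a || (b > 0)) ? true : undefined` — which happens to be the intended
// grouping here, but explicit parentheses make that unmistakable.
function resolveLogprobs(genOptions: {
    logprobs?: boolean
    topLogprobs?: number
}): true | undefined {
    const top_logprobs =
        (genOptions.topLogprobs ?? 0) > 0 ? genOptions.topLogprobs : undefined
    return genOptions.logprobs || (top_logprobs ?? 0) > 0 ? true : undefined
}
```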
34 changes: 27 additions & 7 deletions packages/core/src/connection.ts
@@ -23,6 +23,8 @@
HUGGINGFACE_API_BASE,
OLLAMA_API_BASE,
OLLAMA_DEFAUT_PORT,
MODEL_PROVIDER_GOOGLE,
GOOGLE_API_BASE,
} from "./constants"
import { fileExists, readText, writeText } from "./fs"
import {
@@ -129,7 +131,7 @@
token,
source: "env: OPENAI_API_...",
version,
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_GITHUB) {
@@ -148,7 +150,7 @@
type,
token,
source: `env: ${tokenVar}`,
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_AZURE_OPENAI) {
@@ -194,7 +196,7 @@
: "env: AZURE_OPENAI_API_... + Entra ID",
version,
azureCredentialsType,
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI) {
@@ -239,7 +241,7 @@
: "env: AZURE_SERVERLESS_OPENAI_API_... + Entra ID",
version,
azureCredentialsType,
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_AZURE_SERVERLESS_MODELS) {
@@ -281,7 +283,25 @@
? "env: AZURE_SERVERLESS_MODELS_API_..."
: "env: AZURE_SERVERLESS_MODELS_API_... + Entra ID",
version,
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_GOOGLE) {

Check failure (GitHub Actions / build) on line 289 in packages/core/src/connection.ts: The function `parseTokenFromEnv` for the Google provider does not handle the case where `model` is undefined. Ensure that `model` is defined before returning the configuration.
const token = env.GOOGLE_API_KEY
if (!token) return undefined
if (token === PLACEHOLDER_API_KEY)
throw new Error("GOOGLE_API_KEY not configured")
const base = env.GOOGLE_API_BASE || GOOGLE_API_BASE

Check failure (GitHub Actions / build) on line 294 in packages/core/src/connection.ts: The `base` URL for the Google API is not validated. Consider adding a check to ensure it's a valid URL to prevent potential runtime errors.
Review comment (generated by pr-review-commit `hardcoded_url`): The base URL for the Google API is hardcoded. Consider making it configurable to enhance flexibility and adaptability.

if (base === PLACEHOLDER_API_BASE)
throw new Error("GOOGLE_API_BASE not configured")
return {
provider,
model,
base,
token,
type: "openai",
source: "env: GOOGLE_API_...",
} satisfies LanguageModelConfiguration
}
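The Google branch above reads naturally as a small pure function over the environment. A hedged sketch (the placeholder sentinel value is an assumption for illustration, not the repo's actual constant):

```typescript
// Sketch of the google-provider branch of parseTokenFromEnv.
// PLACEHOLDER_API_KEY is an assumed sentinel, not copied from the repo.
const PLACEHOLDER_API_KEY = "<your token>"
const GOOGLE_API_BASE =
    "https://generativelanguage.googleapis.com/v1beta/openai/"

function parseGoogleEnv(env: Record<string, string | undefined>) {
    const token = env.GOOGLE_API_KEY
    if (!token) return undefined // provider not configured at all
    if (token === PLACEHOLDER_API_KEY)
        throw new Error("GOOGLE_API_KEY not configured")
    // fall back to the default base when GOOGLE_API_BASE is unset
    const base = env.GOOGLE_API_BASE || GOOGLE_API_BASE
    return { provider: "google", base, token, type: "openai" as const }
}
```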

if (provider === MODEL_PROVIDER_ANTHROPIC) {
@@ -301,7 +321,7 @@
base,
version,
source,
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_OLLAMA) {
@@ -314,7 +334,7 @@
token: "ollama",
type: "openai",
source: "env: OLLAMA_HOST",
}
} satisfies LanguageModelConfiguration
}

if (provider === MODEL_PROVIDER_HUGGINGFACE) {
6 changes: 6 additions & 0 deletions packages/core/src/constants.ts
@@ -69,6 +69,7 @@ export const DEFAULT_VISION_MODEL_CANDIDATES = [
"azure_serverless:gpt-4o",
DEFAULT_MODEL,
"anthropic:claude-2",
"google:gemini-1.5-pro-002",
"github:gpt-4o",
]
export const DEFAULT_SMALL_MODEL = "openai:gpt-4o-mini"
@@ -78,6 +79,7 @@ export const DEFAULT_SMALL_MODEL_CANDIDATES = [
DEFAULT_SMALL_MODEL,
"anthropic:claude-instant-1",
"github:gpt-4o-mini",
"google:gemini-1.5-flash-002",
"client:gpt-4-mini",
]
export const DEFAULT_EMBEDDINGS_MODEL_CANDIDATES = [
@@ -160,6 +162,7 @@ export const EMOJI_UNDEFINED = "?"
export const MODEL_PROVIDER_OPENAI = "openai"
export const MODEL_PROVIDER_GITHUB = "github"
export const MODEL_PROVIDER_AZURE_OPENAI = "azure"
export const MODEL_PROVIDER_GOOGLE = "google"
export const MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI = "azure_serverless"
export const MODEL_PROVIDER_AZURE_SERVERLESS_MODELS = "azure_serverless_models"
export const MODEL_PROVIDER_OLLAMA = "ollama"
@@ -357,3 +360,6 @@ export const CHOICE_LOGIT_BIAS = 5

export const SANITIZED_PROMPT_INJECTION =
"...prompt injection detected, content removed..."

export const GOOGLE_API_BASE =
"https://generativelanguage.googleapis.com/v1beta/openai/"
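One detail worth noting about this constant: the trailing slash matters when relative endpoint paths are resolved against it. This is standard WHATWG URL resolution, not GenAIScript-specific behavior — a quick sketch:

```typescript
// With a trailing slash on the base, relative resolution keeps the
// /v1beta/openai/ prefix; without it, "openai" would be replaced.
const GOOGLE_API_BASE =
    "https://generativelanguage.googleapis.com/v1beta/openai/"
const endpoint = new URL("chat/completions", GOOGLE_API_BASE).href
console.log(endpoint)
// → https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
```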
@@ -40,6 +40,7 @@ defTool(
},
{
model: "vision",
cache: "vision_ask_image",
system: [
"system",
"system.assistant",
45 changes: 45 additions & 0 deletions packages/core/src/pricing.json
@@ -270,5 +270,50 @@
"azure_serverless_models:ministral-3b": {
"price_per_million_input_tokens": 0.04,
"price_per_million_output_tokens": 0.04
},
"google:gemini-1.5-flash": {
"price_per_million_input_tokens": 0.075,
"price_per_million_output_tokens": 0.3
},
"google:gemini-1.5-flash-002": {
"price_per_million_input_tokens": 0.075,
"price_per_million_output_tokens": 0.3
},
"google:gemini-1.5-flash-8b": {
"price_per_million_input_tokens": 0.0375,
"price_per_million_output_tokens": 0.15,
"tiers": [
{
"context_size": 128000,
"price_per_million_input_tokens": 0.075,
"price_per_million_output_tokens": 0.3
}
]
},
"google:gemini-1.5-pro": {
"price_per_million_input_tokens": 1.25,
"price_per_million_output_tokens": 5,
"tiers": [
{
"context_size": 128000,
"price_per_million_input_tokens": 2.5,
"price_per_million_output_tokens": 10
}
]
},
"google:gemini-1.5-pro-002": {
"price_per_million_input_tokens": 1.25,
"price_per_million_output_tokens": 5,
"tiers": [
{
"context_size": 128000,
"price_per_million_input_tokens": 2.5,
"price_per_million_output_tokens": 10
}
]
},
"google:gemini-1-pro": {
"price_per_million_input_tokens": 0.5,
"price_per_million_output_tokens": 1.5
}
}
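The `tiers` entries appear to encode Google's long-context pricing, where prompts beyond a context size bill at a higher rate. A hedged sketch of how such an entry might be evaluated (field semantics inferred from the JSON shape above, not taken from GenAIScript's actual cost code):

```typescript
interface PricingTier {
    context_size: number
    price_per_million_input_tokens: number
    price_per_million_output_tokens: number
}
interface ModelPricing {
    price_per_million_input_tokens: number
    price_per_million_output_tokens: number
    tiers?: PricingTier[]
}

// Assumption: a tier's rates apply once the prompt exceeds context_size.
function estimateCostUSD(
    p: ModelPricing,
    inputTokens: number,
    outputTokens: number
): number {
    let inRate = p.price_per_million_input_tokens
    let outRate = p.price_per_million_output_tokens
    for (const t of p.tiers ?? [])
        if (inputTokens > t.context_size) {
            inRate = t.price_per_million_input_tokens
            outRate = t.price_per_million_output_tokens
        }
    return (inputTokens * inRate + outputTokens * outRate) / 1_000_000
}

// The gemini-1.5-pro entry from the diff above
const pro: ModelPricing = {
    price_per_million_input_tokens: 1.25,
    price_per_million_output_tokens: 5,
    tiers: [
        {
            context_size: 128000,
            price_per_million_input_tokens: 2.5,
            price_per_million_output_tokens: 10,
        },
    ],
}
console.log(estimateCostUSD(pro, 100_000, 1_000)) // → 0.13
console.log(estimateCostUSD(pro, 200_000, 0)) // → 0.5
```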
4 changes: 4 additions & 0 deletions packages/core/src/tools.ts
@@ -2,6 +2,7 @@ import {
MODEL_PROVIDER_AZURE_OPENAI,
MODEL_PROVIDER_AZURE_SERVERLESS_MODELS,
MODEL_PROVIDER_GITHUB,
MODEL_PROVIDER_GOOGLE,
MODEL_PROVIDER_OLLAMA,
MODEL_PROVIDER_OPENAI,
} from "./constants"
@@ -38,6 +39,9 @@ export function isToolsSupported(modelId: string): boolean | undefined {
[MODEL_PROVIDER_OPENAI]: oai,
[MODEL_PROVIDER_AZURE_OPENAI]: oai,
[MODEL_PROVIDER_AZURE_SERVERLESS_MODELS]: oai,
[MODEL_PROVIDER_GOOGLE]: {
// all supported
},
[MODEL_PROVIDER_GITHUB]: {
"Phi-3.5-mini-instruct": false,
},
6 changes: 6 additions & 0 deletions packages/core/src/types/prompt_template.d.ts
@@ -146,6 +146,12 @@ type ModelType = OptionsOrString<
| "anthropic:claude-2.0"
| "anthropic:claude-instant-1.2"
| "huggingface:microsoft/Phi-3-mini-4k-instruct"
| "google:gemini-1.5-flash"
| "google:gemini-1.5-flash-8b"
| "google:gemini-1.5-flash-002"
| "google:gemini-1.5-pro"
| "google:gemini-1.5-pro-002"
| "google:gemini-1-pro"
>

type ModelSmallType = OptionsOrString<
17 changes: 10 additions & 7 deletions packages/core/src/usage.ts
@@ -20,6 +20,7 @@ import {
MODEL_PROVIDER_AZURE_SERVERLESS_MODELS,
MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI,
MODEL_PROVIDER_GITHUB,
MODEL_PROVIDER_GOOGLE,
MODEL_PROVIDER_OPENAI,
} from "./constants"

@@ -83,13 +84,15 @@ export function renderCost(value: number)

export function isCosteable(model: string) {
const { provider } = parseModelIdentifier(model)
return (
provider === MODEL_PROVIDER_OPENAI ||
provider === MODEL_PROVIDER_AZURE_OPENAI ||
provider === MODEL_PROVIDER_AZURE_SERVERLESS_MODELS ||
provider === MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI ||
provider === MODEL_PROVIDER_ANTHROPIC
)
const costeableProviders = [
MODEL_PROVIDER_OPENAI,
MODEL_PROVIDER_AZURE_OPENAI,
MODEL_PROVIDER_AZURE_SERVERLESS_MODELS,
MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI,
MODEL_PROVIDER_ANTHROPIC,
MODEL_PROVIDER_GOOGLE,
]
return costeableProviders.includes(provider)
}

/**
21 changes: 12 additions & 9 deletions packages/vscode/src/lmaccess.ts
@@ -14,6 +14,7 @@ import {
MODEL_PROVIDER_AZURE_SERVERLESS_MODELS,
MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI,
DOCS_CONFIGURATION_URL,
MODEL_PROVIDER_GOOGLE,
} from "../../core/src/constants"
import { OpenAIAPIType } from "../../core/src/host"
import { parseModelIdentifier } from "../../core/src/models"
@@ -29,15 +30,17 @@ async function generateLanguageModelConfiguration(
modelId: string
) {
const { provider } = parseModelIdentifier(modelId)
if (
provider === MODEL_PROVIDER_OLLAMA ||
provider === MODEL_PROVIDER_LLAMAFILE ||
provider === MODEL_PROVIDER_AICI ||
provider === MODEL_PROVIDER_AZURE_OPENAI ||
provider === MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI ||
provider === MODEL_PROVIDER_AZURE_SERVERLESS_MODELS ||
provider === MODEL_PROVIDER_LITELLM
) {
const supportedProviders = [
MODEL_PROVIDER_OLLAMA,
MODEL_PROVIDER_LLAMAFILE,
MODEL_PROVIDER_AICI,
MODEL_PROVIDER_AZURE_OPENAI,
MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI,
MODEL_PROVIDER_AZURE_SERVERLESS_MODELS,
MODEL_PROVIDER_LITELLM,
MODEL_PROVIDER_GOOGLE,
]
if (supportedProviders.includes(provider)) {
return { provider }
}
