Commit: add studio oepns

pelikhan committed Nov 26, 2024
1 parent 57b047f commit fabf2ee

Showing 4 changed files with 90 additions and 4 deletions.
56 changes: 54 additions & 2 deletions docs/src/content/docs/getting-started/configuration.mdx
@@ -1078,6 +1078,58 @@ script({
})
```

## LMStudio

The `lmstudio` provider connects to the [LMStudio](https://lmstudio.ai/) headless server,
which lets you run LLMs locally.

<Steps>

<ol>

<li>

Install [LMStudio](https://lmstudio.ai/download) (v0.3.5+)

</li>

<li>

Open LMStudio, open the settings (gear icon), and turn on **Enable Local LLM Service**.
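Alternatively, the `lms` command-line tool that ships with LMStudio can start the server with `lms server start` (check `lms --help` on your install).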

</li>

<li>

GenAIScript assumes the local server is available at `http://localhost:1234/v1` by default.
Set the `LMSTUDIO_API_BASE` environment variable to change the server URL.

```txt title=".env"
LMSTUDIO_API_BASE=http://localhost:2345/v1
```

</li>

</ol>

</Steps>
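Once the service is enabled, you can confirm the server is reachable by querying its OpenAI-compatible `GET /v1/models` endpoint. A minimal sketch, assuming the default base URL and the standard OpenAI list response shape (adjust the URL if you changed `LMSTUDIO_API_BASE`):

```js
// Reachability check for the local LMStudio server.
// Assumes the default base URL and an OpenAI-style { data: [{ id }] } payload.
const res = await fetch("http://localhost:1234/v1/models")
if (!res.ok) throw new Error(`LMStudio server returned ${res.status}`)
const { data } = await res.json()
console.log(data.map((m) => m.id)) // identifiers usable after the "lmstudio:" prefix
```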

Open the [Model Catalog](https://lmstudio.ai/models),
select your model, and find its identifier
in the docs. Typically, it is the identifier used in the `lms get` example:

```sh
lms get <modelid>
```
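If the model is already downloaded, `lms ls` lists the identifiers of the models on disk.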

Then use that identifier in your script:

```js '"lmstudio:llama-3.2-1b"'
script({
model: "lmstudio:llama-3.2-1b",
})
```
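The string after the `lmstudio:` prefix should match the identifier the server reports, for example in the `/v1/models` response shown earlier.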

## LocalAI

[LocalAI](https://localai.io/) acts as a drop-in replacement REST API that’s compatible
@@ -1118,9 +1170,9 @@ that allows you to run an LLM locally.

The provider is `llamafile` and the model name is ignored.

- ## Jan, LMStudio, LLaMA.cpp
+ ## Jan, LLaMA.cpp

- [Jan](https://jan.ai/), [LMStudio](https://lmstudio.ai/),
+ [Jan](https://jan.ai/),
[LLaMA.cpp](https://github.com/ggerganov/llama.cpp/tree/master/examples/server)
also allow running models locally or interfacing with other LLM vendors.

27 changes: 25 additions & 2 deletions packages/core/src/connection.ts
@@ -30,6 +30,8 @@ import {
    ALIBABA_BASE,
    MODEL_PROVIDER_MISTRAL,
    MISTRAL_API_BASE,
    MODEL_PROVIDER_LMSTUDIO,
    LMSTUDIO_API_BASE,
} from "./constants"
import { fileExists, readText, writeText } from "./fs"
import {
@@ -427,27 +429,48 @@ export async function parseTokenFromEnv(
    }

    if (provider === MODEL_PROVIDER_LLAMAFILE) {
        const base =
            findEnvVar(env, "LLAMAFILE", BASE_SUFFIX)?.value ||
            LLAMAFILE_API_BASE
        if (!URL.canParse(base)) throw new Error(`${base} must be a valid URL`)
        return {
            provider,
            model,
-           base: LLAMAFILE_API_BASE,
+           base,
            token: "llamafile",
            type: "openai",
            source: "default",
        }
    }

    if (provider === MODEL_PROVIDER_LITELLM) {
        const base =
            findEnvVar(env, "LITELLM", BASE_SUFFIX)?.value || LITELLM_API_BASE
        if (!URL.canParse(base)) throw new Error(`${base} must be a valid URL`)
        return {
            provider,
            model,
-           base: LITELLM_API_BASE,
+           base,
            token: "litellm",
            type: "openai",
            source: "default",
        }
    }

    if (provider === MODEL_PROVIDER_LMSTUDIO) {
        const base =
            findEnvVar(env, "LMSTUDIO", BASE_SUFFIX)?.value || LMSTUDIO_API_BASE
        if (!URL.canParse(base)) throw new Error(`${base} must be a valid URL`)
        return {
            provider,
            model,
            base,
            token: "lmstudio",
            type: "openai",
            source: "env: LMSTUDIO_API_...",
        }
    }

    if (provider === MODEL_PROVIDER_TRANSFORMERS) {
        return {
            provider,
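The three local-provider branches above now follow the same pattern: prefer a `*_API_BASE` environment override, fall back to the provider's default constant, and reject values that are not valid URLs. A standalone sketch of that pattern, with an illustrative helper name rather than the repository's actual code:

```js
// Sketch of the base-URL resolution pattern used above.
// `resolveBaseUrl` is illustrative, not a function in the repository.
const LMSTUDIO_API_BASE = "http://localhost:1234/v1" // default from constants.ts

function resolveBaseUrl(env, prefix, fallback) {
    const base = env[`${prefix}_API_BASE`] || fallback
    // URL.canParse (Node 18.17+) validates without throwing on malformed input
    if (!URL.canParse(base)) throw new Error(`${base} must be a valid URL`)
    return base
}

// e.g. resolveBaseUrl(process.env, "LMSTUDIO", LMSTUDIO_API_BASE)
```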
9 changes: 9 additions & 0 deletions packages/core/src/constants.ts
@@ -155,6 +155,7 @@ export const GOOGLE_API_BASE =
export const ALIBABA_BASE =
    "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
export const MISTRAL_API_BASE = "https://api.mistral.ai/v1"
export const LMSTUDIO_API_BASE = "http://localhost:1234/v1"

export const PROMPTFOO_CACHE_PATH = ".genaiscript/cache/tests"
export const PROMPTFOO_CONFIG_DIR = ".genaiscript/config/tests"
@@ -184,6 +185,7 @@ export const MODEL_PROVIDER_HUGGINGFACE = "huggingface"
export const MODEL_PROVIDER_TRANSFORMERS = "transformers"
export const MODEL_PROVIDER_ALIBABA = "alibaba"
export const MODEL_PROVIDER_MISTRAL = "mistral"
export const MODEL_PROVIDER_LMSTUDIO = "lmstudio"

export const TRACE_FILE_PREVIEW_MAX_LENGTH = 240

@@ -208,6 +210,8 @@ export const DOCS_CONFIGURATION_AZURE_MODELS_SERVERLESS_URL =
"https://microsoft.github.io/genaiscript/getting-started/configuration/#azure_serverless_models"
export const DOCS_CONFIGURATION_OLLAMA_URL =
"https://microsoft.github.io/genaiscript/getting-started/configuration/#ollama"
export const DOCS_CONFIGURATION_LMSTUDIO_URL =
"https://microsoft.github.io/genaiscript/getting-started/configuration/#lmstudio"
export const DOCS_CONFIGURATION_LLAMAFILE_URL =
"https://microsoft.github.io/genaiscript/getting-started/configuration/#llamafile"
export const DOCS_CONFIGURATION_LITELLM_URL =
@@ -295,6 +299,11 @@ export const MODEL_PROVIDERS = Object.freeze([
detail: "Ollama local model",
url: DOCS_CONFIGURATION_OLLAMA_URL,
},
{
id: MODEL_PROVIDER_LMSTUDIO,
detail: "LM Studio local server",
url: DOCS_CONFIGURATION_LMSTUDIO_URL,
},
{
id: MODEL_PROVIDER_ALIBABA,
detail: "Alibaba models",
2 changes: 2 additions & 0 deletions packages/vscode/src/lmaccess.ts
@@ -16,6 +16,7 @@ import {
    DOCS_CONFIGURATION_URL,
    MODEL_PROVIDER_GOOGLE,
    MODEL_PROVIDER_ALIBABA,
    MODEL_PROVIDER_LMSTUDIO,
} from "../../core/src/constants"
import { OpenAIAPIType } from "../../core/src/host"
import { parseModelIdentifier } from "../../core/src/models"
@@ -39,6 +40,7 @@ async function generateLanguageModelConfiguration(
        MODEL_PROVIDER_AZURE_SERVERLESS_OPENAI,
        MODEL_PROVIDER_AZURE_SERVERLESS_MODELS,
        MODEL_PROVIDER_LITELLM,
        MODEL_PROVIDER_LMSTUDIO,
        MODEL_PROVIDER_GOOGLE,
        MODEL_PROVIDER_ALIBABA,
    ]
