Merge branch 'vercel:main' into vivek/providers
patelvivekdev authored Nov 21, 2024
2 parents fdb631a + ad57459 commit d99827a
Showing 103 changed files with 2,185 additions and 361 deletions.
10 changes: 7 additions & 3 deletions content/docs/03-ai-sdk-core/05-generating-text.mdx
@@ -122,17 +122,21 @@ const result = streamText({
 
 ### `onFinish` callback
 
-When using `streamText`, you can provide an `onFinish` callback that is triggered when the stream is finished.
-It contains the text, usage information, finish reason, and more:
+When using `streamText`, you can provide an `onFinish` callback that is triggered when the stream is finished (
+[API Reference](/docs/reference/ai-sdk-core/stream-text#on-finish)
+).
+It contains the text, usage information, finish reason, messages, and more:
 
 ```tsx highlight="6-8"
 import { streamText } from 'ai';
 
 const result = streamText({
   model: yourModel,
   prompt: 'Invent a new holiday and describe its traditions.',
-  onFinish({ text, finishReason, usage }) {
+  onFinish({ text, finishReason, usage, response }) {
     // your own logic, e.g. for saving the chat history or recording usage
+
+    const messages = response.messages; // messages that were generated
   },
 });
 ```
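The `response.messages` surfaced by this change are typically persisted from the callback. A minimal, self-contained sketch of that persistence step (the `Message` shape and `saveMessages` helper here are hypothetical stand-ins, not AI SDK APIs):

```typescript
// Hypothetical message shape and in-memory store; a real app would use the
// SDK's own message types and a database instead of an array.
type Message = { role: string; content: string };

const chatHistory: Message[] = [];

// Append the messages generated during a stream to the chat history and
// return the new history length.
function saveMessages(messages: Message[]): number {
  chatHistory.push(...messages);
  return chatHistory.length;
}

// Mirrors what `onFinish({ response }) { saveMessages(response.messages); }`
// would do once the stream has finished.
const total = saveMessages([
  { role: 'assistant', content: 'A new holiday: the Festival of Lanterns.' },
]);
console.log(total); // 1
```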
4 changes: 2 additions & 2 deletions content/docs/04-ai-sdk-ui/01-overview.mdx
@@ -26,8 +26,8 @@ Here is a comparison of the supported functions across these frameworks:
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) attachments | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| [useObject](/docs/reference/ai-sdk-ui/use-object) | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
-| [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+| [useObject](/docs/reference/ai-sdk-ui/use-object) | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> |
+| [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 
 <Note>
   [Contributions](https://github.com/vercel/ai/blob/main/CONTRIBUTING.md) are
5 changes: 4 additions & 1 deletion content/docs/07-reference/02-ai-sdk-ui/03-use-object.mdx
@@ -5,7 +5,10 @@ description: API reference for the useObject hook.
 
 # `experimental_useObject()`
 
-<Note>`useObject` is an experimental feature and only available in React.</Note>
+<Note>
+  `useObject` is an experimental feature and only available in React and
+  SolidJS.
+</Note>
 
 Allows you to consume text streams that represent a JSON object and parse them into a complete object based on a schema.
 You can use it together with [`streamObject`](/docs/reference/ai-sdk-core/stream-object) in the backend.
4 changes: 0 additions & 4 deletions content/docs/07-reference/02-ai-sdk-ui/20-use-assistant.mdx
@@ -11,10 +11,6 @@ with the UI updated automatically as the assistant is streaming its execution.
 
 This works in conjunction with [`AssistantResponse`](./assistant-response) in the backend.
 
-<Note>
-  `useAssistant` is supported in `ai/react`, `ai/svelte`, and `ai/vue`.
-</Note>
-
 ## Import
 
 <Tabs items={['React', 'Svelte']}>
4 changes: 2 additions & 2 deletions content/docs/07-reference/02-ai-sdk-ui/index.mdx
@@ -66,8 +66,8 @@ Here is a comparison of the supported functions across these frameworks:
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 | [useChat](/docs/reference/ai-sdk-ui/use-chat) attachments | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | [useCompletion](/docs/reference/ai-sdk-ui/use-completion) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
-| [useObject](/docs/reference/ai-sdk-ui/use-object) | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
-| [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> |
+| [useObject](/docs/reference/ai-sdk-ui/use-object) | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> |
+| [useAssistant](/docs/reference/ai-sdk-ui/use-assistant) | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
 
 <Note>
   [Contributions](https://github.com/vercel/ai/blob/main/CONTRIBUTING.md) are
7 changes: 7 additions & 0 deletions content/providers/01-ai-sdk-providers/08-amazon-bedrock.mdx
@@ -264,6 +264,13 @@ See the [Amazon Bedrock Guardrails documentation](https://docs.aws.amazon.com/be
 | `meta.llama2-70b-chat-v1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `meta.llama3-8b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `meta.llama3-70b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-1-8b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-1-70b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-1-405b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-2-1b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-2-3b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-2-11b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
+| `meta.llama3-2-90b-instruct-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `mistral.mistral-7b-instruct-v0:2` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `mistral.mixtral-8x7b-instruct-v0:1` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
 | `mistral.mistral-large-2402-v1:0` | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
97 changes: 97 additions & 0 deletions content/providers/05-observability/braintrust.mdx
@@ -0,0 +1,97 @@
---
title: Braintrust
description: Monitoring and tracing LLM applications with Braintrust
---

# Braintrust Observability

Braintrust is an end-to-end platform for building AI applications. When building with the AI SDK, you can integrate Braintrust to [log](https://www.braintrust.dev/docs/guides/logging), monitor, and take action on real-world interactions.

## Setup

### OpenTelemetry

Braintrust supports [AI SDK telemetry data](/docs/ai-sdk-core/telemetry).
To set up Braintrust as an [OpenTelemetry](https://opentelemetry.io/docs/) backend, you'll need to route the traces to Braintrust's OpenTelemetry endpoint, set your API key, and specify a parent project or experiment.

Once you set up an [OpenTelemetry Protocol Exporter](https://opentelemetry.io/docs/languages/js/exporters/) (OTLP) to send traces to Braintrust, we automatically convert LLM calls into Braintrust `LLM` spans, which can be saved as [prompts](https://www.braintrust.dev/docs/guides/functions/prompts) and evaluated in the [playground](https://www.braintrust.dev/docs/guides/playground).

To use the AI SDK to send telemetry data to Braintrust, set these environment variables in your Next.js app's `.env` file:

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.braintrust.dev/otel
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <Your API Key>, x-bt-parent=project_id:<Your Project ID>"
```
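For these variables to take effect in a Next.js app, an OpenTelemetry exporter still has to be registered; a minimal sketch using `@vercel/otel` (assuming that package is installed and Next.js instrumentation is enabled — the service name is an arbitrary example) could look like:

```typescript
// instrumentation.ts (project root) – a sketch, not the only way to wire this up.
// `registerOTel` picks up the OTEL_EXPORTER_OTLP_* environment variables above
// and exports spans to the configured endpoint.
import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({ serviceName: 'braintrust-demo' });
}
```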

You can then use the `experimental_telemetry` option to enable telemetry on supported AI SDK function calls:

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const openai = createOpenAI();

async function main() {
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    prompt: 'What is 2 + 2?',
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        query: 'weather',
        location: 'San Francisco',
      },
    },
  });
  console.log(result);
}

main();
```

Traced LLM calls will appear under the Braintrust project or experiment provided in the `x-bt-parent` header.
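The `x-bt-parent` header is what selects that destination. For example (the value formats are hedged from Braintrust's docs — verify them there before use):

```shell
# Route traces to a project (as in the .env example above), or to a specific
# experiment by swapping the x-bt-parent prefix.
BT_PARENT="project_id:my-project-id"
# BT_PARENT="experiment_id:my-experiment-id"   # alternative destination
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer ${BRAINTRUST_API_KEY:-<Your API Key>}, x-bt-parent=${BT_PARENT}"
echo "$OTEL_EXPORTER_OTLP_HEADERS"
```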

### Model Wrapping

You can wrap AI SDK models in Braintrust to automatically log your requests.

```typescript
import { initLogger, wrapAISDKModel } from 'braintrust';
import { openai } from '@ai-sdk/openai';

const logger = initLogger({
  projectName: 'My Project',
  apiKey: process.env.BRAINTRUST_API_KEY,
});

const model = wrapAISDKModel(openai.chat('gpt-3.5-turbo'));

async function main() {
  // This will automatically log the request, response, and metrics to Braintrust
  const response = await model.doGenerate({
    inputFormat: 'messages',
    mode: {
      type: 'regular',
    },
    prompt: [
      {
        role: 'user',
        content: [{ type: 'text', text: 'What is the capital of France?' }],
      },
    ],
  });
  console.log(response);
}

main();
```

## Resources

To see a step-by-step example, check out the Braintrust [cookbook](https://www.braintrust.dev/docs/cookbook/recipes/OTEL-logging).

After you log your application in Braintrust, explore other workflows like:

- Adding [tools](https://www.braintrust.dev/docs/guides/functions/tools) to your library and using them in [experiments](https://www.braintrust.dev/docs/guides/evals) and the [playground](https://www.braintrust.dev/docs/guides/playground)
- Creating [custom scorers](https://www.braintrust.dev/docs/guides/functions/scorers) to assess the quality of your LLM calls
- Adding your logs to a [dataset](https://www.braintrust.dev/docs/guides/datasets) and running evaluations comparing models and prompts
3 changes: 2 additions & 1 deletion content/providers/05-observability/index.mdx
@@ -9,8 +9,9 @@ Several LLM observability providers offer integrations with the AI SDK telemetry
 
 - [Langfuse](https://langfuse.com/docs/integrations/vercel-ai-sdk)
 - [LangSmith](https://docs.smith.langchain.com/observability/how_to_guides/tracing/trace_with_vercel_ai_sdk)
-- [Braintrust](https://www.braintrust.dev/docs/cookbook/recipes/OTEL-logging)
+- [Braintrust](/providers/observability/braintrust)
 - [Laminar](https://docs.lmnr.ai/tracing/vercel-ai-sdk)
+- [HoneyHive](https://docs.honeyhive.ai/integrations/vercel)
 
 There are also providers that provide monitoring and tracing for the AI SDK through model wrappers:
 
22 changes: 11 additions & 11 deletions examples/ai-core/package.json
@@ -3,20 +3,20 @@
   "version": "0.0.0",
   "private": true,
   "dependencies": {
-    "@ai-sdk/amazon-bedrock": "1.0.1",
-    "@ai-sdk/anthropic": "1.0.1",
-    "@ai-sdk/azure": "1.0.3",
-    "@ai-sdk/cohere": "1.0.1",
-    "@ai-sdk/google": "1.0.1",
-    "@ai-sdk/google-vertex": "1.0.1",
-    "@ai-sdk/groq": "1.0.1",
-    "@ai-sdk/mistral": "1.0.2",
-    "@ai-sdk/openai": "1.0.2",
-    "@ai-sdk/xai": "1.0.2",
+    "@ai-sdk/amazon-bedrock": "1.0.3",
+    "@ai-sdk/anthropic": "1.0.2",
+    "@ai-sdk/azure": "1.0.5",
+    "@ai-sdk/cohere": "1.0.3",
+    "@ai-sdk/google": "1.0.3",
+    "@ai-sdk/google-vertex": "1.0.3",
+    "@ai-sdk/groq": "1.0.3",
+    "@ai-sdk/mistral": "1.0.3",
+    "@ai-sdk/openai": "1.0.4",
+    "@ai-sdk/xai": "1.0.3",
     "@opentelemetry/sdk-node": "0.54.2",
     "@opentelemetry/auto-instrumentations-node": "0.47.0",
     "@opentelemetry/sdk-trace-node": "1.27.0",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "dotenv": "16.4.5",
     "mathjs": "12.4.2",
     "zod": "3.23.8",
4 changes: 2 additions & 2 deletions examples/express/package.json
@@ -7,8 +7,8 @@
     "type-check": "tsc --noEmit"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
-    "ai": "4.0.2",
+    "@ai-sdk/openai": "1.0.4",
+    "ai": "4.0.3",
     "dotenv": "16.4.5",
     "express": "5.0.1"
   },
4 changes: 2 additions & 2 deletions examples/fastify/package.json
@@ -3,8 +3,8 @@
   "version": "0.0.0",
   "private": true,
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
-    "ai": "4.0.2",
+    "@ai-sdk/openai": "1.0.4",
+    "ai": "4.0.3",
     "dotenv": "16.4.5",
     "fastify": "5.1.0"
   },
4 changes: 2 additions & 2 deletions examples/hono/package.json
@@ -3,9 +3,9 @@
   "version": "0.0.0",
   "private": true,
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
+    "@ai-sdk/openai": "1.0.4",
     "@hono/node-server": "1.12.2",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "dotenv": "16.4.5",
     "hono": "4.6.9"
   },
4 changes: 2 additions & 2 deletions examples/nest/package.json
@@ -15,11 +15,11 @@
     "lint": "eslint \"{src,apps,libs,test}/**/*.ts\" --fix"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
+    "@ai-sdk/openai": "1.0.4",
     "@nestjs/common": "^10.0.0",
     "@nestjs/core": "^10.0.0",
     "@nestjs/platform-express": "^10.4.8",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "reflect-metadata": "^0.2.0",
     "rxjs": "^7.8.1"
   },
4 changes: 2 additions & 2 deletions examples/next-fastapi/package.json
@@ -11,8 +11,8 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/ui-utils": "1.0.1",
-    "ai": "4.0.2",
+    "@ai-sdk/ui-utils": "1.0.2",
+    "ai": "4.0.3",
     "geist": "^1.3.1",
     "next": "latest",
     "react": "^18",
4 changes: 2 additions & 2 deletions examples/next-langchain/package.json
@@ -9,10 +9,10 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/react": "1.0.1",
+    "@ai-sdk/react": "1.0.2",
     "@langchain/openai": "0.0.28",
     "@langchain/core": "0.1.63",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "langchain": "0.1.36",
     "next": "latest",
     "react": "^18",
4 changes: 2 additions & 2 deletions examples/next-openai-kasada-bot-protection/package.json
@@ -9,9 +9,9 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
+    "@ai-sdk/openai": "1.0.4",
     "@vercel/functions": "latest",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "next": "latest",
     "react": "^18",
     "react-dom": "^18",
6 changes: 3 additions & 3 deletions examples/next-openai-pages/package.json
@@ -9,9 +9,9 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
-    "@ai-sdk/react": "1.0.1",
-    "ai": "4.0.2",
+    "@ai-sdk/openai": "1.0.4",
+    "@ai-sdk/react": "1.0.2",
+    "ai": "4.0.3",
     "next": "latest",
     "openai": "4.52.6",
     "react": "^18",
6 changes: 3 additions & 3 deletions examples/next-openai-telemetry-sentry/package.json
@@ -9,15 +9,15 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
-    "@ai-sdk/react": "1.0.1",
+    "@ai-sdk/openai": "1.0.4",
+    "@ai-sdk/react": "1.0.2",
     "@opentelemetry/api-logs": "0.54.2",
     "@opentelemetry/instrumentation": "0.52.1",
     "@opentelemetry/sdk-logs": "0.54.2",
     "@sentry/nextjs": "^8.22.0",
     "@sentry/opentelemetry": "8.22.0",
     "@vercel/otel": "1.9.1",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "next": "latest",
     "openai": "4.52.6",
     "react": "^18",
6 changes: 3 additions & 3 deletions examples/next-openai-telemetry/package.json
@@ -9,13 +9,13 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
-    "@ai-sdk/react": "1.0.1",
+    "@ai-sdk/openai": "1.0.4",
+    "@ai-sdk/react": "1.0.2",
     "@opentelemetry/api-logs": "0.54.2",
     "@opentelemetry/sdk-logs": "0.54.2",
     "@opentelemetry/instrumentation": "0.52.1",
     "@vercel/otel": "1.9.1",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "next": "latest",
     "openai": "4.52.6",
     "react": "^18",
4 changes: 2 additions & 2 deletions examples/next-openai-upstash-rate-limits/package.json
@@ -9,10 +9,10 @@
     "lint": "next lint"
   },
   "dependencies": {
-    "@ai-sdk/openai": "1.0.2",
+    "@ai-sdk/openai": "1.0.4",
     "@upstash/ratelimit": "^0.4.3",
     "@vercel/kv": "^0.2.2",
-    "ai": "4.0.2",
+    "ai": "4.0.3",
     "next": "latest",
     "react": "^18",
     "react-dom": "^18",
3 changes: 3 additions & 0 deletions examples/next-openai/app/api/use-chat-tools/route.ts
@@ -19,6 +19,9 @@ export async function POST(req: Request) {
       description: 'show the weather in a given city to the user',
       parameters: z.object({ city: z.string() }),
       execute: async ({}: { city: string }) => {
+        // Add artificial delay of 2 seconds
+        await new Promise(resolve => setTimeout(resolve, 2000));
+
         const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
         return weatherOptions[
           Math.floor(Math.random() * weatherOptions.length)
2 changes: 1 addition & 1 deletion examples/next-openai/app/api/use-object/route.ts
@@ -9,7 +9,7 @@ export async function POST(req: Request) {
   const context = await req.json();
 
   const result = streamObject({
-    model: openai('gpt-4-turbo'),
+    model: openai('gpt-4o'),
     prompt: `Generate 3 notifications for a messages app in this context: ${context}`,
     schema: notificationSchema,
   });