Commit

Enhance token handling and refactor tool execution (#754)
* Add token handling improvements and error serialization to tool execution

* Refactor function to tool nomenclature, add token handling, and implement tool options.

* Add logWarn for error handling and adjust tracing in tool calls

* Refactor: Rename "functions" to "tools" in promptdom.ts for clarity

* Replace `writeText` with `assistant` function and update schema formatting across files.

* Add pricing data and cost calculation to GenerationStats with API integration

* Refactor cost calculation into `estimateCost` function for reuse.

* Add count parameter, increase token limits, and update cost rendering logic
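The cost-related bullets above could be sketched as follows. This is a hypothetical illustration of an `estimateCost` helper refactored out for reuse; the pricing table shape, field names, and rates are assumed for the example, not the repository's actual data:

```javascript
// Hypothetical pricing table: cost in USD per 1M tokens.
// These values are illustrative placeholders, not real rates.
const PRICING = {
  "openai:gpt-3.5-turbo": { inputPer1M: 0.5, outputPer1M: 1.5 },
}

// Reusable cost estimate from token counts; returns undefined
// when no pricing data is known for the model.
function estimateCost(model, promptTokens, completionTokens) {
  const price = PRICING[model]
  if (!price) return undefined
  return (
    (promptTokens * price.inputPer1M + completionTokens * price.outputPer1M) /
    1e6
  )
}
```

Factoring the arithmetic into one function lets both the per-generation stats and any aggregate cost rendering share the same pricing data.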
pelikhan authored Oct 4, 2024
1 parent a92d366 commit 316f1e8
Showing 36 changed files with 876 additions and 163 deletions.
29 changes: 25 additions & 4 deletions docs/genaisrc/genaiscript.d.ts

Some generated files are not rendered by default. Learn more about how customized files appear on GitHub.

23 changes: 6 additions & 17 deletions docs/src/content/docs/reference/scripts/response-priming.md
@@ -1,59 +1,51 @@
---
title: Response Priming
sidebar:
  order: 100
description: Learn how to prime LLM responses with specific syntax or format
  using the writeText function in scripts.
keywords: response priming, LLM syntax, script formatting, writeText function,
  assistant message
genaiscript:
  model: openai:gpt-3.5-turbo
---

You can provide the start of the LLM response (the `assistant` message) in the script.
This lets you steer the LLM's answer toward a specific syntax or format.

Use `writeText` with the `{assistant: true}` option to provide the assistant text.
Use the `assistant` function to provide the assistant text.

```js
$`List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.`

// help the LLM by starting the JSON array syntax
// in the assistant response
writeText(`[`, { assistant: true })
assistant(`[`)
```

<!-- genaiscript output start -->

<details>
<summary>👤 user</summary>


```markdown wrap
List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.
```


</details>


<details open>
<summary>🤖 assistant</summary>


```markdown wrap
[
```


</details>


<details open>
<summary>🤖 assistant</summary>


```markdown wrap
"red",
"blue",
@@ -63,13 +55,10 @@ List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.
]
```


</details>

<!-- genaiscript output end -->
### How does it work?

Internally, when invoking the LLM, an additional message is added to the query as if the LLM had generated this content.
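As a rough sketch, the query after calling `assistant(`[`)` might look like the following. The message shape below follows the common chat-completion format and is assumed for illustration, not GenAIScript's actual internal representation:

```javascript
// Hypothetical chat payload sent to the model; the partial assistant
// message is appended so the model continues from that prefix.
const messages = [
  {
    role: "user",
    content:
      "List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.",
  },
  // appended by assistant(`[`) — treated as text the model already produced
  { role: "assistant", content: "[" },
]
console.log(messages[messages.length - 1])
```

Because the model believes it already emitted `[`, it completes the JSON array rather than restarting with markdown fences.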
5 changes: 5 additions & 0 deletions docs/src/content/docs/reference/scripts/system.mdx
@@ -1043,6 +1043,10 @@ defTool(
enum: ["success", "failure"],
description: "Filter runs by completion status",
},
count: {
type: "number",
description: "Number of runs to list. Default is 20.",
},
},
},
async (args) => {
@@ -1053,6 +1057,7 @@
const res = await github.listWorkflowRuns(workflow_id, {
branch,
status,
count: 20,
})
return CSV.stringify(
res.map(({ id, name, conclusion, head_sha }) => ({
29 changes: 25 additions & 4 deletions genaisrc/genaiscript.d.ts


29 changes: 25 additions & 4 deletions packages/auto/genaiscript.d.ts


2 changes: 1 addition & 1 deletion packages/core/src/aici.ts
@@ -100,7 +100,7 @@ export async function renderAICI(functionName: string, root: PromptNode) {
},
// Unsupported node types
image: notSupported("image"),
function: notSupported("function"),
tool: notSupported("tool"),
assistant: notSupported("assistant"),
schema: notSupported("schema"),
// Capture output processors
