diff --git a/docs/src/components/BuiltinAgents.mdx b/docs/src/components/BuiltinAgents.mdx
index d27df7e75d..e2999ab826 100644
--- a/docs/src/components/BuiltinAgents.mdx
+++ b/docs/src/components/BuiltinAgents.mdx
@@ -6,8 +6,8 @@ import { LinkCard } from '@astrojs/starlight/components';
### Builtin Agents
-
-
-
-
-
+
+
+
+
+
diff --git a/docs/src/content/docs/guides/llm-agents.mdx b/docs/src/content/docs/guides/llm-agents.mdx
index 242eb33e53..645e85d0b3 100644
--- a/docs/src/content/docs/guides/llm-agents.mdx
+++ b/docs/src/content/docs/guides/llm-agents.mdx
@@ -9,81 +9,44 @@ import { Code } from "@astrojs/starlight/components"
import { Steps } from "@astrojs/starlight/components"
import source from "../../../../../packages/sample/genaisrc/github-agent.genai.mts?raw"
-The "agent" is a special kind of [tool](/genaiscript/reference/scripts/tools) that
-uses an [inline prompt](/genaiscript/reference/scripts/inline-prompts) to solve a task. The inline prompt may use another set of tools.
+An **[agent](/genaiscript/reference/scripts/agents)** is a special kind of [tool](/genaiscript/reference/scripts/tools) that
+uses an [inline prompt](/genaiscript/reference/scripts/inline-prompts) and [tools](/genaiscript/reference/scripts/tools) to solve a task.
-Let's illustrate this concept by create a GitHub agent, `agent_github`. The agent will use the LLM to answer queries about GitHub.
+## Usage
-## Example: GitHub failure investigator
+We want to build a script that can investigate the most recent run failures in a GitHub repository using GitHub Actions.
+To do so, we will probably need the following agents:
-Using an agent to query GitHub would allow to write the generic run failure investigation script below.
-Most of the details are vague and left to be figured out by the LLM.
+- query the GitHub API, `agent_github`
+- compute git diffs to determine which changes broke the build, `agent_git`
+- read or search files, `agent_fs`
-
-
-## Defining `agent_github`
-
-We start by using [defTool](/genaiscript/reference/scripts/tools) to define the tool. The agent will receive a query, prompt an LLM and return the output.
-
-```js wrap
-defTool(
- "agent_github",
- // tool description is important! it lets the LLM know what the tool does
- "Agent that can query GitHub to accomplish tasks",
- // tool parameters is a simple query string
- {
- query: {
- type: "string",
- description: "Query to answer",
- },
- required: ["query"]
- },
- async (args) => {
- // destructure the query from the args
- const { query } = args
+```js wrap title="github-investigator.genai.mts"
+script({
+ tools: ["agent_fs", "agent_git", "agent_github", ...],
+ ...
+})
```
-## Defining the inline prompt
-
-Inside the tool, we use `runPrompt` to run an LLM query.
+Each of these agents can call an LLM with a specific set of tools to accomplish a task.
-- the prompt takes the query argument and tells the LLM how to handle it.
-- note the use of `ctx.` to nested prompt
+The full script source code is available below:
-```js wrap
- const res = await runPrompt(
- (ctx) => {
- ctx.def("QUERY", query)
- ctx.$`Your are a helpfull LLM agent that can query GitHub to accomplish tasks.
+
- Analyze and answer QUERY.
+## To split or not to split
- - Assume that your answer will be analyzed by an LLM, not a human.
- - If you cannot answer the query, return an empty string.
- `
- }, {
- ...
- }
- )
- return res
-```
-
-## Selecting the tools, system prompts
+You could try to load all the tools in the same LLM call and run the task as a single LLM conversation.
+Results may vary.
-We use the `system` parameter to configure the tools that exposed to the LLM. In this case, we expose the GitHub tools (`system.github_files`, `system.github_issues`, ...)
-
-```js wrap
- {
- system: [
- "system",
- "system.tools",
- "system.explanations",
- "system.github_actions",
- "system.github_files",
- "system.github_issues",
- "system.github_pulls",
- ],
- }
+```js wrap title="github-investigator.genai.mts"
+script({
+ tools: ["fs", "git", "github", ...],
+ ...
+})
```
-
-This full source of this agent is defined in the [system.agent_github](/genaiscript/reference/scripts/system/#systemagent_github) system prompt.
diff --git a/docs/src/content/docs/reference/scripts/agents.mdx b/docs/src/content/docs/reference/scripts/agents.mdx
index 287ff040c6..50479fbbff 100644
--- a/docs/src/content/docs/reference/scripts/agents.mdx
+++ b/docs/src/content/docs/reference/scripts/agents.mdx
@@ -35,3 +35,95 @@ defAgent(
- the description of the agent will automatically be augmented with information about the available tools
+
+## Example: `agent_github`
+
+Let's illustrate this by building a GitHub agent. The agent is a tool that receives a query and executes an LLM prompt with GitHub-related tools.
+
+The definition of the agent looks like this:
+
+```js wrap
+defAgent(
+ "github", // id
+ "query GitHub to accomplish tasks", // description
+ // callback to inject content in the LLM agent prompt
+ (ctx) =>
+ ctx.$`You are a helpful LLM agent that can query GitHub to accomplish tasks.`,
+ {
+ // list tools that the agent can use
+ tools: ["github_actions"],
+ }
+)
+```
+
+and internally it is expanded to the following:
+
+```js wrap
+defTool(
+ // agent_ is always prefixed to the agent id
+ "agent_github",
+ // the description is augmented with the tool descriptions
+ `Agent that can query GitHub to accomplish tasks
+
+ Capabilities:
+ - list github workflows
+ - list github workflows runs
+ ...`,
+ // all agents have a single "query" parameter
+ {
+ query: {
+ type: "string",
+ description: "Query to answer",
+ },
+ required: ["query"]
+ },
+ async (args) => {
+ const { query } = args
+ ...
+ })
+```
+
+Inside the callback, we use `runPrompt` to run an LLM query.
+
+- the prompt takes the query argument and tells the LLM how to handle it
+- note the use of `ctx.` for the nested prompt
+
+```js wrap
+ const res = await runPrompt(
+ (ctx) => {
+ // callback to inject content in the LLM agent prompt
+ ctx.$`You are a helpful LLM agent that can query GitHub to accomplish tasks.`
+
+ ctx.def("QUERY", query)
+ ctx.$`Analyze and answer QUERY.
+ - Assume that your answer will be analyzed by an LLM, not a human.
+ - If you cannot answer the query, return an empty string.
+ `
+ }, {
+ system: [...],
+ // list of tools that the agent can use
+ tools: ["github_actions", ...]
+ }
+ )
+ return res
+```
+
+## Selecting the tools, system prompts
+
+We use the `system` parameter to configure the tools exposed to the LLM. In this case, we expose the GitHub tools (`system.github_files`, `system.github_issues`, ...).
+
+```js wrap
+ {
+ system: [
+ "system",
+ "system.tools",
+ "system.explanations",
+ "system.github_actions",
+ "system.github_files",
+ "system.github_issues",
+ "system.github_pulls",
+ ],
+ }
+```
+
+The full source of this agent is defined in the [system.agent_github](/genaiscript/reference/scripts/system/#systemagent_github) system prompt.
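Stripped of GenAIScript specifics, the `defAgent`-to-`defTool` expansion described above can be sketched in plain JavaScript. The tool registry and the `runLLM` stub below are hypothetical stand-ins for illustration only, not the real GenAIScript implementation:

```js wrap
// Minimal sketch of the agent-as-tool pattern: defAgent wraps an
// inline prompt into a defTool entry with a single "query" parameter.
// The registry and runLLM are hypothetical stubs, not GenAIScript APIs.
const tools = new Map()

function defTool(id, description, parameters, fn) {
    tools.set(id, { id, description, parameters, fn })
}

function defAgent(id, description, systemPrompt, options = {}) {
    defTool(
        `agent_${id}`, // agent ids are always prefixed with "agent_"
        `Agent that can ${description}`,
        {
            query: { type: "string", description: "Query to answer" },
            required: ["query"],
        },
        async ({ query }) => runLLM(systemPrompt, query, options.tools ?? [])
    )
}

// stand-in for a real LLM call so the sketch runs standalone
async function runLLM(system, query, agentTools) {
    return `${system} [tools: ${agentTools.join(", ")}] :: ${query}`
}

defAgent(
    "github",
    "query GitHub to accomplish tasks",
    "You are a helpful LLM agent that can query GitHub.",
    { tools: ["github_actions"] }
)
```

The host LLM only ever sees the flat `agent_github` tool with its `query` parameter; the agent's own tool list stays inside the nested prompt.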
diff --git a/packages/core/bundleprompts.js b/packages/core/bundleprompts.js
index 2629b19365..6b0e4475d4 100644
--- a/packages/core/bundleprompts.js
+++ b/packages/core/bundleprompts.js
@@ -265,7 +265,7 @@ ${functions
.filter(({ kind }) => kind === "agent")
.map(
({ id, name, description }) =>
- ``
+ ``
)
.join("\n")}
`,