XML tag using def #921

Merged 22 commits on Dec 6, 2024
Changes from 8 commits
70dd579
feat: add fenceFormat for flexible code formatting ✨
pelikhan Dec 6, 2024
d9a913f
refactor: ♻️ unify FenceFormat handling and improve comments
pelikhan Dec 6, 2024
db93323
refactor: ♻️ improve fence format handling and node logic
pelikhan Dec 6, 2024
aac977a
chore: 🔄 update promptfoo version to 0.100.3
pelikhan Dec 6, 2024
afaecc6
refactor: update definition ranges in defrange.genai.mjs ✨
pelikhan Dec 6, 2024
1e32918
refactor: update model references to vision 👁️‍🗨️
pelikhan Dec 6, 2024
bde92c1
feat: ➕ Add param 'n: 5' to importTemplate call
pelikhan Dec 6, 2024
95c9575
refactor: update file handling and improve scripts structure 🔧
pelikhan Dec 6, 2024
0e61c60
more docs
pelikhan Dec 6, 2024
9215e93
refactor: update default format and model handling ✨
pelikhan Dec 6, 2024
d0058ec
docs: 📝 update default fence format to XML
pelikhan Dec 6, 2024
b335dba
more docs on structured outputs
pelikhan Dec 6, 2024
097ce41
moving system.promts to .mjs
pelikhan Dec 6, 2024
a51e069
refactor: update function return type 🚀
pelikhan Dec 6, 2024
ea7ec95
fix: update regex and add fence format script 📜
pelikhan Dec 6, 2024
5313935
do not automatically add diagnostics in prompt
pelikhan Dec 6, 2024
cd70acd
ignore generate test files
pelikhan Dec 6, 2024
aa1890a
feat: 🔄 Update import template args handling
pelikhan Dec 6, 2024
05f2222
ignore test files
pelikhan Dec 6, 2024
0a6876a
feat: ✨ add system.files to script configuration
pelikhan Dec 6, 2024
985453d
perf: 🔄 add retry option to fetch requests
pelikhan Dec 6, 2024
c9b79b3
feat: ✨ add fence format option to CLI and docs
pelikhan Dec 6, 2024
2 changes: 1 addition & 1 deletion docs/src/content/docs/reference/cli/commands.md
@@ -94,10 +94,10 @@
-o, --out <folder> output folder
-rmo, --remove-out remove output folder if it exists
--cli <string> override path to the cli
-td, --test-delay <string> delay between tests in seconds

[GitHub Actions / build warning on line 97 of docs/src/content/docs/reference/cli/commands.md: The version number in the `--promptfoo-version` option has been updated from 0.97.0 to 0.100.3. Ensure that this change is intentional and that all users are aware of the new version. — marked as resolved by pelikhan]
--cache enable LLM result cache
-v, --verbose verbose output
-  -pv, --promptfoo-version [version]  promptfoo version, default is 0.97.0
+  -pv, --promptfoo-version [version]  promptfoo version, default is 0.100.3
-os, --out-summary <file> append output summary in file
-g, --groups <groups...> groups to include or exclude. Use :!
prefix to exclude
2 changes: 1 addition & 1 deletion packages/cli/package.json
@@ -67,7 +67,7 @@
"node": ">=20.0.0"
},
"peerDependencies": {
-    "promptfoo": "0.97.0"
+    "promptfoo": "0.100.3"
},
"devDependencies": {
"@types/diff": "^6.0.0",
29 changes: 15 additions & 14 deletions packages/cli/src/run.ts
@@ -250,24 +250,25 @@ export async function runScript(
if (GENAI_ANY_REGEX.test(scriptId)) toolFiles.push(scriptId)

for (const arg of files) {
if (HTTPS_REGEX.test(arg)) {
resolvedFiles.add(arg)
continue
}
const stats = await host.statFile(arg)
if (!stats)
return fail(`file not found: ${arg}`, FILES_NOT_FOUND_ERROR_CODE)
if (stats.type !== "file") continue
if (HTTPS_REGEX.test(arg)) resolvedFiles.add(arg)
else {
const ffs = await host.findFiles(arg, {
applyGitIgnore: excludeGitIgnore,
})
if (!ffs?.length) {
return fail(
`no files matching ${arg} under ${process.cwd()}`,
FILES_NOT_FOUND_ERROR_CODE
)
}
for (const file of ffs) {
resolvedFiles.add(filePathOrUrlToWorkspaceFile(file))
}
const ffs = await host.findFiles(arg, {
applyGitIgnore: excludeGitIgnore,
})
if (!ffs?.length) {
return fail(
`no files matching ${arg} under ${process.cwd()}`,
FILES_NOT_FOUND_ERROR_CODE
)
}
for (const file of ffs) {
resolvedFiles.add(filePathOrUrlToWorkspaceFile(file))
}
}

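The run.ts refactor flattens the argument-resolution control flow: URL arguments short-circuit with `continue`, and everything else is expanded through the host's file finder. A minimal sketch of the resulting pattern, with `findFiles` injected as a stand-in for the real GenAIScript host API:

```typescript
// Sketch of the refactored argument resolution in runScript.
// HTTPS_REGEX and the injected `findFiles` are simplified stand-ins,
// not the real GenAIScript internals.
const HTTPS_REGEX = /^https:\/\//i

async function resolveArgs(
    args: string[],
    findFiles: (glob: string) => Promise<string[]>
): Promise<Set<string>> {
    const resolved = new Set<string>()
    for (const arg of args) {
        // remote files pass through untouched and skip glob expansion
        if (HTTPS_REGEX.test(arg)) {
            resolved.add(arg)
            continue
        }
        const matches = await findFiles(arg)
        if (!matches.length) throw new Error(`no files matching ${arg}`)
        for (const f of matches) resolved.add(f)
    }
    return resolved
}
```

The early `continue` is what removes the old `if/else` nesting and lets the glob branch sit at the top level of the loop body.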
4 changes: 3 additions & 1 deletion packages/core/src/chat.ts
@@ -663,6 +663,7 @@ async function processChatMessage(
const { errors, messages: participantMessages } =
await renderPromptNode(options.model, node, {
flexTokens: options.flexTokens,
+        fenceFormat: options.fenceFormat,
trace,
})
if (participantMessages?.length) {
@@ -737,7 +738,8 @@ export function mergeGenerationOptions(
options?.model ??
runtimeHost.modelAliases.large.model,
temperature:
-        runOptions?.temperature ?? runtimeHost.modelAliases.large.temperature,
+        runOptions?.temperature ??
+        runtimeHost.modelAliases.large.temperature,
embeddingsModel:
runOptions?.embeddingsModel ??
options?.embeddingsModel ??
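The reflowed `temperature` expression is one link in a fallback chain built on `??`: the run-level value wins, then the script-level value, then the host's model-alias default. A small sketch of the pattern (names simplified, not the actual GenAIScript types):

```typescript
// `??` only falls through on null/undefined, so an explicit 0 survives —
// which matters for temperature, where 0 is a meaningful setting.
function mergeTemperature(
    run: number | undefined,
    script: number | undefined,
    aliasDefault: number
): number {
    return run ?? script ?? aliasDefault
}
```

This is why the codebase prefers `??` over `||` here: `0 || aliasDefault` would silently discard a deliberate zero temperature.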
1 change: 1 addition & 0 deletions packages/core/src/constants.ts
@@ -54,6 +54,7 @@ export const HIGHLIGHT_LENGTH = 4000
export const SMALL_MODEL_ID = "small"
export const LARGE_MODEL_ID = "large"
export const VISION_MODEL_ID = "vision"
+export const DEFAULT_FENCE_FORMAT = "markdown"
export const DEFAULT_MODEL = "openai:gpt-4o"
export const DEFAULT_MODEL_CANDIDATES = [
"azure:gpt-4o",
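`DEFAULT_FENCE_FORMAT` backs the new `fenceFormat` option threaded through the renderer in this PR. Roughly, the format decides whether a file dropped into a prompt is wrapped in markdown backtick fences or in XML tags. The sketch below is illustrative only — the exact output shape GenAIScript produces may differ:

```typescript
type FenceFormat = "markdown" | "xml"
const DEFAULT_FENCE_FORMAT: FenceFormat = "markdown"

// Hypothetical rendering of a file into a prompt under each format.
function fenceFile(
    filename: string,
    content: string,
    format: FenceFormat = DEFAULT_FENCE_FORMAT
): string {
    if (format === "xml")
        return `<file filename="${filename}">\n${content}\n</file>`
    return `${filename}:\n\`\`\`\n${content}\n\`\`\``
}
```

XML-style fencing avoids the classic failure mode of markdown fences: file content that itself contains triple backticks breaks the fence boundary, while an XML wrapper tolerates it.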
2 changes: 1 addition & 1 deletion packages/core/src/encoders.test.ts
@@ -58,7 +58,7 @@ For example, to denote a heading, you add a number sign before it (e.g., # Headi
chunkSize: 128,
chunkOverlap: 16,
model: "gpt-4o",
-                lineNumbers: true
+                lineNumbers: true,
}
)
console.log(chunks)
10 changes: 7 additions & 3 deletions packages/core/src/expander.ts
@@ -41,8 +41,9 @@ export async function callExpander(
options: GenerationOptions
) {
assert(!!options.model)
-    const { provider, model } = parseModelIdentifier(r.model ?? options.model)
-    const ctx = await createPromptContext(prj, vars, trace, options, model)
+    const modelId = r.model ?? options.model
+    const { provider } = parseModelIdentifier(modelId)
+    const ctx = await createPromptContext(prj, vars, trace, options, modelId)

let status: GenerationStatus = undefined
let statusText: string = undefined
@@ -90,8 +91,9 @@ export async function callExpander(
fileOutputs: fos,
prediction: pred,
disposables: mcps,
-    } = await renderPromptNode(model, node, {
+    } = await renderPromptNode(modelId, node, {
         flexTokens: options.flexTokens,
+        fenceFormat: options.fenceFormat,
trace,
})
messages = msgs
@@ -213,6 +215,7 @@ export async function expandTemplate(
normalizeInt(env.vars["flexTokens"]) ??
normalizeInt(env.vars["flex_tokens"]) ??
template.flexTokens
+    const fenceFormat = options.fenceFormat ?? template.fenceFormat
let seed = options.seed ?? normalizeInt(env.vars["seed"]) ?? template.seed
if (seed !== undefined) seed = seed >> 0
let logprobs = options.logprobs || template.logprobs
@@ -237,6 +240,7 @@
topP,
temperature,
lineNumbers,
+        fenceFormat,
})

const { status, statusText, messages } = prompt
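The expander change passes the full model identifier (`provider:model`) through to `createPromptContext` and `renderPromptNode` instead of only the bare model name extracted by `parseModelIdentifier`. A sketch of the distinction, with a simplified stand-in for the real parser:

```typescript
// Simplified stand-in for GenAIScript's parseModelIdentifier; the real
// implementation may handle more forms (tags, aliases, etc.).
function parseModelIdentifier(id: string): { provider: string; model: string } {
    const i = id.indexOf(":")
    return i < 0
        ? { provider: "", model: id }
        : { provider: id.slice(0, i), model: id.slice(i + 1) }
}

// Before the fix, downstream calls received only `model` ("gpt-4o");
// after it, they receive the full `modelId` ("openai:gpt-4o").
const modelId = "openai:gpt-4o"
const { provider, model } = parseModelIdentifier(modelId)
```

Keeping the provider prefix matters because two providers can expose a model under the same short name, so the bare name alone is ambiguous when resolving the prompt context.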