Releases: svilupp/PromptingTools.jl
PromptingTools v0.4.0
Changes
- Improved AICode parsing and error handling (eg, capture more REPL prompts, detect parsing errors earlier, parse more code fence types), including the option to remove unsafe code (eg, `Pkg.add("SomePkg")`) with `AICode(msg; skip_unsafe=true, verbose=true)`
- Added new prompt templates: JuliaRecapTask, JuliaRecapCoTTask, JuliaExpertTestCode and updated JuliaExpertCoTTask to be more robust against early stopping for smaller OSS models
- Added support for the MistralAI API via `MistralOpenAISchema()`. All their standard models have been registered, so you should be able to just use `model="mistral-tiny"` in your `aigenerate` calls without any further changes. Remember to either provide `api_kwargs.api_key` or ensure you have the ENV variable `MISTRALAI_API_KEY` set.
- Added support for any OpenAI-compatible API via `schema=CustomOpenAISchema()`. All you have to do is provide your `api_key` and `url` (base URL of the API) in the `api_kwargs` keyword argument. This option is useful if you use Perplexity.ai, Fireworks.ai, or any other similar service.
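The two provider options above can be sketched as follows. This is a minimal sketch based only on the keywords named in these notes (`model`, `api_kwargs.api_key`, `api_kwargs.url`); the base URL and model names are illustrative, and the calls require valid API keys and network access:

```julia
using PromptingTools

# MistralAI: standard models are pre-registered, so the model alias is enough.
# Requires ENV["MISTRALAI_API_KEY"] (or pass api_kwargs = (; api_key = "..."))
msg = aigenerate("Say hi!"; model = "mistral-tiny")

# Any OpenAI-compatible API: pass the schema explicitly and supply the
# base URL and key via api_kwargs (URL shown is illustrative)
msg = aigenerate(PromptingTools.CustomOpenAISchema(), "Say hi!";
    model = "my-hosted-model",
    api_kwargs = (; api_key = "...", url = "https://api.example.com/v1"))
```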
PromptingTools v0.3.0
Changes:
Added
- Introduced a set of utilities for working with generated Julia code (eg, extract code-fenced Julia code with `PromptingTools.extract_code_blocks`) or simply apply `AICode` to the AI messages. `AICode` tries to extract, parse, and eval Julia code; if it fails, both stdout and errors are captured. It is useful for generating Julia code and, in the future, for creating self-healing code agents
- Introduced the ability to have multi-turn conversations. Set the keyword argument `return_all=true` and the `ai*` functions will return the whole conversation, not just the last message. To continue a previous conversation, provide it via the keyword argument `conversation`
- Introduced the schema `NoSchema`, which does not change the message format; it merely replaces the placeholders with user-provided variables. It serves as the first pass of the schema pipeline and allows more code reuse across schemas
- Support for project-based and global user preferences with Preferences.jl. See the `?PREFERENCES` docstring for more information. It allows you to persist your configuration and model aliases across sessions and projects (eg, if you would like to default to Ollama models instead of OpenAI's)
- Refactored `MODEL_REGISTRY` around the `ModelSpec` struct, so you can record the name, schema(!), and token cost of new models in a single place. The biggest benefit is that your `ai*` calls will now automatically look up the right model schema, eg, no need to define the schema explicitly for your Ollama models! See `?ModelSpec` for more information and `?register_model!` for an example of how to register a new model
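The multi-turn and `AICode` workflows described above might look like the following sketch. It uses only the keywords named in these notes (`return_all`, `conversation`, `skip_unsafe`); the prompts are illustrative, and the `aigenerate` calls require an API key and network access:

```julia
using PromptingTools

# Multi-turn: return_all=true returns the whole conversation, not just
# the last message; pass it back via `conversation` to continue it
conv = aigenerate("What is 1 + 1?"; return_all = true)
conv = aigenerate("Now multiply the result by 10."; conversation = conv,
    return_all = true)

# AICode: extract, parse, and eval Julia code from an AI message;
# on failure, stdout and errors are captured rather than thrown
msg  = aigenerate("Write a Julia function `addone(x)` that returns x + 1.")
code = AICode(msg; skip_unsafe = true)  # also strips eg Pkg.add(...) calls
```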
Fixed
- Changed the type of the global `PROMPT_SCHEMA::AbstractPromptSchema` for an easier switch to local models as the default option
Breaking Changes
- The `API_KEY` global variable has been renamed to `OPENAI_API_KEY` to align with the name of the environment variable and preferences
Merged pull requests:
- Use [!TIP] markdown for pro tips (#14) (@caleb-allen)
- Up version minor (#15) (@svilupp)
- Setup CodeCov (#16) (@svilupp)
- update registration + ollama health check (#17) (@svilupp)
- Change PROMPT SCHEMA to an Abstract Type (#18) (@svilupp)
- Add Coding Utils (#19) (@svilupp)
- Return full conversation (#20) (@svilupp)
- extend serialization support to DataMessages (#21) (@svilupp)
- remove julia prompt from code blocks (#22) (@svilupp)
- Change AICode Safety Error to be a Parsing Error (#23) (@svilupp)
- Parse nested code blocks (#24) (@svilupp)
- Preferences.jl integration + new model registry (#25) (@svilupp)
- Tag version v0.3.0 (#26) (@svilupp)
- Update changelog (#27) (@svilupp)
PromptingTools v0.2.0
Added
- Add support for prompt templates with the `AITemplate` struct. Search for suitable templates with `aitemplates("query string")` and then simply use them with `aigenerate(AITemplate(:TemplateABC); variableX = "some value") -> AIMessage`, or use a dispatch on the template name as a `Symbol`, eg, `aigenerate(:TemplateABC; variableX = "some value") -> AIMessage`. Templates are saved as JSON files in the folder `templates/`. If you add new templates, you can reload them with `load_templates!()` (notice the exclamation mark to override the existing `TEMPLATE_STORE`).
- Add the `aiextract` function to extract structured information from text quickly and easily. See `?aiextract` for more information.
- Add `aiscan` for image scanning (ie, image comprehension tasks). You can transcribe screenshots or reason over images as if they were text. Images can be provided either as a local file (`image_path`) or as a URL (`image_url`). See `?aiscan` for more information.
- Add support for Ollama.ai's local models. Only the `aigenerate` and `aiembed` functions are supported at the moment.
- Add a few non-coding templates, eg, verbatim analysis (see `aitemplates("survey")`) and meeting summarization (see `aitemplates("meeting")`), and supporting utilities (non-exported): `split_by_length` and `replace_words` to make it easy to work with smaller open source models.
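The template workflow above can be sketched as follows. Note that `:TemplateABC` and `variableX` are the placeholder names used in these notes, not a shipped template, and the `aigenerate` calls require an API key and network access:

```julia
using PromptingTools

# Search for templates whose name or description matches a query
tmps = aitemplates("query string")

# Dispatch on the template name as a Symbol (placeholder name shown)
msg = aigenerate(:TemplateABC; variableX = "some value")

# Equivalent explicit form
msg = aigenerate(AITemplate(:TemplateABC); variableX = "some value")

# After adding new JSON templates under templates/, reload them
# (the ! overrides the existing TEMPLATE_STORE)
load_templates!()
```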
Merged pull requests:
- update version (#1) (@svilupp)
- Add Template Functionality (#2) (@svilupp)
- Add Extraction Functionality (#3) (@svilupp)
- Add more prompt templates (#4) (@svilupp)
- Update tests to account for new templates (#5) (@svilupp)
- Add aiscan (image comprehension) (#6) (@svilupp)
- Remove duplicated docstring in the function call signature (#7) (@svilupp)
- Add ollama support (#8) (@svilupp)
- Create docs from the README file (#9) (@svilupp)
- Add more templates (#10) (@svilupp)
- tag020 (#11) (@svilupp)
- Fail gracefully without api key (#12) (@svilupp)
v0.1.0
Full Changelog: https://github.com/svilupp/PromptingTools.jl/commits/v0.1.0
- Initial release