
Releases: svilupp/PromptingTools.jl

v0.4.0

13 Dec 22:15

PromptingTools v0.4.0

Diff since v0.3.0

Changes

  • Improved AICode parsing and error handling (eg, capture more REPL prompts, detect parsing errors earlier, parse more code fence types), including the option to remove unsafe code (eg, Pkg.add("SomePkg")) with AICode(msg; skip_unsafe=true, verbose=true)
  • Added new prompt templates: JuliaRecapTask, JuliaRecapCoTTask, JuliaExpertTestCode and updated JuliaExpertCoTTask to be more robust against early stopping for smaller OSS models
  • Added support for the MistralAI API via MistralOpenAISchema(). All their standard models have been registered, so you should be able to just use model="mistral-tiny" in your aigenerate calls without any further changes. Remember to either provide api_kwargs.api_key or ensure you have the ENV variable MISTRALAI_API_KEY set.
  • Added support for any OpenAI-compatible API via schema=CustomOpenAISchema(). All you have to do is to provide your api_key and url (base URL of the API) in the api_kwargs keyword argument. This option is useful if you use Perplexity.ai, Fireworks.ai, or any other similar services.
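The new options above can be sketched as follows. This is an illustrative sketch, not the canonical usage: the prompt text, the placeholder api_key/url values, and the msg variable are assumptions; only the function names, keyword arguments, and schema types come from the notes above.

```julia
using PromptingTools

# Extract and sanitize generated code; skip_unsafe=true drops calls like Pkg.add(...)
msg = aigenerate("Write a Julia function `add2(x)` that returns x + 2.")
code = AICode(msg; skip_unsafe = true, verbose = true)

# MistralAI: registered models work out of the box
# (requires ENV["MISTRALAI_API_KEY"] or api_kwargs.api_key)
msg_mistral = aigenerate("Say hi!"; model = "mistral-tiny")

# Any OpenAI-compatible API (eg, Perplexity.ai or Fireworks.ai):
# pass the schema explicitly and provide api_key and base url in api_kwargs
msg_custom = aigenerate(PromptingTools.CustomOpenAISchema(), "Say hi!";
    api_kwargs = (; api_key = "<your-key>", url = "<base-url-of-api>"))
```

All three calls require network access and valid credentials for the respective provider.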

Merged pull requests:

v0.3.0

10 Dec 14:30

PromptingTools v0.3.0

Diff since v0.2.0

Changes:

Added

  • Introduced a set of utilities for working with generated Julia code (eg, extract code-fenced Julia code with PromptingTools.extract_code_blocks) or simply apply AICode to the AI messages. AICode tries to extract, parse, and eval Julia code; if it fails, both stdout and errors are captured. It is useful for generating Julia code and, in the future, creating self-healing code agents
  • Introduced the ability to have multi-turn conversations. Set the keyword argument return_all=true and ai* functions will return the whole conversation, not just the last message. To continue a previous conversation, pass it via the keyword argument conversation
  • Introduced schema NoSchema that does not change the message format; it merely replaces the placeholders with user-provided variables. It serves as the first pass of the schema pipeline and allows more code reuse across schemas
  • Support for project-based and global user preferences with Preferences.jl. See ?PREFERENCES docstring for more information. It allows you to persist your configuration and model aliases across sessions and projects (eg, if you would like to default to Ollama models instead of OpenAI's)
  • Refactored MODEL_REGISTRY around the ModelSpec struct, so you can record the name, schema(!) and token cost of new models in a single place. The biggest benefit is that your ai* calls will now automatically look up the right model schema, eg, no need to define the schema explicitly for your Ollama models! See ?ModelSpec for more information and ?register_model! for an example of how to register a new model
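The pieces above can be combined roughly like this. A sketch under stated assumptions: the Ollama model name, the OllamaManagedSchema type, and the exact register_model! keyword arguments are assumptions (see ?register_model! and ?ModelSpec for the authoritative signatures); return_all, conversation, extract_code_blocks, and AICode are taken from the notes above.

```julia
using PromptingTools

# Register a local model once; ai* calls then resolve the schema automatically
PromptingTools.register_model!(; name = "openhermes2.5-mistral",
    schema = PromptingTools.OllamaManagedSchema())

# Multi-turn conversation: return_all=true returns the whole conversation...
conv = aigenerate("What is 2 + 2?"; return_all = true)
# ...which you can pass back in via the conversation keyword to continue it
conv = aigenerate("Now multiply that by 3."; conversation = conv, return_all = true)

# Extract code-fenced blocks from the last AI message, or parse + eval with AICode
blocks = PromptingTools.extract_code_blocks(last(conv).content)
code = AICode(last(conv))
```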

Fixed

  • Changed type of global PROMPT_SCHEMA::AbstractPromptSchema for an easier switch to local models as a default option

Breaking Changes

  • API_KEY global variable has been renamed to OPENAI_API_KEY to align with the name of the environment variable and preferences

Merged pull requests:

v0.2.0

29 Nov 10:58

PromptingTools v0.2.0

Diff since v0.1.0

Added

  • Add support for prompt templates with AITemplate struct. Search for suitable templates with aitemplates("query string") and then simply use them with aigenerate(AITemplate(:TemplateABC); variableX = "some value") -> AIMessage or use a dispatch on the template name as a Symbol, eg, aigenerate(:TemplateABC; variableX = "some value") -> AIMessage. Templates are saved as JSON files in the folder templates/. If you add new templates, you can reload them with load_templates!() (notice the exclamation mark to override the existing TEMPLATE_STORE).
  • Add aiextract function to extract structured information from text quickly and easily. See ?aiextract for more information.
  • Add aiscan for image scanning (ie, image comprehension tasks). You can transcribe screenshots or reason over images as if they were text. Images can be provided either as a local file (image_path) or as a URL (image_url). See ?aiscan for more information.
  • Add support for Ollama.ai's local models. Only aigenerate and aiembed functions are supported at the moment.
  • Add a few non-coding templates, eg, verbatim analysis (see aitemplates("survey")) and meeting summarization (see aitemplates("meeting")), and supporting utilities (non-exported): split_by_length and replace_words to make it easy to work with smaller open source models.
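A quick sketch of the v0.2.0 additions. The prompt texts, the Measurement struct, and the screenshot path are illustrative assumptions; the function names, the Symbol-dispatch pattern, and the image_path keyword come from the notes above, and the return_type keyword for aiextract is an assumption (see ?aiextract).

```julia
using PromptingTools

# Find suitable templates by query, then dispatch on the template name as a Symbol
aitemplates("meeting")                     # lists matching templates
msg = aigenerate(:TemplateABC; variableX = "some value")   # placeholder names from above

# Structured extraction into a user-defined type (see ?aiextract)
struct Measurement
    temperature::Float64
    unit::String
end
msg = aiextract("It is 30 degrees Celsius outside."; return_type = Measurement)

# Image comprehension from a local file (or use image_url for remote images)
msg = aiscan("Transcribe the text in this screenshot."; image_path = "screenshot.png")
```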

Merged pull requests:

v0.1.0

09 Nov 19:17