The Purple Llama project provides tools and models to improve LLM security. This folder contains examples to help you get started with Purple Llama tools.
| Tool/Model | Description | Get Started |
|---|---|---|
| Llama Guard | Provides guardrails on model inputs and outputs | Inference, Finetuning |
| Prompt Guard | Model that safeguards against jailbreak attempts and embedded prompt injections | Notebook |
| Code Shield | Tool that safeguards against insecure code generated by the LLM | Notebook |
The notebooks `input_output_guardrails.ipynb`, `Purple_Llama_Anyscale`, and `Purple_Llama_OctoAI` contain examples of running Meta Llama Guard on cloud-hosted endpoints.
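The guardrail pattern those notebooks demonstrate can be sketched in a few lines. This is a minimal illustration only: `moderate` below is a hypothetical stand-in for a real Llama Guard call (which in the notebooks runs against a cloud-hosted endpoint), and `echo_llm` is a placeholder for an actual LLM.

```python
# Minimal sketch of input/output guardrailing. `moderate` is a
# hypothetical stand-in for a real Llama Guard safety check.
def moderate(text: str) -> bool:
    """Hypothetical safety check; returns True if the text is safe."""
    blocked_phrases = ("steal a password",)  # placeholder policy, not Llama Guard's
    return not any(phrase in text.lower() for phrase in blocked_phrases)

def guarded_chat(prompt: str, llm) -> str:
    # Input guardrail: screen the user prompt before it reaches the LLM.
    if not moderate(prompt):
        return "Sorry, I can't help with that."
    reply = llm(prompt)
    # Output guardrail: screen the model's reply before returning it.
    if not moderate(reply):
        return "Sorry, I can't share that response."
    return reply

# Stand-in LLM for illustration only.
echo_llm = lambda p: f"Echo: {p}"
print(guarded_chat("What's the capital of France?", echo_llm))
```

In a real deployment, both checks would call the same deployed Llama Guard model, and the refusal messages would typically include the violated policy category.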