# Security Policy

- [**Using AiBrow securely**](#using-aibrow-securely)
  - [Untrusted inputs](#untrusted-inputs)
- [**Reporting a vulnerability**](#reporting-a-vulnerability)

## Using AiBrow securely

### Untrusted inputs

Some models accept various input formats (text, images, audio, etc.). The libraries converting these inputs have varying security levels, so it's crucial to isolate the model and carefully pre-process inputs to mitigate script injection risks.
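
One way to approach that isolation is to keep inference and the untrusted input inside a dedicated Web Worker, so the raw input never runs in the main extension context. The sketch below is illustrative only: the `inference-worker.ts` module, the `runModel` call, and the message shape are assumptions, not part of the AiBrow API.

```typescript
// Minimal isolation sketch: confine untrusted prompts to a dedicated Web Worker.
// "inference-worker.ts" and runModel() are hypothetical, not AiBrow APIs.

// --- inference-worker.ts (hypothetical worker module) ---
// self.onmessage = async (event: MessageEvent<{ prompt: string }>) => {
//   const result = await runModel(event.data.prompt); // hypothetical inference call
//   self.postMessage({ result });
// };

// --- main context ---
const worker = new Worker(new URL("./inference-worker.ts", import.meta.url), {
  type: "module",
});

// Assumes one in-flight request at a time; only structured-cloneable data
// crosses the worker boundary, so no untrusted code runs in this context.
function promptInIsolation(prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    worker.onmessage = (event: MessageEvent<{ result: string }>) =>
      resolve(event.data.result);
    worker.onerror = (error) => reject(error);
    worker.postMessage({ prompt });
  });
}
```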

For maximum security when handling untrusted inputs, you may need to employ the following:

* Sandboxing: Isolate the environment where the inference happens.
* Pre-analysis: Check how the model performs by default when exposed to prompt injection (e.g. using [fuzzing for prompt injection](https://github.com/FonduAI/awesome-prompt-injection?tab=readme-ov-file#tools)). This will give you a sense of how much work the remaining steps will require.
* Updates: Keep both the AiBrow extension and the native binary AI Helper updated to the latest versions.
* Input Sanitization: Before feeding data to the model, sanitize inputs rigorously; a minimal sketch follows this list. This involves techniques such as:
  * Validation: Enforce strict rules on allowed characters and data types.
  * Filtering: Remove potentially malicious scripts or code fragments.
  * Encoding: Convert special characters into safe representations.
  * Verification: Run tooling that identifies potential script injections (e.g. [models that detect prompt injection attempts](https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection)).
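
As a rough sketch of the validation, filtering, and encoding steps above, assuming plain-text prompts (the length limit and patterns are illustrative assumptions, not AiBrow defaults):

```typescript
// Minimal sanitization sketch for plain-text prompts. The limit and the
// regular expressions below are illustrative assumptions, not AiBrow defaults.

const MAX_PROMPT_LENGTH = 4096; // assumed limit

function sanitizePrompt(raw: unknown): string {
  // Validation: enforce type and length before anything else.
  if (typeof raw !== "string" || raw.length === 0 || raw.length > MAX_PROMPT_LENGTH) {
    throw new Error("Prompt failed validation");
  }

  // Filtering: strip control characters and embedded <script> blocks.
  const filtered = raw
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, "")
    .replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, "");

  // Encoding: convert special characters into safe representations so the
  // prompt cannot break out if it is later interpolated into markup.
  return filtered
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Usage: sanitize before the prompt ever reaches the model.
// const safePrompt = sanitizePrompt(untrustedInput);
```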

## Reporting a vulnerability

Beware that none of the topics under [Using AiBrow securely](#using-aibrow-securely) are considered vulnerabilities of AiBrow or of the LLaMA C++ library that it uses.

However, if you have discovered a security vulnerability in this project, please report it privately. **Do not disclose it as a public issue.** This gives us time to work with you to fix the issue before public disclosure, reducing the chance that the exploit will be used before a patch is released.

If the vulnerability is part of the AiBrow extension or native binary AI Helper, please disclose it as a private [security advisory](https://github.com/axonzeta/aibrow/security/advisories/new).

If the vulnerability is part of the LLaMA C++ library, please refer to [the llama.cpp security policy](https://github.com/ggerganov/llama.cpp/SECURITY.md).

Please give us at least 90 days to work on a fix before public disclosure.