
Security Policy

Using AiBrow securely

Untrusted inputs

Some models accept various input formats (text, images, audio, etc.). The libraries that convert these inputs are not all equally hardened, so it is crucial to isolate the model and carefully pre-process inputs to mitigate script injection risks.

For maximum security when handling untrusted inputs, you may need to employ the following:

  • Sandboxing: Isolate the environment where the inference happens.
  • Pre-analysis: Check how the model performs by default when exposed to prompt injection (e.g. using fuzzing for prompt injection). This indicates how much effort the remaining steps will require.
  • Updates: Keep both the AiBrow extension and the native binary AI Helper updated to the latest versions.
  • Input Sanitization: Before feeding data to the model, sanitize inputs rigorously (see the sketch after this list). This involves techniques such as:
    • Validation: Enforce strict rules on allowed characters and data types.
    • Filtering: Remove potentially malicious scripts or code fragments.
    • Encoding: Convert special characters into safe representations.
    • Verification: Run tooling that identifies potential script injections (e.g. models that detect prompt injection attempts).
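
The sanitization steps above can be combined into a small pre-processing layer in front of the model. Below is a minimal TypeScript sketch of the validation, filtering, and encoding steps; the `sanitizePrompt` function, the `MAX_PROMPT_LENGTH` limit, and the specific rules are hypothetical examples, not part of the AiBrow API.

```typescript
// Hypothetical pre-processing for untrusted text before it reaches a model.
// None of these names come from AiBrow; adapt the rules to your input formats.

const MAX_PROMPT_LENGTH = 4096; // assumed limit for this example

function sanitizePrompt(raw: string): string {
  // Validation: enforce strict rules on length (and, by extension, data type).
  if (raw.length > MAX_PROMPT_LENGTH) {
    throw new Error("Prompt exceeds maximum allowed length");
  }

  // Filtering: strip control characters (except tab/newline) that could
  // smuggle hidden instructions past naive downstream parsers.
  const filtered = raw.replace(
    /[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g,
    ""
  );

  // Encoding: convert HTML-sensitive characters into safe representations so
  // model output echoed back into a page cannot become a script injection.
  return filtered
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// Usage: sanitize before handing the text to the model.
const prompt = sanitizePrompt('<script>alert(1)</script> Summarise this page');
```

Verification (e.g. running a prompt-injection detector) would be an additional step on the sanitized text; it is omitted here because it depends on the tooling you choose.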

Reporting a vulnerability

Note that none of the topics under Using AiBrow securely are considered vulnerabilities of AiBrow itself or of the LLaMA C++ library that it uses.

However, if you have discovered a security vulnerability in this project, please report it privately. Do not disclose it as a public issue. This gives us time to work with you to fix the issue before public disclosure, reducing the chance that the exploit will be used before a patch is released.

If the vulnerability is part of the AiBrow extension or native binary AI Helper, please disclose it as a private security advisory.

If the vulnerability is part of the LLaMA C++ library, please refer to the llama.cpp security policy.

Please give us at least 90 days to work on a fix before public disclosure.
