💡 Support Local Embeddings #347
Comments
This feature is blocking fully air-gapped, offline Auto-GPT.
Related issue: #273
+1
This issue was closed automatically because it has been stale for 10 days with no activity.
github-project-automation bot moved this from Up for consideration to Done in AutoGPT Roadmap on Sep 17, 2023.
Summary 💡
Local models (LLaMA and its finetunes) now work in a fork of Auto-GPT, including with Pinecone embeddings. See #25 (comment).
Local models and embeddings offer better privacy, lower costs, and enable new uses, like Auto-GPT experiments in private/air-gapped networks. To get these benefits, we should add local (offline) embeddings storage and recall to Auto-GPT.
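As a rough illustration of what local (offline) embedding storage and recall could look like, here is a minimal sketch using the sentence-transformers package as one possible on-device embedding backend, with brute-force NumPy cosine search. The model name, function names, and in-memory store are assumptions for illustration, not Auto-GPT's actual memory interface:

```python
# Minimal sketch of local (offline) embedding + recall.
# Assumes the sentence-transformers package as one possible local backend;
# names and structure are illustrative, not Auto-GPT's memory API.
import numpy as np
from sentence_transformers import SentenceTransformer

# Runs fully on-device after the initial model download.
model = SentenceTransformer("all-MiniLM-L6-v2")

memory_texts: list[str] = []
memory_vecs: list[np.ndarray] = []

def remember(text: str) -> None:
    """Embed a snippet locally and keep it in an in-memory store."""
    vec = model.encode(text, normalize_embeddings=True)
    memory_texts.append(text)
    memory_vecs.append(vec)

def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored snippets most similar to the query."""
    if not memory_vecs:
        return []
    q = model.encode(query, normalize_embeddings=True)
    # Vectors are normalized, so dot product equals cosine similarity.
    sims = np.stack(memory_vecs) @ q
    top = np.argsort(sims)[::-1][:k]
    return [memory_texts[i] for i in top]
```

This keeps everything in RAM; a real implementation would persist the vectors to disk, which is where a format like zarr (below) comes in.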
Examples 🌈
An extension of ooba's text-generation-webui, wawawario2/long_term_memory, has implemented this using zarr and NumPy. See wawawario2/long_term_memory#how-it-works-behind-the-scenes.
Although the Auto-GPT fork uses ooba's webui API for local models, the long_term_memory project is tightly coupled to ooba's UI, so it is mentioned only as a reference; we would need to build a similar mechanism inside Auto-GPT.
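To give a rough sense of the zarr + NumPy approach referenced above, here is a hypothetical sketch of an on-disk embedding store with brute-force similarity search. The file name, array layout, and embedding dimension are assumptions; the long_term_memory project's actual schema may differ:

```python
# Hypothetical sketch of a zarr-backed embedding store, loosely following
# the zarr + NumPy approach long_term_memory describes. File layout, array
# names, and dimensions here are assumptions, not that project's schema.
import numpy as np
import zarr

DIM = 384  # embedding dimension; depends on the local model used

store = zarr.open("local_memory.zarr", mode="a")
if "embeddings" not in store:
    # Resizable array of embedding vectors, persisted on disk.
    store.create_dataset("embeddings", shape=(0, DIM), chunks=(1024, DIM), dtype="f4")

def append_embedding(vec: np.ndarray) -> None:
    """Persist one embedding vector to the on-disk zarr array."""
    store["embeddings"].append(vec.astype("f4").reshape(1, DIM))

def nearest(query: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k most similar stored vectors (assumes normalized vectors)."""
    emb = store["embeddings"][:]  # load into NumPy for brute-force search
    sims = emb @ query.astype("f4")
    return np.argsort(sims)[::-1][:k]
```

Brute-force search over a NumPy array is simple and adequate for the memory sizes a single agent run produces; an approximate-nearest-neighbor index could be swapped in later if stores grow large.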
Motivation 🔦
By adding local embedding storage and recall to Auto-GPT, users gain more control and flexibility, along with benefits like privacy, cost savings, and accessibility.