feat: semantic search for large repos vector store toolkit #23

Closed · wants to merge 27 commits

Conversation


@michaelneale (Collaborator) commented Aug 28, 2024

This uses sentence transformers and embeddings to build a simple vector database, enabling semantic search over large codebases to help goose navigate around.

model info:

To test:

uv run goose session start --profile vector

with the following in ~/.config/goose/profiles.yaml:

vector:
  provider: openai
  processor: gpt-4o
  accelerator: gpt-4o-mini
  moderator: truncate
  toolkits:
  - name: developer
    requires: {}
  - name: vector
    requires: {}   

Then try a query asking where to add a feature, or anything else you think needs a semantic match.

[screenshot]
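
Roughly, the idea is the following minimal sketch (this is not the PR's actual vector.py; the model name, file selection, and storage format are assumptions): embed files with sentence-transformers, persist the embedding matrix, and answer a query by cosine similarity.

# Minimal sketch only, NOT the toolkit's actual implementation; model name,
# file selection and storage format are assumptions.
from pathlib import Path

import torch
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small general-purpose model


def build_vector_db(repo_root: str, db_path: str) -> None:
    # Embed every Python file under repo_root and persist paths + embeddings.
    paths = list(Path(repo_root).rglob("*.py"))
    texts = [p.read_text(errors="ignore") for p in paths]
    embeddings = model.encode(texts, convert_to_tensor=True)
    torch.save({"paths": [str(p) for p in paths], "embeddings": embeddings}, db_path)


def query_vector_db(db_path: str, query: str, top_k: int = 5) -> list[str]:
    # Rank the stored files by cosine similarity to the query.
    data = torch.load(db_path)  # see the weights_only discussion later in this thread
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, data["embeddings"])[0]
    best = scores.topk(min(top_k, len(data["paths"]))).indices.tolist()
    return [data["paths"][i] for i in best]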

@michaelneale changed the title from "Vector store" to "semantic search for large repos: vector store" on Aug 28, 2024
@michaelneale changed the title from "semantic search for large repos: vector store" to "semantic search for large repos: vector store toolkit" on Aug 29, 2024
@michaelneale marked this pull request as ready for review on August 29, 2024 21:36

@lifeizhou-ap (Collaborator) commented Sep 2, 2024

I've tried a scenario with the toolkits both with and without vector.

  • It seems the configuration with vector is more consistent and quicker at finding the relevant files (although the first time it has to build the vector store, the time is OK, not long). 👍

  • I saw the warning message below, but I guess it should be fine (since the vector store is created from code the user provides)?

goose/src/goose/toolkit/vector.py:115: FutureWarning: You are using `torch.load` with 
`weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct 
malicious pickle data which will execute arbitrary code during unpickling (See 
https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default 
value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. 
Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via 
`torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have
full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  data = torch.load(db_path)

def vector_toolkit():
    return VectorToolkit(notifier=MagicMock())

def test_query_vector_db_creates_db(temp_dir, vector_toolkit):

Review comment (Collaborator):

can use tmp_path directly instead of temp_dir.

tmp_path is the built-in fixture in pytest. https://docs.pytest.org/en/latest/how-to/tmp_path.html#tmp-path
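
For reference, a minimal version of the test using the built-in fixture (the test body here is illustrative only); pytest injects a fresh pathlib.Path per test, so no custom temp_dir fixture is needed.

import os

def test_query_vector_db_creates_db(tmp_path, vector_toolkit):
    # tmp_path: built-in pytest fixture providing a unique temporary
    # directory (a pathlib.Path) for each test invocation.
    result = vector_toolkit.query_vector_db(tmp_path.as_posix(), 'print("Hello World")')
    assert os.path.exists(vector_toolkit.get_db_path(tmp_path.as_posix()))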

from pathlib import Path


GOOSE_GLOBAL_PATH = Path("~/.config/goose").expanduser()

Review comment (Collaborator):

can import GOOSE_GLOBAL_PATH from config.py

@michaelneale (Author):

@lifeizhou-ap thanks - yes, good catch: it should only load weights, so that warning should go away.
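
For reference, the suggested change amounts to something like the following (assuming the saved payload contains only tensors and plain Python containers, which the restricted unpickler allows):

# Before: uses the default pickle path and triggers the FutureWarning.
data = torch.load(db_path)

# After: restricts unpickling to tensors and other allowlisted types.
data = torch.load(db_path, weights_only=True)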

pyproject.toml: outdated review thread (resolved)

@codefromthecrypt (Collaborator) left a comment:

Thanks for the description as it helps me understand how this works IRL

vector_toolkit.create_vector_db(temp_dir.as_posix())
query = 'print("Hello World")'
result = vector_toolkit.query_vector_db(temp_dir.as_posix(), query)
print("Query Result:", result)

Review comment (Collaborator):

excuse python noob.. do we want these prints? I guess they aren't visible by default, so it doesn't matter

Reply (Collaborator, Author):

yeah you have to run pytest in another mode to see them
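
(For anyone following along: pytest captures stdout by default, so something like the following shows the prints.)

uv run pytest -s tests/toolkit/test_vector.py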

temp_db_path = vector_toolkit.get_db_path(temp_dir.as_posix())
assert os.path.exists(temp_db_path)
assert os.path.getsize(temp_db_path) > 0
assert 'No embeddings available to query against' in result or '\n' in result

Review comment (Collaborator):

I suppose in the future, we could make an integration test with ollama for this one, or possibly an in-memory embeddings lib?

Reply (Collaborator, Author):

yeah - something scaled down and deterministic ideally

@michaelneale (Author):

@lifeizhou-ap do you mind giving this a try again and seeing if it is as good as before for you?

@baxen (Collaborator) commented Sep 3, 2024

Very excited to try this out!

To match the rest of how goose works, I think it makes sense if we delegate the embedding to the provider. That's a bigger refactor, but it avoids installing heavy dependencies with goose out of the box (torch, locally downloading a model). It might drive higher performance too, but we would need to test that. What do you think?
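
A rough sketch of what delegating embeddings to the provider could look like; the EmbeddingProvider protocol and the embeddings method here are hypothetical, not an existing goose/exchange API:

# Hypothetical sketch only: goose providers do not expose this API today.
from typing import Protocol


class EmbeddingProvider(Protocol):
    def embeddings(self, texts: list[str]) -> list[list[float]]:
        """Return one embedding vector per input text."""
        ...


class OpenAIEmbeddings:
    # Example implementation backed by the OpenAI embeddings endpoint.
    def __init__(self, client, model: str = "text-embedding-3-small"):
        self.client = client  # an openai.OpenAI() instance
        self.model = model

    def embeddings(self, texts: list[str]) -> list[list[float]]:
        response = self.client.embeddings.create(model=self.model, input=texts)
        return [item.embedding for item in response.data]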

src/goose/toolkit/vector.py: outdated review thread (resolved)

@lifeizhou-ap (Collaborator):

> @lifeizhou-ap do you mind giving this a try again and seeing if it is as good as before for you?

LGTM!

@michaelneale (Author):

@baxen do you mean each provider has its own embeddings implementation local to it? Would that gain much over having just one (as it is all local and not provider specific), or do you mean it lives in exchange alongside the providers (and they can offer their own if they want)? I'm just not sure what the benefit would be (I might be missing something), but I am sure it is doable. Wouldn't this also still bring over the dependencies, since the providers are bundled together (if in exchange)? i.e. there is no "lazy loading" of dependencies (I think?)

@codefromthecrypt (Collaborator) left a comment:

quick drive by

tests/toolkit/test_vector.py: two outdated review threads (resolved)
@michaelneale changed the title from "semantic search for large repos: vector store toolkit" to "feat: semantic search for large repos vector store toolkit" on Sep 12, 2024

@michaelneale (Author):

@baxen according to goose:

[screenshot]

So that is not small - unfortunately an optional dependency isn't really viable for a CLI?

@michaelneale (Author):

going to have a look at some lightweight options here, and failing that, I will make this optional and validate that (and likely merge it after that point).

@michaelneale (Author):

hey @baxen how does this look with optional deps now?

@ahau-square (Collaborator):

A few thoughts

  • Code embedding search seems like a promising direction to pursue.
  • We should consider and test different chunking strategies: embedding code snippets (e.g., classes/functions) rather than, or in addition to, whole code files to get more pinpointed search (see the sketch after this list).
  • It is probably worth benchmarking the embedding models against alternatives, e.g., ones trained specifically for code (https://huggingface.co/Salesforce/codet5p-110m-embedding, https://huggingface.co/bigcode/starencoder).
  • Why limit ourselves to models that can be run locally vs. using hosted models like the OpenAI embeddings API, or potentially others that Block hosts, e.g., through the Databricks model gateway?
  • Is the future idea to eventually have a vector store of code embeddings for each repo and have them updated on merge? Doing so might lend itself to a better experience of not having to wait for your embeddings to compute.
  • From a UX perspective, I don't know how useful identifying similar files on their own is, but similar files fed in as context to ChatGPT/Claude for someone to then ask questions over, or generate code from, could be very useful.
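
On the chunking point, a rough illustration of function/class-level chunking using the standard library ast module (a sketch only, not part of this PR):

# Sketch: split a Python source file into per-definition chunks so each
# embedding covers one function/class instead of the whole file.
import ast


def code_chunks(source: str) -> list[str]:
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            segment = ast.get_source_segment(source, node)
            if segment:
                chunks.append(segment)
    return chunks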

@michaelneale (Author):

@ahau-square

> From a UX perspective, I don't know how useful identifying similar files on their own is, but similar files fed in as context to ChatGPT/Claude for someone to then ask questions over, or generate code from, could be very useful.

That is exactly what this aims to do in a simple way - that is all that is needed (the toolkit isn't for end users to see, but to help goose find where to look, which is then used as context).

I think the future idea would be for embeddings to be refreshed as the code changes (but they aren't meant to be exact search, so for a relatively stable codebase it isn't a huge deal). We could certainly run it with other models and approaches - but the idea of a toolkit is that you can use it or not. I would also like to have something "batteries included" for goose - whether it is this approach or another - as I think goose as it is needs help to find the code to work on.

@michaelneale (Author):

this approach with local model(s) works quite well, but it is a hefty dependency addition to goose. Remote/server-based embeddings and search is one option (but very specific to the provider and probably more work to maintain across providers - not sure of the exact benefit yet). Another approach is to use tools like rq but with fuzzy searching, plus some pre-expansion of a question into related terms: e.g. you search for "intellisense", the accelerator model expands that to "content assist... completion" etc. (as per the user's intent), and then a more keyword-like search runs over those terms (Porter stemming would be the old way, but with accelerator models I think we can do better). It won't be as good for code-specific comprehension though, so I still like the idea of a local ephemeral embeddings/vector indexing system or service.
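
A sketch of the keyword-expansion idea; expand_query stands in for a call to the accelerator model and is hypothetical:

# Sketch of query expansion + keyword search; expand_query is a stand-in
# for an accelerator-model call, not an existing goose API.
import re
from pathlib import Path


def expand_query(question: str) -> list[str]:
    # In practice this would ask the accelerator model for related terms,
    # e.g. "intellisense" -> ["content assist", "completion", "autocomplete"].
    return [question]


def keyword_search(repo_root: str, question: str) -> list[str]:
    pattern = re.compile("|".join(re.escape(t) for t in expand_query(question)), re.IGNORECASE)
    return [
        str(path)
        for path in Path(repo_root).rglob("*.py")
        if pattern.search(path.read_text(errors="ignore"))
    ]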

@michaelneale added the "enhancement" (New feature or request) and "work-in-progress" labels on Sep 25, 2024

@michaelneale (Author):

@baxen I can't work out how optional deps work with UV (they used to work - but not there now).
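
For reference, the standard pyproject.toml mechanism for an extra looks like the following; the extra name and version pins are illustrative, and installing it depends on the uv workflow in use (for example uv pip install -e '.[vector]'):

# Illustrative only; the extra name and pins are not from this PR.
[project.optional-dependencies]
vector = [
    "sentence-transformers>=3.0",
    "torch>=2.0",
]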

@michaelneale (Author):

I am going to close this for now - but keep the branch around

Labels: enhancement (New feature or request), work-in-progress
6 participants