added validation using tonic validate #141
base: main
Conversation
RetrievalPrecisionMetric()
])
run = scorer.score_responses([llm_response])
print(run.overall_scores)
from stampy_chat import logging
logging.info(run.overall_scores)
you can also send messages to discord with the logger
That would be really cool! I will add that in!
result = chain.invoke({"query": query, 'history': history}, {'callbacks': []})

# Validate results
contexts = [c['text'] for c in get_top_k_blocks(query, 5)]
you're querying the vectorstore twice for each request, which will make things a lot slower - how hard would it be to reuse the previously fetched examples?
Yes, you are right. I had a lot of challenges in this part, and it took the majority of my time on this ticket.
Unfortunately, I could not figure out how to get a hook into the LangChain Semantic Similarity Selector, and without a hook there is no way to get at the previously fetched examples. I also looked at the source code of RAGAs (another RAG validation framework) and saw that they too could not figure out how to hook into LangChain, and stopped supporting it last year.
I tested the latency and saw that querying the vectorstore again does not add much; unlike querying an LLM, there is no additional cost to it either. That is why I decided to query the vectorstore a second time. I think we can put this in the backlog and monitor whether LangChain updates their API.
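Until LangChain exposes a hook, one stopgap would be to memoize the retrieval call so the validation step reuses the blocks already fetched for the answer. This is a hypothetical sketch, not the PR's actual code: `get_top_k_blocks` is stubbed here, and `cached_top_k_texts` is an invented name.

```python
from functools import lru_cache

call_count = 0  # demo counter: shows the second lookup never hits the "vectorstore"

def get_top_k_blocks(query, k):
    """Stub standing in for the real vectorstore lookup."""
    global call_count
    call_count += 1
    return [{"text": f"block {i} for {query!r}"} for i in range(k)]

@lru_cache(maxsize=128)
def cached_top_k_texts(query, k=5):
    # lru_cache needs hashable values, so freeze the block texts into a tuple
    return tuple(block["text"] for block in get_top_k_blocks(query, k))

# The answering path and the validation path can both call this;
# only the first call actually queries the vectorstore.
contexts = list(cached_top_k_texts("what is alignment?"))
contexts_again = list(cached_top_k_texts("what is alignment?"))
```

This avoids the double query without any LangChain hook, at the cost of holding recent results in memory.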
AugmentationAccuracyMetric(),
RetrievalPrecisionMetric()
])
run = scorer.score_responses([llm_response])
Is this actually returned to the user? If not, then I'd suggest doing it after notifying the frontend that everything is done, i.e. moving the if callback (...) clause before these.
Makes sense. I will make this change. The reason I kept it this way is that I was worried a "fast" user might ask the next question before the evaluation finished running, which might overwhelm the system and spawn a lot of processes.
I don't think this will be useful for our users. We may want to track it internally first and then decide on the best way to display it to users. For instance, I don't think users will know what "context-precision" is or care about it.
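One way to address both concerns, sketched here with stand-in functions rather than the actual stampy_chat code, is to notify the frontend first and then run the scoring in a background thread, so a fast follow-up question is never blocked on validation:

```python
import threading

results = []

def notify_frontend(answer):
    # Stand-in for the real callback that tells the frontend the answer is ready
    results.append(("frontend", answer))

def score_response(answer, contexts):
    # Stand-in for the tonic_validate scoring call; metric value is made up
    results.append(("scores", {"retrieval_precision": 1.0}))

def handle_query(answer, contexts):
    # Answer the user first ...
    notify_frontend(answer)
    # ... then score in the background so the request thread is free again
    worker = threading.Thread(
        target=score_response, args=(answer, contexts), daemon=True
    )
    worker.start()
    return worker

worker = handle_query("42 is the answer", ["some retrieved context"])
worker.join()  # only the demo waits; the real handler would return immediately
```

To avoid spawning unbounded threads under load, a bounded `concurrent.futures.ThreadPoolExecutor` or a task queue would be the safer variant of the same idea.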
result = chain.invoke({"query": query, 'history': history}, {'callbacks': []})

# Validate results
contexts = [c['text'] for c in get_top_k_blocks(query, 5)]
I'm guessing this shouldn't be executed for each query. How about adding a flag to the settings object?
I think that this is required for each query, because tonic_validate checks to see if the LLM's answer is consistent with the context that was retrieved.
Yes, but the question is whether it should always do that. It makes the query slower, so for now I'd just use it for testing rather than running it every time.
I get what you mean. I will add a flag to the settings object to prevent it from running all the time.
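The flag could look something like the following. This is a minimal sketch with invented names (`Settings`, `enable_validation`, `answer_query`), not the project's actual settings object:

```python
from dataclasses import dataclass

@dataclass
class Settings:
    # Hypothetical flag: off by default so normal queries skip validation
    enable_validation: bool = False

def answer_query(query: str, settings: Settings):
    answer = f"answer to {query!r}"  # stand-in for the real chain.invoke call
    scores = None
    if settings.enable_validation:
        # Stand-in for scorer.score_responses([llm_response]).overall_scores
        scores = {"retrieval_precision": 1.0}
    return answer, scores
```

With this shape, validation (and the extra vectorstore query it needs) only runs when the flag is explicitly turned on, e.g. in tests or staging.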
You have something wrong with the pipenv dependencies, which you'll have to fix. Did you add dependencies with
0d257ce to af7bc23
Yes, that is how I added dependencies. Is that the wrong way of doing it? I am not familiar with pipenv (I have been using vanilla virtualenv all this time), so maybe I missed a step?
Screen.Recording.2024-05-20.at.10.39.56.PM.mp4
Demo video attached. For now, I am printing the output, but I can either push it to LangSmith or to a database.
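If the database route is chosen, persisting the run's overall scores could be as simple as the following sketch. The table name and metric values here are made up, and an in-memory SQLite database stands in for whatever store the project actually uses:

```python
import sqlite3

# Example scores, shaped like run.overall_scores (a metric-name -> float mapping)
overall_scores = {"answer_consistency": 0.9, "retrieval_precision": 0.8}

conn = sqlite3.connect(":memory:")  # a real deployment would use a file or server DB
conn.execute(
    "CREATE TABLE validation_scores (query TEXT, metric TEXT, score REAL)"
)
with conn:  # commits on success
    conn.executemany(
        "INSERT INTO validation_scores VALUES (?, ?, ?)",
        [("example query", metric, score) for metric, score in overall_scores.items()],
    )

rows = conn.execute(
    "SELECT metric, score FROM validation_scores ORDER BY metric"
).fetchall()
```

Keeping the query text alongside each metric makes it easy to track scores internally per question before deciding what, if anything, to surface to users.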