In RAG applications, embedding and rerank models are just as important as LLMs. Currently, the most effective options come from vendors such as Voyage, Cohere, Jina, and bge. Could you consider adding dedicated classes and independent configuration for the embedding and rerank models these vendors provide?
At the moment, we can only configure them through the OpenAI-compatible options. However, OpenAI does not offer a rerank model, so we have to call the vendor's rerank API ourselves using RestClient.
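For illustration, here is roughly what that manual call looks like today (a minimal sketch; the endpoint, model name, and payload/response fields follow Cohere's rerank API as I understand it, and should be checked against the vendor documentation):

```java
import java.util.List;
import java.util.Map;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestClient;

// Minimal sketch of calling a vendor rerank endpoint by hand with Spring's RestClient.
// The URL, model name, and payload shape are assumptions based on Cohere's public
// rerank API; adjust them for the vendor you actually use.
public class ManualReranker {

    private final RestClient restClient = RestClient.builder()
            .baseUrl("https://api.cohere.com")
            .defaultHeader("Authorization", "Bearer " + System.getenv("COHERE_API_KEY"))
            .build();

    @SuppressWarnings("unchecked")
    public Map<String, Object> rerank(String query, List<String> documents) {
        return this.restClient.post()
                .uri("/v1/rerank")
                .contentType(MediaType.APPLICATION_JSON)
                .body(Map.of(
                        "model", "rerank-english-v3.0", // placeholder model name
                        "query", query,
                        "documents", documents,
                        "top_n", 3))
                .retrieve()
                // the response carries a "results" list with index + relevance_score entries
                .body(Map.class);
    }
}
```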
@kevintsai1202 thanks for reporting this. You might be interested in #1811, where we are tracking upcoming RAG-related features, including a reranking implementation and Cohere support.
As for embedding models, you can customise the EmbeddingModel object when you configure your instance of VectorStore or VectorStoreDocumentRetriever. Would that help with your use case?
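Something along these lines, for example (just a sketch; the exact builder methods differ a bit between Spring AI versions, and myVendorEmbeddingModel is a placeholder for whatever EmbeddingModel implementation you plug in):

```java
import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.ai.rag.retrieval.search.VectorStoreDocumentRetriever;
import org.springframework.ai.vectorstore.SimpleVectorStore;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch: plugging a custom EmbeddingModel into the vector store used for retrieval.
// Any EmbeddingModel implementation (e.g. one backed by Voyage or Jina) can be wired
// in here; the store uses it for both indexing and query-time embedding.
@Configuration
class RagRetrievalConfig {

    @Bean
    VectorStore vectorStore(EmbeddingModel myVendorEmbeddingModel) {
        return SimpleVectorStore.builder(myVendorEmbeddingModel).build();
    }

    @Bean
    VectorStoreDocumentRetriever documentRetriever(VectorStore vectorStore) {
        return VectorStoreDocumentRetriever.builder()
                .vectorStore(vectorStore)
                .topK(5)
                .similarityThreshold(0.5)
                .build();
    }
}
```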
Thank you. In fact, I noticed that part 2 supports Cohere, so it would make sense to also integrate the other embedding model providers. Additionally, the rerank model is applied in the post-retrieval stage; what modules or interfaces are currently available to implement this? The sketch below shows the kind of hook I have in mind.
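Purely illustrative: RerankPostProcessor is my own name, not an existing Spring AI type, and I am assuming the Query and Document types from the modular RAG API.

```java
import java.util.List;
import org.springframework.ai.document.Document;
import org.springframework.ai.rag.Query;

// Illustrative only: a post-retrieval hook that would let a vendor rerank model
// reorder (and optionally truncate) the documents returned by the retriever before
// they are handed to the LLM. This is my own sketch, not an existing Spring AI API.
public interface RerankPostProcessor {

    List<Document> rerank(Query query, List<Document> documents);
}
```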