Jina reranker(turbo/tiny) being classified as embedding models #325
Comments
Resolution Steps
References: docs/docs/index.md
Damn, Greptile is pretty useless.
Does e.g. something like this work? https://huggingface.co/jinaai/jina-reranker-v1-turbo-en/discussions/10
@John42506176Linux The detail is that you need to set this in the model's config.json.
Assumed it was something simple; thanks for the quick response. I'm testing your first comment rn.
K, looks good. Thanks for the quick fix, I appreciate the quick response. I'll open a PR for the tiny model soon (need to finish testing the turbo model first), but thanks, you saved me some time :).
@michaelfeil Tiny gives the following error when making the config.json change: RuntimeError: Error(s) in loading state_dict for JinaBertForSequenceClassification:
@John42506176Linux Seems like the reranker model has 2 outputs. That is not how rerankers are trained. Rerankers usually have one and only one output class. With all respect, I don't have time to fix Jina's questionable choice for training here. The config file is ambiguous and leaves a lot of room for how to load the model.
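As an illustration only (not something posted in this thread), the sketch below shows one way to see what classification head size the published config implies. When a config specifies neither num_labels nor id2label, transformers falls back to a two-label default, which would be consistent with the shape mismatch above.

from transformers import AutoConfig

# Inspect the head size implied by the published config; trust_remote_code is
# needed because the Jina rerankers ship custom configuration/modeling code.
cfg = AutoConfig.from_pretrained("jinaai/jina-reranker-v1-tiny-en", trust_remote_code=True)
print(cfg.num_labels, cfg.id2label)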
No worries, you already saved me time by helping with turbo. Thanks for the assistance.
Hi @michaelfeil and @John42506176Linux, after adding [...], I tested it with:
And it correctly shows up as a rerank model:
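The exact command and output are not reproduced above; as an illustration, one way to check how a locally served model is classified, assuming Infinity's OpenAI-style /models listing on port 7997, is:

import requests

# List the deployed models and inspect the raw payload; look for the entry
# that advertises rerank rather than only embed.
resp = requests.get("http://localhost:7997/models", timeout=10)
resp.raise_for_status()
print(resp.json())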
@wirthual Thanks so much! This is exactly the solution for this model!
System Info
System Info:
AWS EC2 G4dn
Amazon Linux
Model: jinaai/jina-reranker-v1-tiny-en or jinaai/jina-reranker-v1-turbo-en
Hardware: Nvidia-smi
Using latest docker version
Command:
port=7997
rerank_model=jinaai/jina-reranker-v1-tiny-en
volume=$PWD/data
sudo docker run -it --gpus all \
  -v $volume:/app/.cache \
  -p $port:$port \
  michaelf34/infinity:latest \
  v2 \
  --batch-size 256 \
  --model-id $rerank_model \
  --port $port
Reproduction
Run the following command:
port=7997
rerank_model=jinaai/jina-reranker-v1-tiny-en
volume=$PWD/data
sudo docker run -it --gpus all \
  -v $volume:/app/.cache \
  -p $port:$port \
  michaelf34/infinity:latest \
  v2 \
  --batch-size 256 \
  --model-id $rerank_model \
  --port $port
Then attempt to use the /rerank endpoint with a simple body:
{
  "query": "test",
  "documents": ["test"],
  "return_documents": false,
  "model": "jinaai/jina-reranker-v1-tiny-en"
}
and you will get the following error:
{
  "error": {
    "message": "ModelNotDeployedError: model=jinaai/jina-reranker-v1-tiny-en does not support rerank. Reason: the loaded moded cannot fullyfill rerank. options are {'embed'}.",
    "type": null,
    "param": null,
    "code": 400
  }
}
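For reference, a minimal sketch (not taken from the issue itself) of sending the same body with Python's requests library, assuming the container started above is listening on localhost:7997:

import requests

payload = {
    "query": "test",
    "documents": ["test"],
    "return_documents": False,
    "model": "jinaai/jina-reranker-v1-tiny-en",
}
# With the unpatched config this reproduces the 400 ModelNotDeployedError above.
resp = requests.post("http://localhost:7997/rerank", json=payload, timeout=30)
print(resp.status_code, resp.json())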
I've tested this with other inference servers, such as Text Embeddings Inference, and the same error occurs; however, it does not occur with the plain transformers library.
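For comparison, a minimal sketch of the kind of direct transformers usage meant here; the num_labels=1 argument is an assumption based on the single-score head discussed in the comments above (passed explicitly rather than read from config.json), and the actual model-card snippet may differ:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "jinaai/jina-reranker-v1-tiny-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=1,            # one relevance score per (query, document) pair
    trust_remote_code=True,  # the Jina rerankers ship custom modeling code
)
model.eval()

# Score the same trivial pair used in the reproduction body.
inputs = tokenizer("test", "test", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)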
Expected behavior
Should be able to rerank with these models.