
# Adding a Model to the MTEB Leaderboard

The MTEB Leaderboard is available [here](https://huggingface.co/spaces/mteb/leaderboard). To submit to it:

  1. Run the desired model on MTEB:

Either use the Python API:

```python
import mteb

# Load a model from the Hub (for a custom implementation, see
# https://github.com/embeddings-benchmark/mteb/blob/main/docs/reproducible_workflow.md)
model = mteb.get_model("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

tasks = mteb.get_tasks(...)  # get specific tasks
# or
tasks = mteb.get_benchmark("MTEB(eng, classic)")  # use a specific benchmark

evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, output_folder="results")
```
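The `...` placeholder above stands for your task selection; `get_tasks` accepts task names, among other filters. A minimal sketch (the task names here are just illustrative picks from the task registry):

```python
import mteb

# Select tasks by name; these two names are illustrative examples.
tasks = mteb.get_tasks(tasks=["Banking77Classification", "STS12"])
```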

Or use the command-line interface:

```bash
mteb run -m {model_name} -t {task_names}
```

These will save the results in a folder called `results/{model_name}/{model_revision}`.
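For example, a single-task run on the model above could look like this (the task name is an illustrative pick; any registered task works):

```bash
mteb run -m sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 -t Banking77Classification
```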

  2. Push the results to the leaderboard:

To add results to the public leaderboard, push them to the [results repository](https://github.com/embeddings-benchmark/results); they will appear on the leaderboard within a day. One possible workflow is sketched below.
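This sketch assumes you fork the results repository on GitHub and submit via pull request; the paths and branch name are placeholders, so follow the repository's README if it specifies a different layout:

```bash
# Sketch: fork https://github.com/embeddings-benchmark/results first.
git clone https://github.com/<your-username>/results.git
cd results
git checkout -b add-my-model-results

# Copy the output of step 1, keeping the {model_name}/{model_revision} layout.
cp -r ../results/{model_name}/{model_revision} results/{model_name}/

git add results/
git commit -m "Add results for {model_name}"
git push origin add-my-model-results
# Then open a pull request against embeddings-benchmark/results.
```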

  3. (Optional) Add the results to the model card:

`mteb` implements a CLI for adding results to the model card:

```bash
mteb create_meta --results_folder results/{model_name}/{model_revision} --output_path model_card.md
```

To add the content to a public model, simply copy the contents of the `model_card.md` file to the top of your model's `README.md` on the Hub. See here for an example.
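The generated file consists mainly of YAML frontmatter in the Hub's `model-index` format. A rough, purely illustrative sketch of what it can look like:

```yaml
---
tags:
- mteb
model-index:
- name: my-model              # illustrative model name
  results:
  - task:
      type: Classification
    dataset:
      type: mteb/banking77    # illustrative dataset id
      name: MTEB Banking77Classification
    metrics:
    - type: accuracy
      value: 80.5             # illustrative score
---
```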

If a README.md already exists:

```bash
mteb create_meta --results_folder results/{model_name}/{model_revision} --output_path model_card.md --from_existing your_existing_readme.md
```

Note that if you run the model on many tasks, this can lead to an excessively large README frontmatter.

  4. Wait for the leaderboard to refresh:

The leaderboard refreshes automatically once a day, so once your results are submitted you only need to wait for the next refresh. You can find the workflows for the leaderboard refresh here. If you experience issues with the leaderboard, please create an issue.

Notes:

  - We remove models with scores that cannot be reproduced, so please ensure that your model is accessible and its scores can be reproduced.

  - **Using Prompts with Sentence Transformers**

    If your model uses Sentence Transformers and requires different prompts for encoding the queries and corpus, you can take advantage of the `prompts` parameter.

    Internally, `mteb` uses the prompt named `query` for encoding the queries and `passage` as the prompt name for encoding the corpus. This is aligned with the default names used by Sentence Transformers.
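    For example, with the `prompts` parameter of `SentenceTransformer` (a minimal sketch; the model name and prompt strings are placeholders):

    ```python
    from sentence_transformers import SentenceTransformer

    # The prompt names "query" and "passage" match the names mteb looks up.
    # The model name and prompt texts below are placeholders.
    model = SentenceTransformer(
        "my-org/my-embedding-model",
        prompts={
            "query": "Represent this sentence for searching relevant passages: ",
            "passage": "",
        },
    )
    ```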

    **Adding the prompts in the model configuration (Preferred)**

    You can directly add the prompts when saving and uploading your model to the Hub. For an example, refer to this configuration file.
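    Concretely, the relevant entry in the model's `config_sentence_transformers.json` might look like this (the prompt strings are illustrative):

    ```json
    {
      "prompts": {
        "query": "Represent this sentence for searching relevant passages: ",
        "passage": ""
      }
    }
    ```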

    **Instantiating the Model with Prompts**

    If you are unable to directly add the prompts in the model configuration, you can instantiate the model using the `sentence_transformers_loader` and pass `prompts` as an argument. For more details, see the `mteb/models/bge_models.py` file.
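    A rough sketch of that pattern, assuming an import path like the one used in `bge_models.py` (exact paths and `ModelMeta` fields vary between mteb versions, so treat this as illustrative and check the file for the current form):

    ```python
    from functools import partial

    # Illustrative imports; verify them against your installed mteb version.
    from mteb.model_meta import ModelMeta
    from mteb.models.sentence_transformer_wrapper import sentence_transformers_loader

    # Illustrative metadata; every value below is a placeholder.
    my_model = ModelMeta(
        loader=partial(
            sentence_transformers_loader,
            model_name="my-org/my-embedding-model",
            revision="main",
            model_prompts={
                "query": "Represent this sentence for searching relevant passages: ",
                "passage": "",
            },
        ),
        name="my-org/my-embedding-model",
        revision="main",
        languages=["eng-Latn"],
        open_weights=True,
    )
    ```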