This repository has been archived by the owner on Mar 1, 2024. It is now read-only.

Inference time does not match the reported figure #100

Open
saswatidana opened this issue Nov 2, 2021 · 0 comments

Comments

@saswatidana

The paper reports that an Intel Xeon CPU E5-2698 v4 @ 2.20GHz with 512GB of memory was used for time profiling, and states: "On the WikilinksNED Unseen-Mentions test dataset which contains 10K queries, it takes 9.2 ms on average to return top 100 candidates per query in batch mode".

Does that mean inference was done only on the CPU, with no GPU used, in the 9.2 ms case? Also, could you please share the parameter values used for that measurement, such as "max_seq_length", "max_cand_length", "max_context_length", and "eval_batch_size"?
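
For context, a per-query figure like 9.2 ms in batch mode would plausibly cover only the scoring and top-k retrieval step over precomputed candidate embeddings, not the biencoder's context encoding. The sketch below shows one way such a number could be reproduced on CPU; the candidate count, embedding dimension, and batch size are illustrative assumptions, not values taken from the paper or the repository.

```python
import time
import torch

# Illustrative assumptions (not values from the paper or this repo):
# 1M candidate entities, 1024-dim embeddings, batches of 64 queries.
NUM_CANDIDATES = 1_000_000
EMB_DIM = 1024
BATCH_SIZE = 64
NUM_QUERIES = 10_000
TOP_K = 100

# Precomputed embeddings; random stand-ins for the real biencoder outputs.
cand_embs = torch.randn(NUM_CANDIDATES, EMB_DIM)
query_embs = torch.randn(NUM_QUERIES, EMB_DIM)

start = time.perf_counter()
for i in range(0, NUM_QUERIES, BATCH_SIZE):
    batch = query_embs[i : i + BATCH_SIZE]
    scores = batch @ cand_embs.T             # dense dot-product scoring on CPU
    _, top_idx = scores.topk(TOP_K, dim=1)   # top-100 candidate ids per query
elapsed = time.perf_counter() - start

print(f"{elapsed / NUM_QUERIES * 1000:.2f} ms per query in batch mode")
```

Knowing whether the reported timing includes context encoding, and with which sequence-length and batch-size settings, would make the comparison against this kind of measurement meaningful.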
