This repository has been archived by the owner on Mar 1, 2024. It is now read-only.
The paper reports that an Intel Xeon CPU E5-2698 v4 @ 2.20 GHz with 512 GB of memory was used for time profiling, and that "On the WikilinksNED Unseen-Mentions test dataset which contains 10K queries, it takes 9.2 ms on average to return top 100 candidates per query in batch mode".

Does that mean inference was done on CPU only, with no GPU (in the 9.2 ms case)? Also, could you please share the parameter values used for this measurement, such as `max_seq_length`, `max_cand_length`, `max_context_length`, and `eval_batch_size`?
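For context, a per-query figure like 9.2 ms is typically obtained by dividing total batch-mode wall time by the number of queries. A minimal sketch of that measurement (the function and argument names here are hypothetical stand-ins, not part of the BLINK codebase):

```python
import time

def average_latency_ms(queries, batch_size, retrieve_fn):
    """Run retrieval in batches and return the average wall time per query in ms.

    retrieve_fn is assumed to take a list of queries and return the
    top candidates for each one (e.g. top 100 per query).
    """
    start = time.perf_counter()
    for i in range(0, len(queries), batch_size):
        retrieve_fn(queries[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return elapsed / len(queries) * 1000.0  # ms per query, averaged over the set

# Example with a dummy retriever standing in for the bi-encoder:
dummy_retrieve = lambda batch: [["candidate"] * 100 for _ in batch]
ms_per_query = average_latency_ms(list(range(10_000)), 64, dummy_retrieve)
```

Note that with this kind of averaging, `eval_batch_size` directly affects the reported per-query time, which is why knowing its value matters when comparing numbers.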