In your paper, the deleterious score is defined as a difference in log-likelihoods. In your code, however, the implementation appears to use a difference of raw logits, which do not directly represent (log-)probabilities.
This does not change the ranking of scores, so ranking-based metrics such as AUROC are unaffected, but I did notice a discrepancy in the peak values of the deleterious score, which is visible in the updated version of Extended Fig. 4 in your paper. This is the main source of my confusion: since the score is not an actual log-probability, how should its specific definition and magnitude be interpreted in practical contexts?
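To make concrete what I mean by the two definitions, here is a minimal, self-contained sketch (plain NumPy, made-up numbers, and a hypothetical two-forward-pass setup that may not match how this repository actually computes the score). It only illustrates that a raw logit difference and a log-likelihood difference need not agree once the two alleles are scored under different softmax normalizers:

```python
import numpy as np

def log_softmax(x):
    # Numerically stable log-softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

# Hypothetical per-nucleotide logits at the variant position, taken from two
# separate forward passes (reference sequence vs. alternate sequence).
# All numbers are made up for illustration.
logits_ref_pass = np.array([2.1, -0.3, 0.7, 1.5])
logits_alt_pass = np.array([1.8,  0.1, 0.4, 2.2])
ref_idx, alt_idx = 0, 3  # hypothetical reference and alternate alleles

# Definition 1: difference of raw logits.
score_logits = logits_alt_pass[alt_idx] - logits_ref_pass[ref_idx]

# Definition 2: difference of log-likelihoods (log-softmax probabilities).
score_loglik = (log_softmax(logits_alt_pass)[alt_idx]
                - log_softmax(logits_ref_pass)[ref_idx])

# The two differ by the difference of the log-sum-exp normalizers, so the
# absolute (peak) values can change even when variant rankings stay similar.
print(score_logits, score_loglik)
```

If instead both alleles are read from a single output distribution at the same position, the log-sum-exp normalizer cancels and the two definitions coincide exactly, so my question mainly concerns the case where they do not.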
Thank you in advance for your clarification.