Hi, it's my honor to read your paper. I'm very interested in your work.
However, I found that the knowledge attention mask described in the paper has not been implemented in the code. All I could find is the following stub:

```python
def prepare_attention_matrix(self, kg, weights):
    attention_mat = np.array((5, 5))
    return attention_mat
```

I don't know whether this is an oversight on my part or something else. I hope to receive your reply. Thanks.
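As an aside, `np.array((5, 5))` builds the 1-D array `[5, 5]`, not a 5×5 matrix, so the stub does not even return a matrix of the right shape. For reference, here is a minimal sketch of what such a function *might* look like, purely as an illustration: it assumes `kg` is a list of `(head, tail)` entity-index pairs and `weights` a matching list of edge weights, which is an assumption on my part, not the paper's actual interface.

```python
import numpy as np

def prepare_attention_matrix(kg, weights, num_entities=5):
    # Hypothetical sketch (not the authors' implementation):
    # build a weighted attention mask from KG edges.
    # Start from the identity so each entity attends to itself.
    attention_mat = np.eye(num_entities)
    # For each (head, tail) edge, allow attention between the two
    # entities, scaled by the corresponding edge weight.
    for (head, tail), w in zip(kg, weights):
        attention_mat[head, tail] = w
        attention_mat[tail, head] = w
    return attention_mat

# Example: a tiny KG with two weighted edges among 5 entities.
kg = [(0, 1), (2, 3)]
weights = [0.8, 0.5]
mask = prepare_attention_matrix(kg, weights)
print(mask.shape)  # (5, 5)
```

This at least produces a proper `(num_entities, num_entities)` mask, unlike the placeholder in the repo.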