I re-implemented the underlying fastText code and used a gradient-ascent update when computing the gradient. Training converges fine on small datasets. On large datasets, however, updating the embedding matrix by adding lr*grad to each word vector makes the matrix blow up to NaN after a few epochs. I would like to know how the underlying embedding matrix is actually updated.
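For reference, below is a minimal sketch of the kind of skip-gram negative-sampling update used by word2vec/fastText-style trainers, written under my own assumptions rather than copied from the library: the function names (`sgns_update`, `decayed_lr`) and the lower clamp on the learning rate are illustrative, not fastText's actual API. The two points relevant to the NaN issue are that each per-example step is bounded by `lr * (label - sigmoid(score))`, and that the learning rate is decayed linearly over the whole corpus instead of staying fixed; without the decay (or with an unbounded gradient term) large-corpus training can easily diverge.

```python
import numpy as np

def sgns_update(in_vecs, out_vecs, center, context, negatives, lr):
    """One skip-gram negative-sampling step (word2vec/fastText style sketch).

    in_vecs, out_vecs : (vocab_size x dim) input and output embedding matrices
    center, context   : word indices for the current (center, context) pair
    negatives         : indices of sampled negative words
    lr                : current (already decayed) learning rate
    """
    h = in_vecs[center]
    grad_h = np.zeros_like(h)
    # Positive example has label 1, negatives have label 0.
    for idx, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        score = 1.0 / (1.0 + np.exp(-np.dot(h, out_vecs[idx])))  # sigmoid in (0, 1)
        g = lr * (label - score)        # per-step coefficient is bounded by lr
        grad_h += g * out_vecs[idx]     # accumulate gradient w.r.t. the input vector
        out_vecs[idx] += g * h          # update the output vector immediately
    in_vecs[center] += grad_h           # single accumulated update to the input vector

def decayed_lr(base_lr, tokens_seen, total_tokens):
    """Linearly decay the learning rate over the corpus (illustrative clamp at 1e-4 * base_lr)."""
    return base_lr * max(1e-4, 1.0 - tokens_seen / float(total_tokens))
```

If your re-implementation keeps the learning rate constant across epochs, or adds the raw dot-product gradient instead of the sigmoid-bounded term above, the updates grow with the corpus size and the embeddings can overflow to NaN, which matches the behavior you describe.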