about tan(s) #9
Comments
The second.
Quote from the paper: But with the current example code it seems like adding tanh gives a better result. Still, both results are quite accurate. (Note: do not confuse this tanh with the tanh at the input, a.k.a. LstmState.g.) tl;dr: both with and without tanh() are possible.
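For reference, a minimal, self-contained sketch of the two forward variants being discussed. The names `s`, `o`, and the two helper functions are illustrative only and are not taken from the repository; they just mirror `self.state.s` and `self.state.o` from the snippets quoted in this thread.

```python
import numpy as np

def hidden_without_tanh(s, o):
    # simplified variant used in the example code: h = s * o
    return s * o

def hidden_with_tanh(s, o):
    # variant from the paper: h = tanh(s) * o
    # the cell state is squashed into (-1, 1) before the output gate is applied
    return np.tanh(s) * o
```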
I think the back-propagation also has to deal with this addition, probably by adding a reverse tanh calculation. But because I don't fully understand the back-propagation yet, I can't pinpoint exactly what to do. Funny that the overall loss is already better without the back-prop fix. I also tried replacing these lines in the back-propagation (located at the beginning of the function top_diff_is): I changed them into (added np.tanh around both s values): This resulted in an even better loss of 4.26917706433e-07. But I am skeptical about the correctness here. Anyway, I am only mentioning this for people who want to add the tanh as a performance improvement; I am not saying it should be added to the code. The code is simpler without the tanh, which makes it easier to understand for learning purposes.
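For completeness, here is a sketch of what the matching backward step could look like if the forward pass used h = tanh(s) * o. This is only an illustration under that assumption; the argument names mirror top_diff_h and top_diff_s from the example code's top_diff_is, but the function below is not the repository's implementation.

```python
import numpy as np

def output_grads_with_tanh(s, o, top_diff_h, top_diff_s):
    # backward step for the forward pass h = tanh(s) * o
    # dh/do = tanh(s), dh/ds = o * (1 - tanh(s)^2)
    tanh_s = np.tanh(s)
    do = tanh_s * top_diff_h
    ds = o * (1.0 - tanh_s ** 2) * top_diff_h + top_diff_s
    return ds, do
```

Note that only wrapping the existing s values in np.tanh would capture the tanh(s) factor but not the (1 - tanh(s)^2) term in ds, so the gradient would not be the exact derivative; that may be why the improved loss still feels suspicious.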
Hello. I have learned a lot from reading your commit, but I have a question here: if we add tanh, should the first be:
Hi, sorry to bother you again.
I read this line in your code: self.state.h = self.state.s * self.state.o
but when I looked in the paper, it says it may be like this:
self.state.h = np.tanh(self.state.s) * self.state.o
Would you tell me which one is right?
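For anyone mapping the code to the usual paper notation (assuming self.state.s plays the role of the cell state c_t and self.state.o the output gate o_t), the two variants correspond to:

h_t = o_t \odot \tanh(c_t)   (formulation commonly given in the literature; matches np.tanh(self.state.s) * self.state.o)
h_t = o_t \odot c_t          (simplified variant; matches self.state.h = self.state.s * self.state.o)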