transducer grad compute formula #37
Comments
Where did you find that?
The README.md says:
It only says
Sorry, I got it wrong. So for the known conclusion,
No. You can find the conclusions in the colab (listed in the README.md).
Please ask the author of warp-transducer.
I just ran the Colab notebook above again and found that I can no longer reproduce the earlier results. I'm not sure what went wrong.
So does this issue still exist? Could it be a CUDA version problem? BTW, can the torch version in the Colab notebook be pinned? Last time I ran it, it failed to run through.
In the README.md, the Colab notebook that is given uses [...]. The Colab notebook I tried today was assigned [...]. If you can [...] (I will check later whether it can be reproduced on a local V100 GPU).
Yes, that can be done.
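One way to pin it is to make the first cell of the notebook install fixed versions explicitly. This is only a sketch; the version numbers below are placeholders, not the versions the original results were produced with:

```python
# Hypothetical first Colab cell: pin torch/torchaudio so later runs stay
# reproducible. 1.10.0 / 0.10.0 are placeholder versions, not a recommendation.
!pip install torch==1.10.0 torchaudio==0.10.0
```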
The formula for the gradient is below in warprnnt_numba and warp_transducer cpu:

That is not the same as in torchaudio, optimized_transducer, and warp_transducer gpu, but you said that the warp_transducer cpu grad is the same as optimized_transducer and torchaudio. How is that achieved?
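For context, the standard gradient of the transducer loss with respect to the joint-network logits, as derived in Graves (2012, "Sequence Transduction with Recurrent Neural Networks"), is

$$
\frac{\partial \mathcal{L}}{\partial z_{t,u,k}}
= \frac{\alpha(t,u)\,\hat{y}_k(t,u)}{\Pr(\mathbf{y}^*\mid\mathbf{x})}
  \bigl(\beta(t,u) - B(t,u,k)\bigr),
\qquad
B(t,u,k) =
\begin{cases}
\beta(t,u+1) & \text{if } k = y_{u+1},\\
\beta(t+1,u) & \text{if } k = \varnothing,\\
0 & \text{otherwise},
\end{cases}
$$

where $\mathcal{L} = -\ln \Pr(\mathbf{y}^*\mid\mathbf{x})$, $z_{t,u,k}$ are the logits, $\hat{y}_k(t,u) = \operatorname{softmax}_k z_{t,u,\cdot}$, $\alpha$ and $\beta$ are the forward and backward variables, $\varnothing$ is the blank symbol, and $\beta(T+1,U)=1$ with all other out-of-range $\beta$ set to $0$. Implementations may instead return the gradient with respect to the log-softmax output, or fold the $\hat{y}_k$ factor in differently; differences of that kind are presumably what the comparison above refers to.

One way to check whether two implementations really agree is to run both on identical inputs and compare the gradient tensors element-wise. Below is a minimal sketch, assuming torchaudio >= 0.10 (which provides torchaudio.functional.rnnt_loss); the second implementation is left as a placeholder, since optimized_transducer and warp_transducer expect their own input layouts:

```python
# Minimal sketch: compute the transducer-loss gradient w.r.t. the logits
# with torchaudio, then compare against another implementation run on the
# same inputs. Assumes torchaudio >= 0.10.
import torch
from torchaudio.functional import rnnt_loss

torch.manual_seed(0)
B, T, U, V = 2, 10, 5, 20            # batch, frames, target length, vocab size
blank = 0

logits = torch.randn(B, T, U + 1, V, requires_grad=True)
targets = torch.randint(1, V, (B, U), dtype=torch.int32)   # labels exclude blank
logit_lengths = torch.full((B,), T, dtype=torch.int32)
target_lengths = torch.full((B,), U, dtype=torch.int32)

loss = rnnt_loss(
    logits, targets, logit_lengths, target_lengths,
    blank=blank, reduction="sum",
)
loss.backward()
grad_torchaudio = logits.grad.clone()
print(grad_torchaudio.shape)         # torch.Size([2, 10, 6, 20])

# With a second implementation producing grad_other on identical inputs
# (after reshaping to its expected layout), the check would be e.g.:
#   torch.testing.assert_close(grad_torchaudio, grad_other, rtol=1e-4, atol=1e-4)
```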