Thanks for your brilliant work!
Currently, I am trying PPLM with a discriminator on a GPU, but it still takes around 5 minutes to generate 512 tokens. Is there any way to speed up inference?
Many thanks and best regards,
Yijun
Yes, you can speed this up by decreasing the number of iterations per token. However, this may give worse results, in terms of positivity/negativity control, compared to those reported in the paper.
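The trade-off above can be sketched with a toy perturbation loop. Note that `perturb_hidden`, the quadratic "attribute loss", and the step size below are illustrative stand-ins, not the actual PPLM code: the point is only that per-token cost scales roughly linearly with the iteration count, while fewer iterations move the hidden state less toward the attribute target.

```python
import numpy as np

def perturb_hidden(hidden, grad_fn, num_iterations, stepsize=0.02):
    """Toy stand-in for PPLM's per-token perturbation: take
    num_iterations gradient steps on the hidden state to lower an
    attribute loss. Generation time grows roughly linearly with
    num_iterations, so reducing it is the main speed knob."""
    for _ in range(num_iterations):
        hidden = hidden - stepsize * grad_fn(hidden)
    return hidden

# Toy quadratic attribute loss ||h||^2, whose gradient is 2 * h.
grad_fn = lambda h: 2.0 * h

h0 = np.ones(4)
h_fast = perturb_hidden(h0, grad_fn, num_iterations=3)   # cheaper, weaker control
h_slow = perturb_hidden(h0, grad_fn, num_iterations=10)  # slower, stronger control

# More iterations drive the toy loss lower (stronger attribute control):
assert np.linalg.norm(h_slow) < np.linalg.norm(h_fast)
```

In the real setup, each extra iteration costs an additional forward/backward pass through the model per generated token, which is why cutting the iteration count from, say, 10 to 3 reduces generation time substantially at the cost of weaker attribute steering.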