
why optimizer.zero_grad() after optimizer.step()? #15

Open
littleWangyu opened this issue Aug 19, 2021 · 0 comments

Comments

@littleWangyu

In the train epoch function:
why is optimizer.zero_grad() called after optimizer.step()?
Does it matter?
The usual order is optimizer.zero_grad() → loss.backward() → optimizer.step().
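For context, here is a minimal sketch of the two orderings being compared (the model, optimizer, and `train_epoch` helpers below are toy examples, not code from this repo). As long as the gradients are cleared at some point before the next `loss.backward()`, the two loops accumulate and apply the same gradients:

```python
import torch
import torch.nn as nn

# Toy setup for illustration only (hypothetical names, not from this repo).
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

def train_epoch(loader):
    # Ordering asked about in this issue: zero_grad() right after step().
    for x, y in loader:
        loss = loss_fn(model(x), y)
        loss.backward()        # accumulate gradients into .grad
        optimizer.step()       # update parameters using those gradients
        optimizer.zero_grad()  # clear gradients for the next iteration

def train_epoch_alt(loader):
    # The more common ordering: zero_grad() at the top of the loop.
    for x, y in loader:
        optimizer.zero_grad()  # clear any stale gradients first
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

The difference only becomes visible if something else reads or accumulates `.grad` between iterations (e.g. gradient accumulation over multiple batches), since zeroing immediately after `step()` discards the gradients earlier.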
