
Help with an error #5

Open
debug1818 opened this issue Jul 12, 2024 · 0 comments

@debug1818
Hello, authors. Have you run into this error?
Traceback (most recent call last):
File "train_our_policy.py", line 209, in
main(sys_args)
File "train_our_policy.py", line 156, in main
trainer.optimize_batch(num_batches, episode)
File "/home/user/GCRL-min-AoI/method/trainer.py", line 81, in optimize_batch
loss.backward()
File "/home/user/anaconda3/envs/mcs/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/user/anaconda3/envs/mcs/lib/python3.8/site-packages/torch/autograd/init.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [128, 61, 32]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
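This error is typically raised when a tensor that autograd needs for the backward pass (here, the output of a ReLU) is modified in place after the forward pass records it, e.g. via `+=`, `relu_`, or `nn.ReLU(inplace=True)`. The sketch below is not taken from the GCRL-min-AoI code; the shapes and the `+=` update are made up purely to reproduce the same ReluBackward0 version mismatch and to show the anomaly-detection switch mentioned in the error hint.

```python
# Minimal sketch (illustrative only, not the repository's model code).
import torch
import torch.nn.functional as F

torch.autograd.set_detect_anomaly(True)  # the switch suggested in the error hint

x = torch.randn(4, 8, requires_grad=True)
w = torch.randn(8, 8)

y = F.relu(x @ w)   # autograd saves the ReLU output for the backward pass
y += 1.0            # in-place edit bumps y's version counter (0 -> 1)

y.sum().backward()  # RuntimeError: ... output 0 of ReluBackward0, is at version 1 ...
```

With anomaly detection enabled, PyTorch additionally prints the forward-pass stack trace of the operation that produced the modified tensor, which pinpoints the offending line in trainer.py or the model. The usual fixes are to rewrite the in-place update out of place (`y = y + 1.0` instead of `y += 1.0`) or to pass `inplace=False` to any `nn.ReLU(inplace=True)` layers whose output is reused.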
