Support parameter derivatives #142
Conversation
@peastman, I do not know how to connect CustomCVForce with the parameter derivative functionality in TorchForce.

```python
tforce = ot.TorchForce(model, {"useCUDAGraphs": "false"})
# Add a parameter
parameter1 = 1.0
parameter2 = 1.0
tforce.setOutputsForces(return_forces)
tforce.addGlobalParameter("parameter1", parameter1)
tforce.addGlobalParameter("parameter2", parameter2)
# Enable energy derivatives for the parameters
tforce.addEnergyParameterDerivative("parameter1")
tforce.addEnergyParameterDerivative("parameter2")
if use_cv_force:
    # Wrap TorchForce into CustomCVForce
    force = mm.CustomCVForce("force")
    force.addCollectiveVariable("force", tforce)
else:
    force = tforce
```

For the assertion

```python
assert np.allclose(
    r2,
    state.getEnergyParameterDerivatives()["parameter1"],
)
```

pytest fails with:

```
TestParameterDerivatives.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <openmm.openmm.mapstringdouble; proxy of <Swig Object of type 'std::map< std::string,double > *' at 0x7f5dc4807ea0> >, key = 'parameter1'

    def __getitem__(self, key):
>       return _openmm.mapstringdouble___getitem__(self, key)
E       IndexError: key not found
```
The CUDA and OpenCL platforms yield the above error, while the Reference platform just returns 0, which is silently incorrect.
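For reference, the `state` above is obtained from a query along these lines (a sketch; the actual fixture names in the test may differ):

```python
# Sketch: request parameter derivatives when retrieving the state.
# Context.setParameter and getState(getParameterDerivatives=True) are
# standard OpenMM API; the surrounding variables are illustrative.
context.setParameter("parameter1", parameter1)
context.setParameter("parameter2", parameter2)
state = context.getState(getEnergy=True, getParameterDerivatives=True)
derivs = state.getEnergyParameterDerivatives()
# derivs["parameter1"] raises IndexError on CUDA/OpenCL when the
# TorchForce is wrapped in CustomCVForce
```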
There is a corner case I do not know how to handle.

```python
from typing import List, Optional, Tuple

import torch as pt
from torch import Tensor


class EnergyForceWithParameters(pt.nn.Module):
    def __init__(self):
        super(EnergyForceWithParameters, self).__init__()

    def forward(
        self, positions: Tensor, parameter1: Tensor, parameter2: Tensor
    ) -> Tuple[Tensor, Tensor]:
        positions.requires_grad_(True)
        x2 = positions.pow(2).sum(dim=1)
        u_harmonic = ((parameter1 + parameter2**2) * x2).sum()
        # Computing the forces this way frees the graph and leaves out the parameter derivatives
        grad_outputs: List[Optional[Tensor]] = [pt.ones_like(u_harmonic)]
        dy = pt.autograd.grad(
            [u_harmonic],
            [positions],
            grad_outputs=grad_outputs,
            create_graph=False,
            retain_graph=False,  # This cannot be False if parameter derivatives are needed
        )[0]
        assert dy is not None
        forces = -dy
        return u_harmonic, forces
```

TorchMD-Net does exactly this. Not sure what to do about it.
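One possible workaround (a sketch, not necessarily what this PR implements): request the parameter gradients in the same `autograd.grad` call, so the graph is only traversed once and `retain_graph` can stay `False`:

```python
# Sketch: position and parameter gradients in a single backward pass,
# assuming parameter1 and parameter2 arrive with requires_grad=True.
dy, dparam1, dparam2 = pt.autograd.grad(
    [u_harmonic],
    [positions, parameter1, parameter2],
    grad_outputs=grad_outputs,
    create_graph=False,
    retain_graph=False,
)
forces = -dy
```

Models like the one above (and TorchMD-Net) would still need to be modified, since they hard-code the input list to `[positions]`.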
```cpp
// The derivative is stored in the gradient of the parameter tensor.
double derivative = gradientTensors[i].item<double>();
auto name = energyParameterDerivatives[i];
derivs[name] = derivative;
```
Should I be summing here instead of overwriting?
It looks like we both started working on this. I'm almost finished with implementing it.
welp... Take what you want from here if you find anything useful.
Really sorry for the confusion! My implementation is at #141. If you can review it, that would be great.
Sorry, that should have been #143.
Allows getting energy derivatives with respect to global parameters in TorchForce, as in the following example:
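(The example from the PR description was not captured here; below is a minimal sketch reconstructed from the API used elsewhere in this thread. The model file name and parameter name are placeholders.)

```python
import openmm as mm
import openmmtorch as ot

# Hypothetical reconstruction of the usage pattern: declare a global
# parameter on the TorchForce and request its energy derivative.
tforce = ot.TorchForce("model.pt")
tforce.addGlobalParameter("scale", 1.0)
tforce.addEnergyParameterDerivative("scale")

system = mm.System()
system.addParticle(1.0)
system.addForce(tforce)
context = mm.Context(system, mm.VerletIntegrator(1.0))
context.setPositions([mm.Vec3(0.0, 0.0, 0.0)])

# dE/d(scale) is returned alongside the state
state = context.getState(getParameterDerivatives=True)
dEdscale = state.getEnergyParameterDerivatives()["scale"]
```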
Closes #140
Closes #141