pair_nequip compilation #21
-
Hi, I trained and froze a model for my system and I'm trying to run it with the pair_nequip-enabled LAMMPS. I cloned the LAMMPS and pair_nequip distributions as described in the git README and built them against externally compiled CUDA, cuDNN, and libtorch. Compilation and linking go perfectly fine, but when I try to run MD with LAMMPS I get the following error:

LAMMPS (29 Sep 2021 - Update 2)
Traceback of TorchScript, original code (most recent call last):

I'm not entirely sure what's going on. I have also tried using pip-installed PyTorch libraries, but I still get the same error. There's also no version mismatch: I'm using CUDA 11.3 and libtorch 1.10.0 as recommended. I found some discussion on StackOverflow about this error; maybe this is relevant: https://stackoverflow.com/questions/56741087/how-to-fix-runtimeerror-expected-object-of-scalar-type-float-but-got-scalar-typ

Thanks in advance.
-
hm... are you sure? I don't really see how this could happen otherwise. The current implementation of pair_nequip always inputs floats, so if it found doubles they have to be coming from inside the network. Can you use Python to check? Load the TorchScript of your deployed model and look at the dtypes of its parameters.
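(For reference, a minimal sketch of such a check; `deployed_model.pth` is a placeholder file name, not something taken from this thread.)

```python
import torch

# Load the deployed (frozen) TorchScript model; the path is a placeholder.
model = torch.jit.load("deployed_model.pth", map_location="cpu")

# Print the dtype of every parameter and buffer to spot stray float64 tensors.
for name, p in model.named_parameters():
    print("param ", name, p.dtype)
for name, b in model.named_buffers():
    print("buffer", name, b.dtype)
```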
-
Hi, thanks for your response. You're right: I rechecked the training file and I did set the model data type to float64. Is there a way to run the model as it is? If not, could I somehow cast the deployed model to float32?
-
Hi @atulcthakur, I've been meaning to add an option to do that, but in the meantime something this simple will do:
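(The snippet itself is not preserved in this copy of the thread. A minimal sketch of the idea, load the deployed TorchScript model, cast it to float32, and save it again, could look like the following; the file names are placeholders, and a plain re-save like this may not carry over any extra metadata stored with the deployed model.)

```python
import torch

# Load the deployed TorchScript model (placeholder path).
model = torch.jit.load("deployed_model.pth", map_location="cpu")

# Cast all floating-point parameters and buffers to float32.
model = model.float()

# Save the converted model under a new name for use with pair_nequip.
model.save("deployed_model_float32.pth")
```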
Out of curiosity, was there a specific need that made you choose to use float64?
-
Hi @Linux-cpp-lisp, thanks. Looks easy enough. Thanks again for the help.
-
Certainly. I think my training time might have been slightly higher because of the float64 dtype. I think it'd be nice to have some sort of exception handling to enforce the float32 dtype until doubles are implemented; a simple note in the docs would be helpful too.
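(As a rough illustration of the kind of guard being suggested, a hypothetical check, not part of nequip or pair_nequip, might look like this.)

```python
import torch

def assert_float32(model: torch.nn.Module) -> None:
    """Hypothetical guard: raise if any floating-point tensor is not float32."""
    for name, t in list(model.named_parameters()) + list(model.named_buffers()):
        if t.is_floating_point() and t.dtype != torch.float32:
            raise TypeError(
                f"{name} has dtype {t.dtype}; pair_nequip currently expects float32"
            )
```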