SHARC with pysharc directory #115
Dear Jakob, Best,
Dear Sebastian, I tried to make it clearer. I did all the preparation according to the normal SHARC approach and started the trajectory normally (I wanted the normal SHARC approach for molecule A). At some point during the MD, I tried to set up LVC to run a different molecule (molecule B) with LVC. At that time, I changed $SHARC to $pysharc (which does point to the pysharc bin) to test whether pysharc works. But I forgot to change it back, so when I resubmitted molecule A, I ran it with pysharc and not SHARC. I was wondering if this is an issue, since during the MD there was no error. Also, I set in the preparation that SHARC should use ORCA and NOT the LVC Hamiltonian. What also irritates me is that when I set the path back to $SHARC, it does not work anymore. If you need a trajectory folder to check, I would be happy to send it to you, or you can tell me what to look for. I hope it is a bit clearer now. Thank you. Best
Dear Jakob, I do not really see a reason why the trajectories should not work anymore when switching back to the original $SHARC path. I would need to look at such a trajectory. Best,
Dear Sebastian, I checked, and the pysharc/bin does not contain sharc.x; instead, it contains only pysharc_qmout.py and pysharc_lvc.py. Maybe I did not describe this correctly, sorry. The SHARC folder has a sharc/bin ($SHARC), and the pysharc folder contains a bin folder ($pysharc) with pysharc_qmout.py and pysharc_lvc.py in it. Sorry for the confusion. How can I check the respective trajectory for consistency? Also, is there a possibility to check which PES was actually calculated (e.g., LVC or not)? Concerning the time needed to calculate the next step: the time only changed (roughly doubled) when I reduced the number of CPUs from 16 to 8, which makes sense. But as far as I understand you, there is also the possibility that it calculated the LVC trajectory with the normal SHARC approach (which would logically be a waste of time). How can I check whether this happened in my case? I can send you a link to the trajectory by e-mail. Thank you. Best
Hi Jakob, if your sharc.x/ORCA trajectories kept on going when you changed $SHARC, then the trajectories did not even experience the change of the variable. Note that a currently running executable and its subprocesses are not affected when you change an environment variable; in SHARC, the calculation will only notice this when you restart, because the restart creates a new instance of the executable and new subprocesses. So the change likely did not affect anything, and the trajectories should be fine. If you want, I can still quickly check a trajectory if you send me a link. Best,
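A minimal sketch of this behaviour in plain Python (not SHARC code; the paths are just placeholders):

```python
# Illustration only: a child process receives a copy of the parent's environment
# when it is launched, so changing the variable afterwards does not reach the
# already running child.
import os
import subprocess

os.environ["SHARC"] = "/path/to/sharc/bin"      # value at launch time (placeholder path)
child = subprocess.Popen(
    ["python3", "-c",
     "import os, time; time.sleep(2); print('child sees', os.environ['SHARC'])"]
)

os.environ["SHARC"] = "/path/to/pysharc/bin"    # changed while the child is running
child.wait()                                    # child still prints /path/to/sharc/bin
```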
Dear Sebastian, I'm sorry for not getting back to you sooner; I did not check my mail yesterday evening. Thank you for the offer to check the trajectory quickly; I will send you a link to the files shortly. If I understand you correctly, it should not be an issue, since a new SHARC instance is only started at a restart, and the program did not complain, so it should be fine. I will change the path back to sharc.x, i.e., $SHARC, as soon as the calculation is done. In the meantime, I have run into a second issue, since the cluster I work on is being updated from CentOS 7 to Rocky Linux 9. When I log in on the new system, SHARC all of a sudden does not find ORCA.engrad.ground.grad.tmp: I don't quite understand what has changed, because I have changed nothing (the ORCA version and the loaded modules are the same), and a standard ORCA calculation works fine on the new system. Thank you once again for checking the trajectory. Best
Hi Jakob, Best,
Dear Sebastian, thank you for checking the trajectory; I assume that the others are fine, too. I checked ORCA.log: the job crashed in the "Exchange-Correlation gradient ..." section, but unfortunately I do not know what to look for beyond that. I uploaded the trajectory to the same folder as before, if you want to have a look. But now the following error turned up: Also, I have realized that the folder structure is a bit strange, since TRAJ_101 does not contain a master_1 folder; a master folder exists only in TRAJ_101/QM/SCRATCH. Thank you. Best
Unfortunately, I also do not know why ORCA crashes there. Is the crash reproducible? If it is, you might want to ask at the ORCA forum. But this does not seem to be a SHARC-related issue. Best,
Dear Sebastian, thank you for checking. I copied the input file from the master_1 folder to a folder outside of the SHARC folder structure and ran ORCA.inp there; it works fine so far. Assuming that it is not ORCA (at least with no modules loaded) and not SHARC, I can only imagine that it has to do with some of the loaded modules. Since I load a bundle before submitting SHARC, I will look at each loaded module and check whether it interferes with ORCA. Which modules are required to run SHARC? I assume NumPy, but are there any other packages needed to run SHARC? Thank you. Best
Hi Jakob,
Dear Sebastian, sorry to disturb you again. I found out what was wrong: libscalapack.so was not found, so that issue is solved. Unfortunately, now I get "STOP 1 #===================================================# Is there a list of these error codes? Thank you. Best
Hi Jakob, However, there is no exit code 6656 in SHARC_ORCA.py. I found an interesting note at https://stackoverflow.com/questions/59141319/calling-curl-command-from-c-returns-unexpected-error-codes-like-1792-and-6656 saying that exit codes might depend on the endianness of your system. In that case, 6656 might actually mean 26, which is the SHARC_ORCA.py exit code for 'Could not find Orca version!'. The ORCA version is read by calling ORCA with a non-existent input file and reading the header from stdout. If the interface could not find the ORCA version, it is likely that ORCA could not be started successfully. Maybe some more libraries are missing? Best,
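For what it is worth, 6656 is exactly 26 shifted into the high byte, which is how a raw (undecoded) POSIX wait status encodes an exit code, so the two readings are consistent; a quick check:

```python
# Quick check: if 6656 is a raw POSIX wait status rather than a decoded exit
# code, the actual exit code sits in the high byte.
import os

raw_status = 6656
print(raw_status >> 8)              # 26
print(os.WEXITSTATUS(raw_status))   # 26, same result via the standard decoding
```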
Dear Sebastian, I'm sorry that this is not solved yet. You are correct: the ORCA path was not given correctly when I changed it (I entered the path only up to the bin folder and not to the ORCA executable itself). I tried two things. First, I tried to start the simulation with $SHARC/sharc.x, which resulted in: Second, I stayed with $pysharc, which gave me no feedback at all. In both cases, no master_1 folder was created, and the error message in QM.err was: Traceback (most recent call last): I also tried multiple libraries, all combinations of the SciPy-bundle and anaconda3, and the needed shared library. Before the change I only used the SciPy bundle, and it worked. On a side note, when I forgot the needed SHARC library, the $pysharc path led to the same error as the sharc.x path, which also answered my original question. Thank you for your continued support. Best
Well, the message "You have to give an initial state!" means that the "state" keyword is missing in the SHARC input file. Maybe you did not call SHARC correctly? Pysharc is intended for fast calculations (not with ab initio interfaces) and has output optimized for this purpose. Since you are using ORCA, you do not need pysharc. I do not know what the error message from the ORCA interface is about; it seems that ORCA did not run successfully, and then the output files were missing. Generally, I have to say that most of your problem descriptions are very hard to understand, and without the output files it is even harder. For the next messages, please state more clearly what you did and attach the relevant output files. I also recommend that you set up a new SHARC installation from scratch, with the recommended CONDA environment and without pysharc (if not needed), and that you follow the manual and use the setup scripts as described there. Best,
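As a quick sanity check before resubmitting, here is a hedged sketch (the file name "input" and the exact keyword handling are assumptions; please verify against the manual):

```python
# Hedged sketch: look for the "state" keyword in a sharc.x input file.
# The file name "input" is an assumption; adjust it to your setup.
from pathlib import Path

def has_state_keyword(input_file="input"):
    for line in Path(input_file).read_text().splitlines():
        stripped = line.strip().lower()
        if not stripped or stripped.startswith("#"):
            continue
        if stripped.split()[0] == "state":
            return True
    return False

if __name__ == "__main__":
    if not has_state_keyword():
        print("No 'state' keyword found; sharc.x would stop with "
              "'You have to give an initial state!'")
```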
Dear Sebastian, sorry that I was too unspecific; I will try to be more specific in the future. I will try to install SHARC from scratch. Best Jakob
Dear Sebastian, SHARC was set up correctly, which I will try to explain in detail in the following. To check whether SHARC was set up correctly, I set up a new trajectory according to the tutorial. I did everything up to the generation of the "ICOND" directories; at this point, I ran "sh all_run_init.sh", which returned an error: " This is why I suspect that something is wrong and that I may need to restart the calculation (as you mentioned above already). However, since the trajectories are relatively long (I had around 50 fs to go out of 300 fs) and I ran 60 trajectories, I will discuss with my supervisor what to do. How did you check the trajectory I sent you for whether changing $SHARC to $pysharc influenced it? I want to check the rest of the 60 trajectories. I hope my explanation was clearer than before; if not, please let me know and I can try to be more specific. Best
Hi Jakob, If you check ICOND_00000_org/SCRATCH/master_1/ORCA.log, you can find that the calculation has dramatic convergence problems:
At this point, ORCA switches to the TRAH procedure, but that also seems to have a problem that I do not know in more detail: In ICOND_00002_org, similar things happen, just worse:
In ICOND_00002_copied, the ORCA.log shows that the calculation does converge, but the convergence behaviour is really strange:
The bad convergence behaviour could be due to the bad initial orbitals given, which come from the corresponding ICOND_00000 calculation. Unfortunately, in ICOND_00000_copied, the ORCA.log does not show details (as you said, it's a later attempt that failed). Overall, I cannot really figure out what the problem is. My suspicion is that on your new cluster there is some problem with the ORCA installation (maybe some library it depends on), but I cannot be sure. Do similar (SHARC-free) ORCA calculations show such problems on the new cluster? You might also need to be careful when using gbw files from one cluster as input for the other cluster. About your new trajectory: I cannot find it in the cloud folder. What I mostly did when checking was the following:
After looking again at your trajectory (00109), I would say that the hop at 23.00 fs and the corresponding spike in the overlap matrix elements do look problematic. I did not check this last time, but this time I did, and the hop/spike coincides with one of the restarts, in particular a double restart according to the log file. However, I do not think that this is related to changing the $SHARC variable, which was the original question. Do you know at which time step the change of the environment variable was done? Best,
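If you want to screen the remaining trajectories yourself, here is a rough sketch of one such check (the column index of the total energy in output.lis is an assumption; check the header of your own file and adjust it):

```python
# Hedged sketch: flag large step-to-step jumps in the total energy of a
# trajectory. ETOT_COL (column of Etot in output.lis) is an assumption.
ETOT_COL = 4          # assumed zero-based column of the total energy
THRESHOLD = 1e-3      # hartree; tolerated change between consecutive steps

def energy_jumps(lis_file="output.lis"):
    energies = []
    with open(lis_file) as fh:
        for line in fh:
            if not line.strip() or line.lstrip().startswith("#"):
                continue
            energies.append(float(line.split()[ETOT_COL]))
    return [(i, abs(b - a))
            for i, (a, b) in enumerate(zip(energies, energies[1:]))
            if abs(b - a) > THRESHOLD]

if __name__ == "__main__":
    for step, jump in energy_jumps():
        print(f"step {step} -> {step + 1}: |dEtot| = {jump:.6f}")
```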
Dear Sebastian, thank you for the explanation and for looking at the files. Until now, I have not observed any convergence issues when running ORCA (SHARC-free). The convergence issue in ICOND_org is because I did not provide any .gbw file (I only wanted to check whether the trajectory was started correctly). Is the error (UnboundLocalError: local variable 'gsenergy' referenced before assignment) due to the convergence issue? Thank you as well for providing the points I need to go through for the rest of the 60 trajectories to check whether they have any problems. That said, I believe I know why there is weird behavior at 23 fs: I included the non-equilibrated implicit solvent model until 22.5 fs, and starting from 23 fs (until the end) I included the equilibrated one. This could explain the described behavior. I don't precisely recall when the change of the variable happened, but I know it happened before 28 Feb (with a 14-day period where it did not run), and each step took approximately 10,000 seconds. With this very rough approximation, we would get around the 50th step; otherwise, I cannot give you a more specific time step at which I changed the path. I will redo the ICOND_org calculation with a .gbw file and let you know, and I will upload the respective files as well. Maybe the "UnboundLocalError: local variable 'gsenergy' referenced before assignment" error will then disappear. Best Jakob
Hi, what you say about the change at 23 fs makes sense. I would expect significant differences between non-equilibrium and equilibrium solvation treatment in PCM, in particular for bright states. I do not think it is a good idea to make changes to the electronic structure settings during a trajectory. Regarding the time when you switched: as I said, I do not have the impression that the change of the environment variable had any effect. Best,
Dear all
I have realized that all the jobs I intended to run with the normal SHARC module have, by accident, been run with pysharc. To be clear: I set one path ($pysharc) to the pysharc bin and one ($SHARC) to the sharc bin. By accident I used the pysharc path, even though I did all the preparation according to SHARC and had run the trajectories with SHARC up to some point. Initially, I had tried to run LVC SHARC with the same script, but I had not changed the path back until now. Is this a problem, or does it only depend on the option I chose when setting up the trajectory in the first place?
I hope it is the second.
If it helps to look at one of the trajectory folders, let me know.
Thank you
Best
Jakob