Negative nuclide number densities #22
At least for this one I have some direct understanding of the cause. The matrix exponentiation method we use, CRAM, is very slightly oscillatory about zero for extremely large eigenvalues. If you replace CRAM48 over dt with n steps of CRAM48 over dt/n each, you end up with a better method (http://dx.doi.org/10.13182/NSE15-67). When I did this, the magnitude by which number densities went negative decreased, up until a limit set by numerical stability. As such, this error is (likely) purely numerical and unavoidable with the methods we currently use.

The upside is that even with the method we're currently using, that magnitude is very small. The value output is in units of atoms, so there are -0.03 atoms of N13 in the entire cell. I would consider this sufficiently negligible to ignore. The worst negative value I've seen is -153 atoms.

I think I'll have to determine a threshold separating numbers that are "ok and probably due to numerics" from those that are "completely wrong" (as in the case of unstable high-order methods), and only output the error when it falls in the latter category.
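To make the substepping idea concrete, here is a minimal sketch. It uses `scipy.linalg.expm` as a stand-in for a CRAM48 evaluation of exp(A*dt) (so it only illustrates the substepping structure, not the oscillation itself), and the two-nuclide decay matrix and function name are made up for illustration:

```python
import numpy as np
from scipy.linalg import expm  # stand-in for a CRAM48 evaluation of exp(A*dt)

def substep_expm(A, n0, dt, substeps=1):
    # Apply exp(A*dt) to n0 as `substeps` applications of exp(A*dt/substeps).
    propagator = expm(A * (dt / substeps))
    n = n0.copy()
    for _ in range(substeps):
        n = propagator @ n
    return n

# Hypothetical two-nuclide chain (parent -> daughter), for illustration only.
lam = 1.0e-2  # decay constant [1/s]
A = np.array([[-lam, 0.0],
              [ lam, 0.0]])
n0 = np.array([1.0e20, 0.0])  # atoms

for k in (1, 4, 16):
    n = substep_expm(A, n0, dt=3600.0, substeps=k)
    print(k, n, "negative entries:", int(np.sum(n < 0)))
```

In the real code the single `expm` call would be the CRAM48 evaluation; the substep count trades extra solves for reduced oscillation about zero.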
Thanks for the clarification @cjosey, I'm glad to hear that this is not an issue to be overly concerned about. I'll leave this issue open for now in case you ever want to determine a threshold to throttle these warning messages.
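If such a throttle gets implemented, one possible shape for it is sketched below. The function name, cutoff value, and message wording are all made up here, not OpenDeplete's actual API; the point is just to warn only when a negative density is too large to be numerical noise:

```python
import warnings
import numpy as np

def filter_negative_warnings(densities, nuclides, atom_threshold=1.0e3):
    # Warn only when a negative number density is larger in magnitude than
    # `atom_threshold` (in atoms); smaller negatives are attributed to CRAM's
    # slight oscillation about zero and pass silently.
    for name, value in zip(nuclides, np.asarray(densities, dtype=float)):
        if value < 0.0 and abs(value) > atom_threshold:
            warnings.warn(f"{name} has {value:.3g} atoms; this looks beyond "
                          "numerical noise")

# -0.03 atoms of N13 passes silently; a large negative value would warn.
filter_negative_warnings([-0.03, -2.0e5, 1.0e18], ["N13", "Sb117", "U235"])
```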
Rather humorously, I think I might have a more general solution to this. I was exploring whether or not I could implement CRAM with operators instead of matrices. This would further reduce RAM usage by a factor of 10. As a consequence, I will eventually no longer be able to use

There is a problem though. The maximum relative error between the methods is larger than 100. The particular case is
So, I'm not yet sure if it's really considered "wrong".
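For anyone following along, the operator-based idea boils down to never forming the burnup matrix: CRAM evaluates exp(A*dt)*n0 as a short sum of shifted solves, and each solve can be done matrix-free with an iterative method. A rough sketch of that structure, assuming SciPy's `LinearOperator` and `gmres`, with placeholder poles/residues rather than the actual CRAM48 coefficients (so it does not approximate the exponential, it only shows the plumbing):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def cram_like_apply(matvec, n0, dt, alpha0, alphas, thetas):
    # Evaluate n = alpha0*n0 + 2*Re(sum_j alphas[j] * (dt*A - thetas[j]*I)^-1 n0)
    # without ever forming A; `matvec(x)` must return A @ x.
    size = n0.shape[0]
    b = n0.astype(complex)
    n = alpha0 * b
    for alpha, theta in zip(alphas, thetas):
        shifted = LinearOperator(
            (size, size),
            matvec=lambda x, th=theta: dt * matvec(x) - th * x,
            dtype=complex,
        )
        x, info = gmres(shifted, b)
        if info != 0:
            raise RuntimeError("GMRES did not converge")
        n = n + 2.0 * alpha * x
    return n.real

# Toy decay matrix and placeholder coefficients (NOT the CRAM48 values).
lam = 1.0e-2
A = np.array([[-lam, 0.0],
              [ lam, 0.0]])
n0 = np.array([1.0e20, 0.0])
n = cram_like_apply(lambda x: A @ x, n0, dt=3600.0,
                    alpha0=0.0, alphas=[1.0 + 0.0j], thetas=[-1.0 + 1.0j])
```

The real method would plug in the CRAM48 coefficients and whatever convergence control the shifted solves need; the RAM savings come from only ever needing matrix-vector products.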
Can you compare against using multiple CRAM steps to see whether it converges to the same value? That said, this just seems like a new way of calculating 0, so it's probably nothing to worry about.
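As a side note on why a relative error above 100 can still mean "a new way of calculating 0": if both methods return values that are essentially zero in absolute terms, an unguarded relative error blows up even though the discrepancy is physically meaningless. A small illustrative check (the function name and the 1-atom floor are arbitrary choices here):

```python
import numpy as np

def worst_relative_error(n_ref, n_new, abs_floor=1.0):
    # Relative error per nuclide, with the denominator floored at `abs_floor`
    # atoms so that two different ways of computing ~0 atoms do not dominate
    # the metric (entries below the floor are effectively compared absolutely).
    n_ref = np.asarray(n_ref, dtype=float)
    n_new = np.asarray(n_new, dtype=float)
    diff = np.abs(n_new - n_ref)
    rel = diff / np.maximum(np.abs(n_ref), abs_floor)
    return rel.max()

# One nuclide at ~1e20 atoms, one effectively zero (-0.03 vs 3.2 atoms):
# the naive relative error on the second entry alone would exceed 100.
print(worst_relative_error([1.0e20, -0.03], [1.00000001e20, 3.2]))
```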
With my recent use of OpenDeplete, I've noticed that I often run into situations with negative number densities as reported by warning messages like the following:
This seems to happen most frequently with N13, Sb117, and Ag110m.
I'm not sure what might cause this. Perhaps I'm not running enough histories to sufficiently reduce the uncertainties on all tallies? Or is there something to be truly concerned about?