
Negative nuclide number densities #22

Open
wbinventor opened this issue Feb 17, 2017 · 4 comments

@wbinventor
Member

wbinventor commented Feb 17, 2017

With my recent use of OpenDeplete, I've noticed that I often run into situations with negative number densities as reported by warning messages like the following:

WARNING! Negative Number Densities!
Material id =  61  nuclide  N13  number =  -0.0351477527858

This seems to happen most frequently with N13, Sb117, and Ag110m.

I'm not sure what might cause this. Perhaps I'm not running enough histories to sufficiently reduce the uncertainties on all tallies? Or is there something here to be truly concerned about?

@cjosey
Collaborator

cjosey commented Feb 17, 2017

This is one case where I have a direct understanding of the cause. The matrix exponential method we use, CRAM, is very slightly oscillatory about zero for extremely large eigenvalues.

If you replace a single application of CRAM48 over dt with n applications of CRAM48 over dt/n each, you end up with a better method (http://dx.doi.org/10.13182/NSE15-67). When I did this, the magnitude by which number densities went negative decreased, up to a limit set by numerical stability. As such, this error is (likely) purely numerical and unavoidable with the methods we currently use.
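As a rough sketch, the substitution looks like the following, assuming a hypothetical cram48(A, n0, dt) helper that applies the CRAM48 approximation of exp(A*dt) to the number-density vector n0 (this is not OpenDeplete's actual API):

```python
import numpy as np

def cram48_substepped(cram48, A, n0, dt, n_steps):
    """Apply CRAM48 over dt as n_steps applications over dt / n_steps.

    Each substep reuses the previous result as its initial condition; the
    spurious negative densities shrink as n_steps grows (up to the limit of
    numerical stability), at the cost of n_steps times as many linear solves.
    """
    n = np.asarray(n0, dtype=float)
    for _ in range(n_steps):
        n = cram48(A, n, dt / n_steps)
    return n
```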

The upside is that even with the single-step method we're currently using, that magnitude is very small. The values are output in units of atoms, so the warning above corresponds to -0.03 atoms of N13 in the entire cell. I would consider this sufficiently negligible to ignore. The worst negative value I've seen is -153 atoms.

I think I'll have to determine a threshold that separates values that are "okay and probably due to numerics" from values that are "completely wrong" (as in the case of unstable high-order methods), and only print the warning in the latter case.
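A minimal sketch of what such a filter might look like (the tolerance value and names below are purely illustrative, not a committed design):

```python
# Placeholder tolerance: a negative density above this is treated as CRAM
# round-off noise and ignored; anything below it triggers the warning.
NEGATIVE_ATOM_TOLERANCE = -1.0e3  # atoms

def warn_negative_densities(mat_id, nuclides, numbers, tol=NEGATIVE_ATOM_TOLERANCE):
    """Print the existing warning only for negative densities below tol."""
    for nuc, num in zip(nuclides, numbers):
        if num < tol:
            print("WARNING! Negative Number Densities!")
            print("Material id =  {}  nuclide  {}  number =  {}".format(mat_id, nuc, num))
```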

@wbinventor
Member Author

Thanks for the clarification, @cjosey. I'm glad to hear that this is not an issue to be overly concerned about. I'll leave this issue open for now in case you ever want to add a threshold to throttle these warning messages.

@cjosey
Collaborator

cjosey commented Mar 2, 2017

Rather humorously, I think I might have a more general solution to this. I was exploring whether I could implement CRAM with operators instead of matrices, which would further reduce RAM usage by a factor of 10.

As a consequence, I will eventually no longer be able to use spsolve with CRAM, so I was testing inverse-diagonal preconditioned LGMRES. Performance is comparable when operating on matrices (about 70% slower, but I think I can optimize it further). More interestingly, at no point does the LGMRES version output a negative value.
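The idea, roughly, is that LGMRES only needs matrix-vector products, so each CRAM pole solve can be expressed through a LinearOperator rather than an assembled sparse matrix. A minimal SciPy sketch, with placeholder names and with the pole (theta) and residue (alpha) assumed to come from the CRAM48 coefficients (this is not the actual OpenDeplete implementation):

```python
import scipy.sparse.linalg as spla

def cram_pole_solve(matvec, diag, n0, dt, theta, alpha):
    """Solve (dt*A - theta*I) x = alpha*n0 for one CRAM pole using
    inverse-diagonal (Jacobi) preconditioned LGMRES.

    matvec(v) applies the depletion matrix A to v without ever forming A;
    diag holds the diagonal of A, which is all the preconditioner needs.
    """
    size = n0.shape[0]
    shifted = spla.LinearOperator(
        (size, size), dtype=complex,
        matvec=lambda v: dt * matvec(v) - theta * v)
    inv_diag = 1.0 / (dt * diag - theta)
    precond = spla.LinearOperator(
        (size, size), dtype=complex, matvec=lambda v: inv_diag * v)
    x, info = spla.lgmres(shifted, alpha * n0, M=precond)
    if info != 0:
        raise RuntimeError("LGMRES did not converge (info = %d)" % info)
    return x
```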

There is a problem, though: the maximum relative error between the two methods is larger than 100. The particular case is:

n_LGMRES = 0.00023972358355354759
n_spsolve = -0.03388601482405415

So I'm not yet sure whether this should really be considered "wrong".

@bforget

bforget commented Mar 2, 2017

Can you compare with multiple CRAM steps to see whether it converges to the same value? That said, this just seems like a new way of calculating zero, so it's probably nothing to worry about.
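For what it's worth, the quoted numbers illustrate that point: the relative error is huge only because the reference value is essentially zero, while on an absolute scale (atoms in a cell) the two results agree. A quick check, using an illustrative absolute tolerance of one atom:

```python
import numpy as np

n_lgmres = 0.00023972358355354759
n_spsolve = -0.03388601482405415

# Relative error blows up because the denominator is essentially zero atoms.
rel_err = abs(n_lgmres - n_spsolve) / abs(n_lgmres)
print(rel_err)  # ~142, i.e. "larger than 100"

# On an absolute scale, both results are zero atoms to within any
# physically meaningful tolerance (here, one atom).
print(np.isclose(n_lgmres, n_spsolve, rtol=0.0, atol=1.0))  # True
```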
