Update plots #76
```diff
@@ -14,38 +14,46 @@
 log = logging.getLogger(__name__)


 def cosh_shift(x, xi, A, c):
     return A * np.cosh(-x / xi) + c


-def fit_zero_momentum_correlator(zero_momentum_correlator, training_geometry):
-    # TODO should I bootstrap this whole process...?
-
-    T = training_geometry.length
-    # TODO: would be good to specify this in runcard
-    t0 = T // 4
-    window = slice(t0, T - t0 + 1)
-
-    t = np.arange(T)
-    y = zero_momentum_correlator.mean(axis=-1)
-    yerr = zero_momentum_correlator.std(axis=-1)
-
-    try:
-        popt, pcov = optim.curve_fit(
-            cosh_shift,
-            xdata=t[window] - T // 2,
-            ydata=y[window],
-            sigma=yerr[window],
-        )
-        return (popt, pcov, t0)
-    except RuntimeError:
-        log.warning("Failed to fit cosh to correlation function.")
-        return None
+def fit_zero_momentum_correlator(
+    zero_momentum_correlator, training_geometry, cosh_fit_window=slice(1, None)
+):
+    t = np.arange(training_geometry.length) - training_geometry.length // 2
+
+    # fit for each correlation func in the bootstrap ensemble
+    optimised_parameters = []
+    for correlator in zero_momentum_correlator.transpose():
+        try:
+            popt, pcov = optim.curve_fit(
+                cosh_shift,
+                xdata=t[cosh_fit_window],
+                ydata=correlator[cosh_fit_window],
+            )
+            optimised_parameters.append(popt)
+        except RuntimeError:
+            pass
+
+    n_boot = zero_momentum_correlator.shape[-1]
+    failures = n_boot - len(optimised_parameters)
+    if failures > 0:
+        log.warning(
+            f"Failed to fit cosh to correlation function for {failures}/{n_boot} members of the bootstrap ensemble."
+        )
+    if failures >= n_boot - 1:
+        log.warning("Too many failures: no fit parameters will be returned.")
+        return None
+
+    xi, A, c = np.array(optimised_parameters).transpose()
+    return xi, A, c


-def correlation_length_from_fit(fit_zero_momentum_correlator):
-    popt, pcov, _ = fit_zero_momentum_correlator
-    return popt[0], np.sqrt(pcov[0, 0])
+def correlation_length_from_fit(fit_zero_momentum_correlator):
+    xi, _, _ = fit_zero_momentum_correlator
+    return xi


 def autocorrelation(chain):
```
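For context on the new return signature (this note is not part of the diff): the fit now runs once per bootstrap member, so downstream code receives an array of `xi` samples whose spread gives the error estimate, rather than a single `(popt, pcov)` pair. A minimal self-contained sketch of that pattern; the synthetic data, shapes and seed are assumptions for illustration only:

```python
import numpy as np
from scipy import optimize as optim


def cosh_shift(x, xi, A, c):
    return A * np.cosh(-x / xi) + c


# Synthetic stand-in for a bootstrapped zero-momentum correlator, shape (T, n_boot)
rng = np.random.default_rng(seed=0)
T, n_boot, true_xi = 16, 100, 3.0
t = np.arange(T) - T // 2
correlator = cosh_shift(t, true_xi, 0.5, 0.1)[:, None] * (
    1 + 0.01 * rng.normal(size=(T, n_boot))
)

# Fit each bootstrap member separately, as the new function does
window = slice(1, None)  # the default cosh_fit_window in the diff
fits = []
for sample in correlator.transpose():
    try:
        popt, _ = optim.curve_fit(cosh_shift, xdata=t[window], ydata=sample[window])
        fits.append(popt)
    except RuntimeError:
        pass  # failed fits are counted and logged in the real function

# Central value and bootstrap error for the correlation length
xi, A, c = np.array(fits).transpose()
print(f"xi = {xi.mean():.2f} +/- {xi.std():.2f}")
```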
```diff
@@ -111,7 +119,7 @@ def magnetic_susceptibility(magnetization, abs_magnetization_squared):


 def magnetization_series(configs):
-    return configs.sum(axis=1)
+    return configs.sum(axis=1).numpy()


 def magnetization_autocorr(magnetization_series):
```
Review thread on the `.numpy()` casts:

> would it make more sense to just cast the configs to numpy earlier, instead of multiple times on the observables?

> in other words, do we ever need […]

> yeah it would. I mean, I think it only happens the once if you discount the unused correlator calculation, but it would probably be better to cast to numpy at the end of the sampling.

> I'd rather just leave this niggle until we're messing around inside […]
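The alternative floated above (one cast at the end of sampling rather than a `.numpy()` in each observable) could look roughly like this; the function name is hypothetical, and `configs` being a `torch.Tensor` is an assumption:

```python
import torch


def finalize_sample(configs: torch.Tensor):
    """Hypothetical helper: cast sampled configurations to numpy once, at
    the end of sampling, so that observables such as magnetization_series
    and __two_point_correlator no longer need their own .numpy() calls."""
    return configs.detach().cpu().numpy()
```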
```diff
@@ -144,7 +152,9 @@ def __two_point_correlator(
         axis=-1  # sample average
     )

-    return correlator.reshape((training_geometry.length, training_geometry.length, -1))
+    return correlator.reshape(
+        (training_geometry.length, training_geometry.length, -1)
+    ).numpy()


 def two_point_correlator(
```
```diff
@@ -210,16 +220,6 @@ def ising_energy(two_point_correlator):
     return (two_point_correlator[1, 0] + two_point_correlator[0, 1]) / 2


-def inverse_pole_mass(effective_pole_mass, training_geometry):
-    T = training_geometry.length
-    t0 = T // 4
-    window = slice(t0, T - t0 + 1)
-
-    xi = np.reciprocal(effective_pole_mass)[window]
-
-    return np.nanmean(xi, axis=0)  # average over "large" t points
-
-
 def second_moment_correlation_length(two_point_correlator, susceptibility):
     """Second moment correlation length, defined as the normalised second
     moment of the two point correlator."""
```
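For reference, the quantity named in the docstring is commonly defined (in $d$ dimensions, with $G(x)$ the two point correlator) as

$$
\xi_2^2 = \frac{1}{2d}\,\frac{\sum_x |x|^2\, G(x)}{\sum_x G(x)},
$$

where the denominator $\sum_x G(x)$ is the susceptibility, matching the `two_point_correlator` and `susceptibility` arguments above. This is the textbook form; the project's exact normalisation may differ.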
Review thread on the remaining use of `training_geometry`:

> can this not just take the `lattice_length` directly?

> yep, good point

> I can replace `training_geometry.length` with `lattice_length` in this parse function, but not the production rule directly below. I'm sure by now I should understand this behaviour, but alas, I forget. Also, since we're moving towards removing `training_context` (which contains the entire training runcard) in favour of things more akin to `training_geometry`, which just extract the specific parameters we need (#62), perhaps it does make sense to leave this as is.

> that's really weird, I don't understand that..

> I guess leave it for now.

> apologies, it doesn't work with either the parse or the produce - the parse just worked because I was using the default value for `cosh_fit_min_separation` in `cosh_fit_window`..

> I think it's because that key is only present in the training context, so we could in theory put `lattice_length` there, but we'd have to collect over the fit or write the key in the sampling runcard; idk why I didn't think about this earlier. So definitely leave it for now, and I will look into extracting parameters from the trained model in a more satisfactory way..

> aka the issue you linked.
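To make the parse/produce distinction in this thread concrete: in a reportengine-style config, `parse_<key>` methods read a key straight from the runcard, while `produce_<rule>` methods build derived resources from other resources. A purely hypothetical sketch of the kind of pair under discussion (names, signatures and the window construction are guesses, not the project's actual code):

```python
class ConfigSketch:
    # Hypothetical parse function: reads the runcard key directly.
    def parse_cosh_fit_min_separation(self, n: int):
        return n

    # Hypothetical production rule: derives the slice passed to
    # fit_zero_momentum_correlator as cosh_fit_window. The thread finds
    # that lattice_length cannot replace training_geometry here, because
    # that key only exists in the training context.
    def produce_cosh_fit_window(self, training_geometry, cosh_fit_min_separation=1):
        t0 = cosh_fit_min_separation
        return slice(t0, training_geometry.length - t0 + 1)
```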