\maketitle
- Looking for a means of predicting segregation profiles accurately for CuPd and other metal systems.
- Top layer segregation of Pd$_{70}$Cu$_{30}$ is well predicted by the Langmuir-McLean equation:
  \[ \frac{y}{1-y} = \frac{x}{1-x} \exp\left( \frac{-\Delta G}{RT} \right) \]
– $y$ is the top layer composition of Cu
– $x$ is the bulk composition of Cu
– $\Delta G$ is the Gibbs free energy of segregation for Cu
- Accurate computational representation of these quantities is needed.
- Furthermore, experiments observe a Cu deficient second layer.
- This is difficult to reproduce through basic simulations cite:dowben-1990-surfac-segreg-phenom.
- Simulation methods traditionally use Monte Carlo (MC) techniques.
- DFT alone is poorly suited for predicting segregation because MC methods require far more energy evaluations than DFT can affordably provide.
- kMC methods have recently been implemented cite:cheng-2015-kinet-monte which accurately reproduce the experimental surface compositions shown in Figure ref:fig-kmc-segregation.
- Neural Networks (NN) are still an appealing solution, since they can be expanded upon.
- NNs are also not limited to fixed lattice sites (slabs can be relaxed).
- NNs require a basis of calculations descriptive of all configurations of CuPd to begin training.
- Training to cluster expansion structures is ineffective:
– Relatively small configuration space explored
– NNs struggle to predict “needle”-like structures
– Mike Widom suggests this is likely due to poor k-point sampling
- For a small unit cell of 100 atoms there are over $1.2 \times 10^{30}$ configurations of Cu and Pd.
– Not all of these are energy unique
– CuPd has no appreciable long-range interactions, so many of these are irrelevant for training purposes.
- NNs will only fit to configurations of relatively small unit cells because of the finite cutoff radius $R_c$.
- Energy unique configurations can be determined manually, by unit cell geometry (not recommended).
- Alternatively, ANY method that returns a consistent energy for identical atomic positions (i.e. any potential) will identify the same energy-unique configurations.
- Effective medium theory (EMT) is a very cheap and simple potential, capable of predictions on CuPd.
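For illustration, a minimal sketch (using the current ASE API rather than the older calls in the blocks below, and an arbitrary tolerance) of how EMT can act as the cheap, consistent energy oracle that flags energy-duplicate decorations of a small cell:

#+BEGIN_SRC python
# Sketch: enumerate Cu/Pd decorations of a 4-atom fcc cell and keep only
# those whose EMT energy has not been seen before (energy-unique screening).
import itertools
from ase.build import bulk
from ase.calculators.emt import EMT

base = bulk('Cu', 'fcc', a=3.787, cubic=True)  # 4-atom cubic fcc cell

unique_energies, unique_configs = [], []
for decoration in itertools.product(['Cu', 'Pd'], repeat=len(base)):
    atoms = base.copy()
    atoms.set_chemical_symbols(list(decoration))
    atoms.calc = EMT()
    energy = atoms.get_potential_energy()
    # identical decorations (by symmetry) return identical EMT energies
    if not any(abs(energy - e) < 1e-6 for e in unique_energies):
        unique_energies.append(energy)
        unique_configs.append(decoration)

print('{} energy-unique decorations out of {}'.format(len(unique_configs),
                                                      2 ** len(base)))
#+END_SRC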
from ase.visualize import view
from ase.lattice.surface import fcc111_root
from ase.io import write
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
import matplotlib.image as mpimg
import os
lat = np.linspace(3.634, 3.939, 5)
p = np.linspace(0.05, 0.9, 5)
fig = plt.figure(figsize=(8, 3))
ax = fig.add_subplot(111)
for i, a in enumerate(lat):
atoms = fcc111_root('Cu', root=7,
size=(1, 1, 5),
vacuum=6.0, a=a)
col = abs((atoms.get_tags() / 5.0) - 1.0)
kwargs = {
'rotation': "-75x",
'show_unit_cell': 2,
'colors': np.array([col, col, col]).T,
'radii': [0.3] * len(atoms)}
write('images/temp.png', atoms, **kwargs)
image = mpimg.imread('./images/temp.png')
imagebox = OffsetImage(image, zoom=0.5)
ax.add_artist(AnnotationBbox(imagebox,
xy=(0, 0),
xybox=(p[i], 0.5),
pad=-0.2,
frameon=False,
arrowprops=None)
)
ax.text(p[i], -0.13, '{0:1.3f} $\AA$'.format(a),
va='center', ha='center')
os.unlink('./images/temp.png')
fig.patch.set_visible(False)
ax.axis('off')
plt.tight_layout()
plt.savefig('images/structure-types.png')
from ase.visualize import view
from ase.lattice.surface import fcc111_root
from ase.io import write
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
import matplotlib.image as mpimg
import os
p = np.linspace(0.2, 0.8, 2)
fig = plt.figure(figsize=(4, 3))
ax = fig.add_subplot(111)
for i, M in enumerate(['Cu', 'Pd']):
atoms = fcc111_root(M, root=7,
size=(1, 1, 5),
vacuum=6.0,
a=3.787)
C = [200/255., 129/255., 51/255.]
B = [0/255., 105/255., 134/255.]
if i == 0:
col = [C] * 31 + [B] * 4
else:
col = [B] * 31 + [C] * 4
kwargs = {
'rotation': "-75x",
'show_unit_cell': 2,
'colors': col,
'radii': None}
write('images/temp.png', atoms, **kwargs)
image = mpimg.imread('./images/temp.png')
imagebox = OffsetImage(image, zoom=0.6)
ax.add_artist(AnnotationBbox(imagebox,
xy=(0, 0),
xybox=(p[i], 0.5),
pad=-0.2,
frameon=False,
arrowprops=None))
os.unlink('./images/temp.png')
fig.patch.set_visible(False)
ax.axis('off')
plt.tight_layout()
plt.savefig('images/composition-types.png')
- Third NN iteration trained to ≈ 15,000 images of up to 2-layer unique energy configurations.
- Results for ground state structures are shown in Figure ref:fig-2layer-db0.
- The figures show the energy difference between NNs with different frameworks.
– This difference is proportional to the accuracy of the fit.
- NN then used to predict energies of up to 4-layer unique energy configurations as shown in Figure ref:fig-4layer-db0.
- Amazingly good agreement with the “extrapolated” configurations.
- All structures with NN differences above 0.04 eV/atom were added to a second training database.
- Results are shown in Figure ref:fig-alayer-db1.
- 5-layer unique energy configurations of a 2 × 2 slab are also well predicted, as shown in Figure ref:fig-db3-2x2-5layer.
- Significant improvement in energy predictions with the addition of only ≈ 2500 images to the NN.
- Although the √7 × √7 slab NN is robust for CuPd compositions at any width, adding or subtracting layers results in NN failure.
- Slabs of more than 5 layers are necessary for Canonical Ensemble MC.
- Bulk chemical potentials are needed for Grand Canonical Ensemble MC.
- Enumeration of bulk configurations is the next logical step.
- Unique energy configurations of 3 × 3 primitive fcc unit cell enumerated using EMT as described previously.
- Performed for Cu, Pd, and single intermediate lattice constant (≈ 5000 images).
- Initially unclear whether to separate bulk NN from slab NN.
- Predictions from NN trained only to bulk images did not interpolate to other lattice constants well.
- Figure ref:fig-db4_bulk_5lat shows NN energy differences for all unique configurations at 5 linearly interpolated lattice constants.
- Figure ref:fig-db5_bulk_10lat demonstrates unique energy bulk configurations for NN trained with > 25,000 slab and bulk images.
- All unique energy lattices were then scaled to a lattice constant corresponding to their composition through linear (Vegard-type) interpolation (see the sketch after this list).
- Errors are very small, indicating the NN is ready for use at fixed lattice constant.
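The composition-dependent lattice constants used for this scaling are a linear (Vegard-type) interpolation between the Cu and Pd fcc constants used throughout this work (3.634 and 3.939 Å); a minimal sketch:

#+BEGIN_SRC python
import numpy as np

A_CU, A_PD = 3.634, 3.939  # fcc lattice constants (Angstrom) used in this work

def vegard_a(x_pd):
    """Linearly interpolated fcc lattice constant for Cu(1-x)Pd(x)."""
    return (1.0 - x_pd) * A_CU + x_pd * A_PD

for x in np.linspace(0.0, 1.0, 5):
    print('x_Pd = {0:.2f}  a = {1:.3f} Angstrom'.format(x, vegard_a(x)))
#+END_SRC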
from ase.db import connect
import matplotlib.pyplot as plt
import numpy as np
from ase.visualize import view
from matplotlib.image import imread
from ase.io import write
from matplotlib.offsetbox import TextArea, DrawingArea, OffsetImage, \
AnnotationBbox
from matplotlib.widgets import Slider
db = connect('temp/vegard-3x3.db')
data = {}
ID, LAT, ERR = [], [], []
for d in db.select():
ERR += [(d.data.NN6 - d.data.NN7) / d.natoms]
ID += [d.id]
LAT += [d.a]
if d.a not in data.keys():
data[d.a] = np.array([d.id,
d.data.NN6,
d.data.NN7,
(d.data.NN6 - d.data.NN7) / d.natoms])
else:
data[d.a] = np.vstack([data[d.a], np.array([d.id,
d.data.NN6,
d.data.NN7,
(d.data.NN6 - d.data.NN7) / d.natoms])])
mins = []
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111)
ax.plot([3.634, 3.939], [0, 0], 'k--', lw=2)
ax.set_title('click on a point')
l, = ax.plot(LAT, ERR, 'bo', picker=5)
for lat, v in data.iteritems():
error = v.T[-1]
if isinstance(error, float):
error = np.array([error])
# find the minimum energy structure
try:
mc1 = int(list(v.T[1]).index(v.T[1].min()))
mc2 = int(list(v.T[2]).index(v.T[2].min()))
mins += [int(v.T[0][mc2])]
except(TypeError):
mc1, mc2 = 0, 0
mins += [int(v.T[0])]
# ax.plot(np.zeros(error.shape) + lat, error, 'bo')
ax.plot(lat, error[mc1], 'ro')
ax.plot(lat, error[mc2], 'go')
def onpick(event):
    ind = event.ind[0]  # event.ind is an array of picked indices; take the first
ax.set_title('Atom index {}'.format(ID[ind]))
atoms = db.get_atoms(ID[ind])
write('./images/temp-atoms.png', db.get_atoms(ID[ind]))
    arr_lena = imread('./images/temp-atoms.png')
imagebox = OffsetImage(arr_lena, zoom=0.35)
ab = AnnotationBbox(imagebox, [3.85, 0.0055], frameon=False)
ax.add_artist(ab)
ax.figure.canvas.draw() # this line is critical to change the linewidth
plt.savefig('./images/vegard-error-bulk.png')
if event.mouseevent.button == 3:
view(atoms)
fig.canvas.mpl_connect('pick_event', onpick)
ax.set_xlim(3.634, 3.939)
ax.set_xlabel('fcc lattice constant ($\AA$)')
ax.set_ylabel('Difference in neural networks (eV/atom)')
plt.tight_layout()
plt.savefig('./images/vegard-error-bulk.png')
plt.show()
import numpy as np
from ase.db import connect
import matplotlib.pyplot as plt
from ase.visualize import view
from amp import Amp
from ase.lattice.cubic import FaceCenteredCubic as fcc
from scipy.optimize import curve_fit
calc = Amp('networks/db6/40-7-7-1/trained-parameters.json')
c = ['b', 'm', 'r']
ref = {}
for A, a in [['Cu', 3.634], ['Pd', 3.939]]:
atoms = fcc(A,
directions=[[0, 1, 1],
[1, 0, 1],
[1, 1, 0]],
latticeconstant=a)
atoms.set_calculator(calc)
ref[A] = atoms.get_potential_energy()
fig, ax = plt.subplots()
for i, T in enumerate([700, 800, 900]):
comp, avg, std = [0.], [0], [0]
for A in np.linspace(6, 102, 17)[::2]:
db = connect('MC/T{}-{}.db'.format(T, int(A)))
E = []
for d in db.select():
nCu = d.symbols.count('Cu')
nPd = d.symbols.count('Pd')
E += [(d.nrg - ref['Cu'] * nCu - ref['Pd'] * nPd) / d.natoms]
E = np.array(E)
avg += [np.mean(E)]
std += [np.std(E)]
comp += [A/108.]
avg = np.array(avg + [0])
std = np.array(std + [0])
comp = np.array(comp + [1.])
def Margules(x, A, B, C, D):
R = 8.31446 # J/mol/K
temp = T # K
return (A + B*(x - (1 - x)) + C*(x - (1 - x))**2. + D*(x - (1 - x))**3.)*(1 - x)*x*R*temp
from scipy.optimize import curve_fit
x = np.linspace(0, 1, 100)
p, _ = curve_fit(Margules, comp, avg)
plt.plot(x, Margules(x, *p), color=c[i])
plt.scatter(comp, avg, label=str(T) + ' K', color=c[i])
ax.fill_between(comp, avg + std, avg - std, alpha=0.5, color=c[i])
plt.legend(loc='best')
plt.xlim(0, 1)
plt.xlabel('Cu$_{1-x}$Pd$_{x}$ composition (x)')
plt.ylabel('Heat of formation (eV/atom)')
plt.ylim(-0.12, 0)
plt.tight_layout()
plt.savefig('./images/gs-hull-3temp.png')
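For reference, the fitting function used in the code above is a Margules-type polynomial for the mixing energy, with the heat of formation referenced to the pure fcc Cu and Pd energies:

\[
\Delta H_f = \frac{E - n_{Cu}E_{Cu} - n_{Pd}E_{Pd}}{N}, \qquad
\Delta H_f(x) \approx x(1-x)\,RT\left[A + B(2x-1) + C(2x-1)^2 + D(2x-1)^3\right]
\]

where $x$ is the Pd fraction and $A$, $B$, $C$, $D$ are the fitted coefficients.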
The most poorly fit structure is included below in Figure ref:fig-poor-fit.
from ase.db import connect
import matplotlib.pyplot as plt
import numpy as np
db = connect('temp/db6.db')
ECu = db.get(['Cu=1', 'Pd=0'])
EPd = db.get(['Cu=0', 'Pd=1'])
IDs, C, Qe, Ne = [], [], [], []
for d in db.select(['atat']):
    IDs += [d.id]  # keep the database id so the worst-fit structure can be retrieved
    nPd = d.symbols.count('Pd')
nCu = d.symbols.count('Cu')
C += [float(nPd) / (nPd + nCu)]
Qe += [(d.nrg - nPd*EPd.nrg - nCu*ECu.nrg) / d.natoms]
Ne += [(d.nnE - nPd*EPd.nnE - nCu*ECu.nnE) / d.natoms]
Qe = np.array(Qe)
Ne = np.array(Ne)
# View the most poorly fit structure
from ase.visualize import view
Err = abs(Ne - Qe)
i = list(Err).index(max(Err))
from ase.io import write
write('./images/atat-poorfit.png', db.get_atoms(IDs[i]), show_unit_cell=2)
fig, ax = plt.subplots(2, sharex=True, figsize=(6, 4))
ax[0].scatter(C, Qe, color='k')
ax[0].scatter(C, Ne, color='b')
ax[1].scatter(C, Ne - Qe, color='r')
ax[0].set_xlim(0, 1)
ax[0].set_ylim(-0.12, 0.12)
ax[1].set_ylim(-0.15, 0.05)
ax[1].set_yticks(ax[1].get_yticks()[:-1])
ax[1].set_xlabel('Composition Cu$_{1-x}$Pd$_{x}$, ($x$)')
ax[0].set_ylabel('Heat of formation\n(eV/atom)')
ax[1].set_ylabel('Error\n(eV/atom)')
plt.tight_layout(h_pad=0.0)
plt.savefig('./images/db6-cluster-expansion.png')
The first ≈ 2000 calculations come from the ATAT-enumerated structures. These structures have “sharp” unit cells, which have historically been difficult for neural networks to fit.
from ase.db import connect
import matplotlib.pyplot as plt
import numpy as np
db = connect('temp/db6.db')
ECu = db.get(['Cu=1', 'Pd=0'])
EPd = db.get(['Cu=0', 'Pd=1'])
ID, C, dE, nndE = [], [], [], []
for d in db.select([]):
nPd = d.symbols.count('Pd')
nCu = d.symbols.count('Cu')
ID += [d.id]
C += [float(nPd) / (nPd + nCu)]
dE += [(d.nnE - d.nrg) / d.natoms]
nndE += [d.nndE / d.natoms]
fig, ax = plt.subplots(1, 2, sharey=True, figsize=(6, 4))
ax[0].scatter(ID, dE, color='r')
ax[1].scatter(ID, nndE, color='b')
ax[0].set_xlim(0, max(ID))
ax[1].set_xlim(0, max(ID))
ax[0].set_xticks(ax[0].get_xticks()[1:])
ax[1].set_xticks(ax[1].get_xticks()[1:])
fig.autofmt_xdate()
ax[0].set_xlabel('Database ID')
ax[1].set_xlabel('Database ID')
ax[0].set_ylabel('Error (eV/atom)')
plt.tight_layout(w_pad=-0.1)
plt.savefig('./images/db6-error.png')
from ase.db import connect
import matplotlib.pyplot as plt
import numpy as np
db = connect('temp/db6.db')
ECu = db.get(['Cu=1', 'Pd=0']).nrg
EPd = db.get(['Cu=0', 'Pd=1']).nrg
C, E = [], []
for d in db.select(['atat']):
nPd = d.symbols.count('Pd')
nCu = d.symbols.count('Cu')
C += [float(nPd) / (nPd + nCu)]
E += [(d.nrg - nPd*EPd - nCu*ECu) / d.natoms]
E = np.array(E)
d = {c:[] for c in sorted(set(C))}
for c, e in zip(C, E):
d[c] += [e]
md = []
for c in sorted(d.keys()):
md += [[c, min(d[c])]]
md = np.array(md)
gs = [0]
while gs[-1] != 32:
slopes = []
for c, e in md[gs[-1]+1:]:
x = [md[gs[-1]][0], c]
y = [md[gs[-1]][1], e]
slopes += [np.polyfit(x, y, 1)[0]]
gs += [gs[-1] + 1 + slopes.index(min(slopes))]
def Margules(x, A, B, C, D):
R = 8.31446 # J/mol/K
temp = 1. # K
return (A + B*(x - (1 - x)) + C*(x - (1 - x))**2. + D*(x - (1 - x))**3.)*(1 - x)*x*R*temp
from scipy.optimize import curve_fit
x = np.linspace(0, 1, 100)
p, _ = curve_fit(Margules, md.T[0], md.T[1])
fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(md.T[0], md.T[1], color='b', label='Low energy cfgs.')
ax.plot(x, Margules(x, *p), color='b')
ax.scatter(md[gs].T[0], md[gs].T[1], color='r', label='Ground state hull')
lfits = []
for i, j in enumerate(md[gs].T[0][:-1]):
fit = np.poly1d(np.polyfit(md[gs].T[0][i:i+2],
md[gs].T[1][i:i+2], 1))
lfits += [fit]
xtemp = np.linspace(md[gs].T[0][i], md[gs].T[0][i+1])
ax.plot(xtemp, fit(xtemp), 'r-')
ax.set_xlim(0, 1)
ax.set_ylim(-0.12, 0)
ax.set_xlabel('Composition Cu$_{1-x}$Pd$_{x}$, ($x$)')
ax.set_ylabel('Heat of formation\n(eV/atom)')
ax.legend(loc='best')
plt.tight_layout()
plt.savefig('./images/temp.png')
- The NN is known to work well for 5-layer slabs, so we began with similar simulations (10 × 10 × 5; see the setup sketch below).
- In this case the bottom two layers are fixed to the ideal bulk configuration at 1/4th Cu.
- Simulations performed using 10 × 10 × 20 slabs were also attempted.
- These simulations are much slower, and appear to (incorrectly) predict Pd segregation as well.
[[./images/T800-Pd30.png]]
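For reference, a minimal setup sketch of the slab composition bookkeeping described above (assuming the current ASE API, a random initial decoration, and EMT standing in for the trained NN calculator; the MC loop itself is the script listed at the end of this section):

#+BEGIN_SRC python
# Sketch: 10x10x5 fcc(111) CuPd slab where only the top three layers may swap,
# keeping the bottom two layers at the ideal bulk decoration.
import random
from ase.build import fcc111
from ase.calculators.emt import EMT

# ~Vegard lattice constant for a 1/4 Cu, 3/4 Pd bulk (illustrative)
slab = fcc111('Cu', size=(10, 10, 5), a=3.863, vacuum=6.0)

# Decorate to roughly 1/4 Cu overall (3/4 of the sites become Pd).
n_pd = int(round(0.75 * len(slab)))
for i in random.sample(range(len(slab)), n_pd):
    slab[i].symbol = 'Pd'

# ase.build surfaces tag layers from the top down (tag 1 = surface layer),
# so tags 4 and 5 mark the two bottom layers that stay fixed.
swappable = [atom.index for atom in slab if atom.tag < 4]

slab.calc = EMT()
print('{} of {} atoms participate in swaps'.format(len(swappable), len(slab)))
#+END_SRC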
There are two primary sources of error that may be contributing to the incorrect Pd segregation:
- Errors in the fitted NN energy
- Neglecting relaxation energies
- All slab calculations shown above are unit cells of at least 500 atoms. This is too many to perform direct DFT validation on.
- Instead, we can validate using predictions from a second NN with a different framework (see the sketch after this list).
- Errors from the second neural network appear significant at first glance
– likely because configurational energy differences are small
- Predictions are relatively consistent for all configurations predicted by the first NN.
– This indicates the predictions are accurate to DFT within the fitting error (5 meV/atom).
- Validation against “dummy” unit cells of much smaller size can also be done.
- This way we can compare DFT energies directly to the NN predictions.
- Calculations are not yet complete.
- Pd segregation may also be predicted because relaxation energies are neglected in the MC.
- Calculation times are already much larger than those of more traditional MC methods.
- Calculations for the same unit cells with additional relaxations were started this morning, and have yet to finish.
– Performing an additional relaxation step within the MC is NOT feasible.
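A minimal sketch of the two-NN cross-check described above (the second network path and the database name are placeholders; the Amp loading call mirrors the ones used elsewhere in this document):

#+BEGIN_SRC python
# Sketch: flag MC configurations where two independently trained NNs disagree
# by more than the fitting error (per atom).
from ase.db import connect
from amp import Amp

nn1 = Amp('networks/db6/40-7-7-1/trained-parameters.json')
nn2 = Amp('networks/db6/40-8-8-1/trained-parameters.json')  # hypothetical second framework

threshold = 0.005  # eV/atom, on the order of the fitting error
db = connect('MC/T800-30.db')  # placeholder MC trajectory database

flagged, total = [], 0
for row in db.select():
    total += 1
    atoms = db.get_atoms(row.id)
    energies = []
    for calc in (nn1, nn2):
        atoms.set_calculator(calc)
        energies.append(atoms.get_potential_energy())
    if abs(energies[0] - energies[1]) / len(atoms) > threshold:
        flagged.append(row.id)

print('{} of {} configurations differ by more than {} eV/atom'.format(
    len(flagged), total, threshold))
#+END_SRC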
import matplotlib.pyplot as plt
from ase.units import kB
import numpy as np
from matplotlib.widgets import Slider, Button, RadioButtons
fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.2)
dE = np.arange(0, 0.5, 0.001)
T0 = 273.15
E0 = 0
def P(dE, T):
return np.exp(-dE / (kB * T))
line, = plt.plot(dE, P(dE, T0), 'r-', lw=2)
plt.axis([0, 0.5, 0, 1])
marker, = plt.plot([E0], [P(E0, T0)], 'ro', ms=5)
axT = plt.axes([0.15, 0.05, 0.72, 0.03])
# Slider object
sT = Slider(axT, 'Temp (K)', 0, 1000, valinit=T0)
def update(val):
temp = sT.val
line.set_ydata(P(dE, temp))
fig.canvas.draw_idle()
sT.on_changed(update)
def onmove(event):
x = event.xdata
temp = sT.val
ax.set_title('dE: {0:1.3f}\nProbability: {1:1.3f}'.format(x, P(x, temp)))
marker.set_xdata(x)
marker.set_ydata(P(x, temp))
ax.figure.canvas.draw()
mv = fig.canvas.mpl_connect('motion_notify_event', onmove)
ax.set_xlabel('$\Delta$E (eV)')
ax.set_ylabel('Probability')
plt.savefig('./images/boltzmann.png')
plt.show()
- Why not use EMT for all CuPd work?
from ase.lattice.tetragonal import CenteredTetragonal as bct
from ase.db import connect
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
from jasp import *
from jbutil import makedb
JASPRC['queue.walltime'] = '24:00:00'
# Retrieve the data
db = connect('data.db')
A, C = [], []
E = {}
for d in db.select([]):
if d.a not in E.keys():
E[d.a] = {}
if d.ca not in E[d.a].keys():
E[d.a][d.ca] = d.total_energy
A.append(d.a)
C.append(d.ca)
with jasp('DFT/bulk=fcc/config=3/xc=PBE/gga=PS') as calc:
atoms = calc.get_atoms()
cell = atoms.get_cell()
efcc = atoms.get_potential_energy()
afcc = np.linalg.norm(cell[0])
with jasp('DFT/bulk=bcc/config=3/xc=PBE/gga=PS') as calc:
atoms = calc.get_atoms()
cell = atoms.get_cell()
ebcc = atoms.get_potential_energy()
abcc = np.linalg.norm(cell[0])
uA = np.unique(A)
uC = np.unique(C)
X, Y = np.meshgrid(uA, uC)
Z = np.zeros(X.shape)
for i, a in enumerate(uA):
for j, c in enumerate(uC):
Z[j][i] = E[a][c]
fccZ, bccZ = [], []
eminZ = []
cminZ = []
aminZ = []
for i, c in enumerate(uC[5:-5]):
data = Z[i+5, :]
ind = data.tolist().index(min(data))
eminZ.append(min(data))
cminZ.append(c)
aminZ.append(uA[ind])
rng = [ebcc, -10.2]
Z[Z > rng[1]] = np.nan
fig = plt.figure(figsize=(8, 6))
ax = fig.gca(projection='3d')
CM = cm.autumn
cset = ax.contourf(X, Y, Z, zdir='z', offset=-10.8, cmap=CM, vmin=rng[0], vmax=rng[1])
ax.plot_surface(X, Y, Z,
rstride=1,
cstride=1,
cmap=CM,
linewidth=0,
vmin=rng[0],
vmax=rng[1])
ax.scatter(aminZ[1:-1], cminZ[1:-1], eminZ[1:-1], c='k')
ax.plot([afcc, afcc], [np.sqrt(2), np.sqrt(2)], [-10.8, eminZ[-1]], 'go-', zorder=99)
ax.text(afcc, np.sqrt(2), eminZ[-1]+0.01, 'fcc', color='g', zorder=99, size='large')
ax.plot([abcc, abcc], [1.0, 1.0], [-10.8, eminZ[0]], 'bo-', zorder=99)
ax.text(abcc, 1.0, eminZ[0]+0.01, 'bcc', color='b', zorder=99, size='large')
ax.set_xlabel('a')
ax.set_xlim(2.4, 3.2)
ax.set_ylabel('c/a')
ax.set_ylim(0.8, 1.6)
ax.set_zlabel('Total energy (eV)')
ax.set_zlim(-10.8, rng[1])
plt.tight_layout()
plt.savefig('images/3D-bcc-pathway.png')
for i, ca in enumerate(cminZ):
atoms = bct('Cu', latticeconstant={'a': aminZ[i], 'c/a': ca})
atoms[1].symbol = 'Pd'
wd = 'DFT/bulk=bct/config=3/pathway=True/ca={0}/xc=PBE/gga=None'.format(ca)
with jasp(wd,
xc='PBE',
encut=400,
kpts=(12, 12, 12),
nsw=20,
ibrion=2,
isif=7,
ediff=1e-9,
atoms=atoms) as calc:
try:
calc.calculate()
except(VaspQueued, VaspSubmitted):
pass
from ase.lattice.tetragonal import CenteredTetragonal as bct
import numpy as np
from asap3 import EMT
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
# 'eos' is assumed to be supplied externally (e.g. an org-babel :var holding the
# DFT fcc and bcc lattice constants as a bracketed, comma-separated string).
aref = [float(s) for s in eos[1:-2].split(',')]
# Create a list of sample points
ca0 = np.linspace(0.9, 1.5, 98)
ca0 = np.append(ca0, [1.0, np.sqrt(2)])
a0 = np.linspace(2.5, 3.1, 98)
a0 = np.append(a0, [aref[0], aref[1]])
a0 = np.unique(a0)
ca0 = np.unique(ca0)
X, Y = np.meshgrid(a0, ca0)
xifcc = list(X[0]).index(aref[0])
yifcc = list(Y.T[0]).index(np.sqrt(2))
xibcc = list(X[0]).index(aref[1])
yibcc = list(Y.T[0]).index(1.0)
Z = np.zeros(X.shape)
for i, a in enumerate(a0):
for j, ca in enumerate(ca0):
# Generate structure based on sample points
atoms = bct('Cu', latticeconstant={'a': a, 'c/a': ca})
atoms[1].symbol = 'Pd'
atoms.set_calculator(EMT())
        Z[j][i] = atoms.get_potential_energy()
eminZ, cminZ, aminZ = [], [], []
for i, c in enumerate(ca0[yibcc:yifcc+1]):
data = Z[i+yibcc, :]
ind = list(data).index(min(data))
eminZ.append(min(data))
cminZ.append(c)
aminZ.append(a0[ind])
delta = (max(eminZ) - min(eminZ))
rng = [min(eminZ), max(eminZ) + delta]
Z[Z > rng[1]] = np.nan
fig = plt.figure(figsize=(8, 6))
ax = fig.gca(projection='3d')
CM = cm.autumn
cset = ax.contourf(X, Y, Z, zdir='z', offset=rng[0] - delta, cmap=CM, vmin=rng[0], vmax=rng[1])
ax.plot_surface(X, Y, Z,
rstride=1,
cstride=1,
cmap=CM,
linewidth=0,
vmin=rng[0],
vmax=rng[1])
ax.scatter(aminZ[1:-1], cminZ[1:-1], eminZ[1:-1], c='k')
ax.plot([aref[0], aref[0]], [np.sqrt(2), np.sqrt(2)], [rng[0] - delta, eminZ[-1]], 'go-', zorder=99)
ax.text(aref[0], np.sqrt(2), eminZ[-1]+0.01, 'fcc', color='g', zorder=99, size='large')
ax.plot([aref[1], aref[1]], [1.0, 1.0], [rng[0] - delta, eminZ[0]], 'bo-', zorder=99)
ax.text(aref[1], 1.0, eminZ[0]+0.01, 'bcc', color='b', zorder=99, size='large')
ax.set_xlabel('a')
ax.set_xlim(2.4, 3.2)
ax.set_ylabel('c/a')
ax.set_ylim(0.8, 1.6)
ax.set_zlabel('Total energy (eV)')
ax.set_zlim(rng[0] - delta, rng[1])
plt.tight_layout()
plt.savefig('./images/3D-EMT-pathway.png')
plt.figure()
plt.plot(cminZ, eminZ, 'k-')
plt.xlim(min(cminZ), max(cminZ))
plt.annotate('bcc', xy=(cminZ[0], eminZ[0]),
xytext=(cminZ[0] + .02, eminZ[0] - .005),
size=20, ha='left', arrowprops=dict(arrowstyle='->'))
plt.annotate('fcc', xy=(cminZ[-1], eminZ[-1]),
xytext=(cminZ[-1] - .02, eminZ[-1] + .005),
size=20, ha='right', arrowprops=dict(arrowstyle='->'))
plt.xlabel('c/a ratio')
plt.ylabel('Total energy (eV)')
plt.savefig('./images/2D-EMT-pathway.png')
import os
import numpy as np
cmd = ''
for A in np.linspace(12, 96, 8):
script = """#!/usr/bin/env python
import numpy as np
from ase.atoms import Atoms
import random
from ase.units import kB
from ase.db import connect
def main(atoms, dbname, T=800, steps=10000):
db = connect(dbname)
    # Setting up variables for canonical MC
symbols = atoms.get_chemical_symbols()
chem_bins = {1}
for i, s in enumerate(symbols):
chem_bins[s] += [i]
# Calculate the initial energy and store it
nrg = atoms.get_potential_energy()
dummy = Atoms.copy(atoms)
db.write(dummy, nrg=nrg)
# Perform MC steps
attempt, success = 0, 0
while success < steps:
# First, choose two chemicals to swap
sym1, sym2 = random.sample(chem_bins.keys(), 2)
random.shuffle(chem_bins[sym1])
ind1 = chem_bins[sym1][-1]
random.shuffle(chem_bins[sym2])
ind2 = chem_bins[sym2][-1]
# Create new atoms object to test
new_atoms = Atoms.copy(atoms)
new_atoms.set_calculator(atoms.get_calculator())
# Update the atoms object
new_atoms[ind1].symbol, new_atoms[ind2].symbol = sym2, sym1
# Calculate the energy of the new system
new_nrg = new_atoms.get_potential_energy()
# Stores energy in continuously growing list
# potentially memory intensive, but faster than writing to disk
if new_nrg < nrg:
atoms = new_atoms
nrg = new_nrg
chem_bins[sym2][-1] = ind1
chem_bins[sym1][-1] = ind2
dummy = Atoms.copy(atoms)
db.write(dummy, nrg=nrg)
success += 1
elif np.exp(-(new_nrg - nrg) / (kB * T)) > np.random.rand():
atoms = new_atoms
nrg = new_nrg
chem_bins[sym2][-1] = ind1
chem_bins[sym1][-1] = ind2
dummy = Atoms.copy(atoms)
db.write(dummy, nrg=nrg)
success += 1
attempt += 1
return success/attempt
from ase.lattice.cubic import FaceCenteredCubic as fcc
from amp import Amp
atoms = fcc('Cu', size=(3, 3, 3),
latticeconstant=(3.939-3.634)*({0}/108.) + 3.634)
for i in range(int({0})):
atoms[i].symbol = 'Pd'
atoms.set_calculator(Amp('/home/jacob/research/cluster-expansion/networks/db5/40-7-7-1/trained-parameters.json'))
main(atoms, '/home/jacob/research/cluster-expansion/MC/108-{0}.db', steps=10000)
""".format(int(A), '{sym: [] for sym in set(symbols)}')
name = 'MC/run-{}.py'.format(int(A))
cmd += '{} & '.format(name.replace('MC', '.'))
# No copied files
if os.path.exists(name):
os.unlink(name)
with open(name, 'w') as f:
f.write(script)
os.chmod(name, 0755)
print(cmd)