Add density cutoff option to fix FDBM for all-electron densities. #313

Open · wants to merge 16 commits into base: main
Conversation

@NikoOinonen (Collaborator) commented Oct 18, 2024

Fixes #276

Adds an option to cut off high-density values in the FDBM Pauli integral. This enables using the FDBM simulation with all-electron densities, where high-density values near the nuclei cause artifacts in the resulting images.
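To illustrate the idea (a minimal sketch only, not the actual ppafm implementation; clamp_density, rho, and rho_cutoff are hypothetical names), a hard cutoff simply caps the sample density before it enters the Pauli convolution:

import numpy as np

def clamp_density(rho, rho_cutoff):
    """Cap density values at rho_cutoff to suppress the sharp all-electron
    core peaks near the nuclei that otherwise dominate the Pauli integral."""
    return np.minimum(rho, rho_cutoff)

# Example: a toy 1D "density" with a sharp core-like peak
rho = 1.0 / (np.abs(np.linspace(-2.0, 2.0, 401)) + 1e-3) ** 2
rho_clamped = clamp_density(rho, rho_cutoff=100.0)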


Not releasing the tip density arrays and not redoing the FFT every time makes the simulation run 2-3x faster.
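As a rough sketch of the caching idea mentioned above (the class and variable names TipDensityCache, rho_tip, and rho_sample are made up for illustration and do not correspond to the actual ppafm code): keep the FFT of the tip density alive and reuse it for every convolution instead of recomputing it on each call.

import numpy as np

class TipDensityCache:
    """Compute the FFT of the tip density once and reuse it, instead of
    releasing the array and redoing the transform on every evaluation."""

    def __init__(self, rho_tip):
        self._rho_tip_fft = np.fft.fftn(rho_tip)  # computed once, kept in memory

    def convolve(self, rho_sample):
        # Convolution via the cached transform (periodic boundaries assumed)
        return np.real(np.fft.ifftn(np.fft.fftn(rho_sample) * self._rho_tip_fft))
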
@ondrejkrejci (Collaborator) left a comment:

Hi, I have several questions about this.

Resolved review threads: ppafm/cli/gui/ppafm_gui.py (1), ppafm/ml/Generator.py (4)
Inline review thread on the diff context:

maximum,
)
# fmt: on
elif clamp_type == "soft":
@ondrejkrejci (Collaborator):

Nice possibility, the soft clamping. Is it usable through the CLI?

@NikoOinonen (Collaborator, Author):

No, I only implemented it here for the GPU when I was testing different clamping types. I decided that the soft clamp is not really necessary, so I didn't bother putting it anywhere else. Do you think this would be useful as a function in the CPU code?

@ondrejkrejci (Collaborator):

OK, this is something we should talk about in the meeting. In principle I like to have a lot of possibilities, but there is already a lot of unused code, which is why I ask.
Good for now.

Resolved review threads: ppafm/ocl/field.py (4)
@ondrejkrejci (Collaborator) left a comment:

Good to go from my point of view.

@ProkopHapala (Collaborator):

I was recently using a different soft_clamp function which is simpler (no need for exp), I think, but perhaps as long as it works it does not matter.

I was trying to compare it with yours, but I am not sure if I am using yours correctly.
https://colab.research.google.com/drive/1zM1QtqKDIx7ULy8geUq7FPVWitqfwK7R?usp=sharing

import numpy as np
import matplotlib.pyplot as plt

def soft_clamp(y, dy, y1, y2):
    """
    Applies a soft clamp to y, smoothly transitioning values above y1 towards y2.
    Also computes the derivative dy accordingly using the chain rule.

    Parameters:
    - y: np.ndarray, input values to clamp
    - dy: np.ndarray, derivatives of y with respect to some variable x
    - y1: float, value above which clamping sets in
    - y2: float, asymptotic upper limit that the clamped values approach

    Returns:
    - y_new: np.ndarray, clamped y values
    - dy_new: np.ndarray, updated derivatives
    """
    y_new  = y.copy()
    dy_new = dy.copy()
    mask   = y_new > y1
    y12    = y2 - y1
    invdy  = 1.0 / y12
    z = (y_new[mask] - y1) * invdy
    y_new[mask]   = y1 + y12 * (1 - 1 / (1 + z))
    dy_new[mask] *= 1.0 / (1.0 + z)**2
    return y_new, dy_new

def soft_clamp_exp(array_in, min_value, max_value, width=10.0):
    """
    Soft clamp function using NumPy masks, smoothly clamping values.

    Parameters:
    - array_in: np.ndarray, input array
    - min_value: float, minimum clamp value
    - max_value: float, maximum clamp value
    - width: float, width of the transition region

    Returns:
    - array_out: np.ndarray, output array after applying the soft clamp
    """
    array_out = array_in.copy()

    # Mask for values greater than max_value
    if max_value < np.inf:
        mask_max = array_out > max_value
        array_out[mask_max] = (
            (array_out[mask_max] - max_value) / 
            (1 + np.exp((array_out[mask_max] - max_value) / width)) + max_value
        )
    
    # Mask for values less than min_value
    if min_value > -np.inf:
        mask_min = array_out < min_value
        array_out[mask_min] = (
            (array_out[mask_min] - min_value) / 
            (1 + np.exp((array_out[mask_min] - min_value) / -width)) + min_value
        )

    return array_out


# Example usage:
# y(x) = 1 / (x - 2)^6, derivative dy/dx = -6 / (x - 2)^7

# Define the domain; with 1000 points, the grid never hits x = 2 exactly,
# so 1/(x - 2)^6 stays finite (no division by zero)
x  = np.linspace(0.0, 4.0, 1000)
y  = 1. / (x - 2.)**6
dy = -6. / (x - 2.)**7

y1 = 1.0
y2 = 3.0

# Apply the soft clamp
y_clamped, dy_clamped = soft_clamp(y, dy, y1, y2 )

y_clamped_exp = soft_clamp_exp(y, -y1, y1, width=(y2-y1) )

# Plotting the results
plt.figure(figsize=(12, 6))

# Plot y and its clamped version
plt.subplot(1, 2, 1)
plt.plot(x, y, label=r"$y(x) = \frac{1}{(x-2)^6}$", color='blue', alpha=0.5)
plt.plot(x, y_clamped, label="Clamped y", color='red')
plt.plot(x, y_clamped_exp, label="Clamped_exp  y", color='g')
plt.axhline(y1, ls='--', c='k', label="y1 and y2")
plt.axhline(y2, ls='--', c='k')
plt.title("Soft Clamping Function")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.grid(True)
plt.ylim(0.0,5.0)

# Plot dy and its clamped derivative
plt.subplot(1, 2, 2)
plt.plot(x, dy, label=r"$dy/dx = -\frac{6}{(x-2)^7}$", color='blue', alpha=0.5)
plt.plot(x, dy_clamped, label="Clamped dy/dx", color='red')
#plt.plot(x, dy_clamped_exp, label="Clamped_exp dy/dx", color='g')
plt.axhline(0, ls='--', c='k')  # Reference line for zero derivative
plt.title("Derivative of Soft Clamped Function")
plt.xlabel("x")
plt.ylabel("dy/dx")
plt.legend()
plt.grid(True)
plt.ylim(-15.0,15.0)

plt.tight_layout()
plt.show()
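(A note on how the two forms relate, based on the code above: soft_clamp maps values above y1 to y1 + (y2 - y1) * z / (1 + z) with z = (y - y1) / (y2 - y1), so the output saturates at y2 and the derivative is damped by 1/(1 + z)^2; soft_clamp_exp instead damps the excess over max_value by a logistic factor, so it also saturates, at max_value, but the excess decays exponentially rather than algebraically.)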

[Figure: left panel shows y(x) with the two soft-clamped versions and the y1, y2 levels; right panel shows dy/dx and the clamped derivative]

@NikoOinonen (Collaborator, Author):

We agreed in the meeting that I should add the AIMS density to the pyridine example to demonstrate the problem and the fix. However, the artifacts I observed in my tests do not manifest noticeably with pyridine, since the heaviest element there is nitrogen, which does not have such a high core density.

In my tests I observed the effect at least with Cl atoms, so a molecule containing that or a heavier element would work better. Any suggestions for which molecule we want? If not, I will just pick something at random.

@ondrejkrejci @mondracek @ProkopHapala

@ProkopHapala (Collaborator):

> …that or a heavier element would work better. Any suggestions for which molecule we want? If not, I will just pick something at random.

What was quite nice: you were testing the GUI with some benzene substituted by -Cl and -Br groups, if I remember correctly. That could be quite nice. Maybe even nicer would be pyridine substituted by these halogen groups.

But in general, pick whatever you find convenient.

@NikoOinonen (Collaborator, Author):

> What was quite nice: you were testing the GUI with some benzene substituted by -Cl and -Br groups, if I remember correctly.

Yes, 1-bromo-3,5-dichlorobenzene. We actually already have it in the examples as an xyz file: https://github.com/Probe-Particle/ppafm/tree/main/examples/benzeneBrCl2.

> Maybe even nicer would be pyridine substituted by these halogen groups.

@ProkopHapala Something like this: https://pubchem.ncbi.nlm.nih.gov/compound/606256?

@ProkopHapala (Collaborator):

Yes

@mondracek (Collaborator) commented Nov 12, 2024

@NikoOinonen did you test this PR in the CLI version? Did it work? I tried to run FDBM for some FHI-AIMS-generated XSF files and I keep running into issue #312. (I had to do some manual editing of the XSF files to work around the problem that our code does not really understand these XSF files from FHI-AIMS in their original form, but that does not seem to be the origin of the issue.)

@NikoOinonen (Collaborator, Author):

@mondracek I was just now trying to get it to work in the CLI, but I ran into the exact same issue. @yakutovicha Do you think you can finish #316 soon?

@NikoOinonen (Collaborator, Author):

I added the example, but only for the GPU version until the CLI version issue gets fixed.

@ProkopHapala Can you add this file to the Zenodo record for the examples when you create it: https://www.dropbox.com/scl/fi/ilx7oe6iax375tqdleuax/BrClPyridine_hartree_density.tar.gz?rlkey=5j5xyc9n36zgw66xsb5mq8rvd&st=wyenjh4n&dl=0

@NikoOinonen (Collaborator, Author):

@mondracek Regarding using densities from FHI-aims, I just used the cube output and then converted it to XSF.

I used the following script for the conversion (not sure if the origin is handled correctly, so it's better to keep it at (0, 0, 0)):

from pathlib import Path
from ase.io.cube import read_cube
from ppafm.io import saveXSF, primcoords2Xsf, Hartree2eV
import numpy as np

def cube_to_xsf(cube_path, xsf_path, scale=1.0):

    # Read the Gaussian cube file with ASE
    with open(cube_path, "r") as f:
        cube = read_cube(f, read_data=True)
        data = cube['data'].T * scale
        atoms = cube['atoms']
        origin = cube['origin']

    # Shift atom positions by the cube origin and build the lvec block
    # (origin as the first row, followed by the three lattice vectors)
    lattice = atoms.cell
    xyzs = atoms.get_positions() - origin
    Zs = atoms.get_atomic_numbers()
    lvec = np.concatenate([origin[None], lattice])

    # Write the XSF file with ppafm's I/O helpers
    xsf_path.parent.mkdir(exist_ok=True)
    atomstring = primcoords2Xsf(Zs, xyzs.T, lvec)
    saveXSF(xsf_path, data=data, lvec=lvec, head=atomstring)

if __name__ == "__main__":

    input_dir = Path(".")
    output_dir = Path(".")

    cube_to_xsf(input_dir / 'cube_001_hartree_potential.cube', output_dir / 'hartree.xsf', scale=Hartree2eV)
    cube_to_xsf(input_dir / 'cube_002_total_density.cube', output_dir / 'density.xsf', scale=1.0)
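For reference, assuming the script above is saved under some name such as cube_to_xsf.py (hypothetical), running it from the directory that contains the FHI-aims outputs cube_001_hartree_potential.cube and cube_002_total_density.cube writes hartree.xsf (converted from Hartree to eV) and density.xsf into the same directory.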

@ondrejkrejci (Collaborator):

> @mondracek Regarding using densities from FHI-aims, I just used the cube output and then converted it to XSF. I used the following script for the conversion […]

This script should be in our utilities.

@NikoOinonen (Collaborator, Author):

> This script should be in our utilities.

I'm not sure about that. It's quite specific to this case, where you are computing exactly the Hartree potential and the density using FHI-aims. And again, the origin is handled somewhat weirdly: I had a cube file with a non-zero origin, and after conversion it displayed correctly in VESTA but did not work in ppafm. I think the real solution is to fix #190 so we don't have to do any conversions.

@ondrejkrejci (Collaborator) commented Nov 12, 2024 via email

@NikoOinonen (Collaborator, Author):

@mondracek I added the CLI run script, so you can try it.

It's a little bit different from the GPU Python script, because it uses the dz2 tip for the electrostatics instead of the delta density, which had the wrong grid dimensions. I tried the same method of subtracting the valence electrons as in the pyridine example, but for some reason it gave me really severe artifacts from the electrostatics, so I did it this way. It's not ideal, but it demonstrates the effect of the cutoff.

Successfully merging this pull request may close: Fixing FHI-aims electron densities for FDBM.