Enh integration #27

Open · wants to merge 3 commits into base: master
91 changes: 74 additions & 17 deletions xpdView/azimuthal.py
@@ -15,12 +15,71 @@
'''
This file handles the pyFAI setup for quick azimuthal integration of the data
'''
import os

import numpy as np
import yaml
from tifffile import imread

import pyFAI
import pyFAI.calibrant


class Azimuthal(object):
    # init at top to save time
    ai = pyFAI.AzimuthalIntegrator()


def _npt_cal(config_dict):
    """ config_dict should be a pyFAI calibration dict """
    try:
        x_0, y_0 = (config_dict['centerX'], config_dict['centerY'])
    except KeyError:
        print("Not a pyFAI calibration dictionary, default "
              "beam center will be set")
        x_0, y_0 = (1024, 1024)
    dist = np.sqrt((2048 - x_0)**2 + (2048 - y_0)**2)

use np.hypot.
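For context, `np.hypot(a, b)` computes the same quantity as `np.sqrt(a**2 + b**2)` while avoiding overflow in the intermediate squares; a minimal sketch using the fallback beam center from the diff:

```python
import numpy as np

x_0, y_0 = 1024, 1024  # sample beam center, matching the fallback above
d1 = np.sqrt((2048 - x_0) ** 2 + (2048 - y_0) ** 2)
d2 = np.hypot(2048 - x_0, 2048 - y_0)  # same value, single readable call
assert np.isclose(d1, d2)
```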

Also doesn't pyFAI know the shapes of the various detectors? Why not pull the detector shape from that rather than having magic numbers?

Also also, does this mean that if your beam is incident on the 2048th x 2048th pixel that the dist and/or number of points in your XRD pattern is zero?

Author:
About the detector pixel range:
Because this is the XPD workflow, the Perkin detector class has been used during the calibration step, so it would already have gone wrong if the detector were not 2048 by 2048. In terms of programming, I agree we should use the Perkin detector class instead of hardwired numbers.

About the (2048, 2048) edge:
Correct, that is a bug; we need to consider all four corners.

@CJ-Wright (Aug 23, 2016):

Actually, shouldn't you use whatever detector is in the pyFAI instance? I imagine that XPD will get detectors other than just the Perkin Elmer at some point.
Also this code might be useful/applicable to any x-ray scattering/diffraction beamline and they might not all use PE detectors

    return dist


def xpdView_integrate(input_dir):
    """ Integrate every .tif file found in input_dir (file-based operation) """
    tif_list = [os.path.join(input_dir, f) for f in os.listdir(input_dir)
                if os.path.splitext(f)[1] == '.tif']
    # calibration dicts
    cfg_list = []
    cfg_inuse = None  # init

    for tif in tif_list:
        stem, ext = os.path.splitext(tif)
        cfg_name = os.path.join(input_dir, stem + '.yaml')
        if os.path.isfile(cfg_name):
            with open(cfg_name, 'r') as f:
                cfg_inuse = yaml.load(f)
        cfg_list.append(cfg_inuse)

    while any(x is None for x in cfg_list):
        ind = cfg_list.index(None)
        cfg_list[ind] = cfg_inuse
        print("WARNING: calibration parameters corresponding to {} are "
              "missing; using an alternative set of parameters from {}.\n"
              "Integration results might be incorrect"
              .format(tif_list[ind], cfg_name))
    # use the last one loaded, user's liability
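As an aside, the back-fill above can be written as a single list comprehension; a minimal sketch, where the sample dicts are hypothetical stand-ins for `cfg_list` and `cfg_inuse`:

```python
# Stand-ins for cfg_list / cfg_inuse from the loop above (made-up values)
cfg_inuse = {'centerX': 1020.5, 'centerY': 1024.0}  # last config loaded
cfg_list = [None, cfg_inuse, None]

# Replace every missing entry with the last successfully loaded config
cfg_list = [cfg if cfg is not None else cfg_inuse for cfg in cfg_list]
assert all(cfg is not None for cfg in cfg_list)
```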

    # integration
    if len(cfg_list) == len(tif_list):
        for i in range(len(tif_list)):
            w_name = tif_list[i].replace('.tif', '.chi')
            cfg_dict = cfg_list[i]
            npt = _npt_cal(cfg_dict)
            img = imread(os.path.join(input_dir, tif_list[i]))
            ai = Azimuthal.ai
            ai.setPyFAI(**cfg_dict)
            integration_dict = {'filename': w_name,
                                'polarization_factor': 0.99}
            rv = ai.integrate1d(img, npt, **integration_dict)

Does this mean that this doesn't support masks in the integration?

Collaborator:

This would be something that @chiahaoliu would have to answer. I am assuming that we could support masks. The purpose of this eventual pull request is to allow for configuration files (which I believe have information about the masks in them) to be incorporated in the integration process. The current Azimuthal class is just meant to use a simple version of pyFAI to allow for quick azimuthal integration of the data. That way users can get a general feel for what their integrated patterns look like.


Ah, ok. What is the structure of these "configuration files"?

Collaborator:
Good question. Meaning I have no idea

@CJ-Wright (Aug 22, 2016):
If there is to be a "configuration file" it would be good to know what is in it so that the code could be built around it. Although for the record I am a bit skeptical of using a configuration file to manage the masks and other information.

@chiahaoliu (Aug 23, 2016), author:

mask is another optional argument of the integrate1d function, which takes a numpy array; please refer to the pyFAI docstring. So far we don't have a consistent way of masking, and we haven't decided which way to implement yet.

I remember you have an auto-mask function; maybe we can integrate it into this workflow.

I am aware of the function signature. My point was that you didn't leave any room for a mask or any other kwargs to be passed to the pyFAI function, limiting its overall functionality and extensibility. Also, please note that you hard-coded the polarization factor into the integration. You might consider leaving these as kwargs in your function with the desired defaults.

You are more than welcome to use my masking function. My version of the complete workflow is here as well, and you are welcome to use it. I should be adding analysisstore capabilities to it soon.
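A hedged sketch of what that suggestion could look like. The helper name `integrate_one` is hypothetical, and the `_FakeIntegrator` stub only exists to demonstrate the kwarg forwarding; in real use `ai` would be a configured pyFAI `AzimuthalIntegrator`:

```python
def integrate_one(ai, img, npt, filename=None, mask=None,
                  polarization_factor=0.99, **kwargs):
    # mask, polarization_factor, and any extra kwargs (e.g. unit) are
    # forwarded to integrate1d instead of being fixed inside the loop
    return ai.integrate1d(img, npt, filename=filename, mask=mask,
                          polarization_factor=polarization_factor, **kwargs)


class _FakeIntegrator:
    """Stand-in for pyFAI's AzimuthalIntegrator, just to show the forwarding."""
    def integrate1d(self, img, npt, **kwargs):
        return kwargs


kw = integrate_one(_FakeIntegrator(), img=None, npt=100,
                   mask='my_mask', unit='q_A^-1')
assert kw['mask'] == 'my_mask'
assert kw['polarization_factor'] == 0.99  # default still applies
```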

    else:
        raise ValueError("Number of images is not equal to number "
                         "of calibration parameter sets")

class Azimuthal:
    """
    This class handles all of the azimuthal integration of the data, so that it can be done
    automatically as long as the data comes in as a list of 2D numpy arrays.
@@ -58,14 +117,18 @@ def get_right_names(self, file_names, data_list):

        Returns
        -------
        None

        file_data_dict : dict
            a dictionary that maps file names to 2D numpy arrays
        """
        for file in file_names:
            if file[-4:] == '.tif':
                self.file_names.append(file[:-4] + '.chi')

        file_data_dict = {}
        for f in file_names:
            stem, ext = os.path.splitext(f)
            if ext == '.tif':
                self.file_names.append(stem + '.chi')
            else:
                # FIXME: need to think about whether we should leave it unchanged
                self.file_names.append(f)

        self.integration_time(data_list)
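For reference, `os.path.splitext` keeps the leading dot with the extension, which is why the comparison above is against `'.tif'` rather than `'tif'`:

```python
import os

stem, ext = os.path.splitext('scan_0001.tif')
# splitext returns the stem and the extension including the leading dot
assert (stem, ext) == ('scan_0001', '.tif')
assert stem + '.chi' == 'scan_0001.chi'
```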

@@ -83,17 +146,11 @@ def integration_time(self, data_list):
None

"""
        # Insert pyFAI calibration parameters, can be changed if better detector parameters found
        det = pyFAI.detectors.Perkin()
        wl = 0.184320e-10
        ni = pyFAI.calibrant.ALL_CALIBRANTS("Ni")
        ni.set_wavelength(wl)
        poni1 = .1006793 * 2
        poni2 = .1000774 * 2
        ai = pyFAI.AzimuthalIntegrator(dist=0.2418217, poni1=poni1, poni2=poni2,
                                       rot1=0, rot2=0, detector=det)
        ai.set_wavelength(wl)

        for data in data_list:
            ai.setPyFAI(**config_dict)
            npt = _npt_cal(config_dict)
            x, y = ai.integrate1d(data, 1000, unit='q_A^-1')
            self.x_lists.append(x)
            self.y_lists.append(y)