Multigeometry integration issue for inclined detector geometries and radial distance
Hi,
I'm using the invert-geometry capability in pyFAI to move integrated patterns back into detector coordinates, which requires radial units. The built-in 2theta to r_m unit conversion is not physically related to the detector; that makes sense, since the unit conversion does not consider the detector geometry. My workaround was simply to do the integration in radial units; however, there appears to be a major bug for inclined multi-geometries when integrating in radial units. I observe the same issue in the 1D and 2D integrators.
With 2theta units I get the correct multi-geometry integration:
All things equal but with radial units yields:
Can you provide some snippet or a working example?
I zipped up a demo. It doesn't look like there is an example I can pull from the documentation that would demonstrate the issue. problem_demo.zip
For what it is worth, the issue is not unique to multigeometry. Also, I just tried the geometry inversion and it worked. I wonder if it's not actually an issue and the resulting plot is just the r from a tilted detector (i.e. r_detector does not correspond to a single distance from the sample and is not expected to be similar to 2theta).
from pyFAI import load
from pyFAI.gui import jupyter

# img_data, mask_data and poni_files are loaded from the attached demo files
id = 0
ai = load(poni_files[id])
print(ai)
res3 = ai.integrate2d(img_data[id], 3000, mask=mask_data[id], unit="2th_deg")
res4 = ai.integrate2d(img_data[id], 3000, mask=mask_data[id], unit="r_m")
jupyter.plot2d(res3)
pass
jupyter.plot2d(res4)
pass
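For reference, a minimal sketch of the geometry-inversion step mentioned above, assuming InvertGeometry from pyFAI.ext.invert_geometry (the same class and many() call used in the cake2MAUD helper further down) and the res4 result from the snippet above:

import numpy as np
from pyFAI.ext.invert_geometry import InvertGeometry

# Map one (r, chi) integration bin back to detector pixel coordinates
ig = InvertGeometry(ai.rArray(), ai.chiArray())               # per-pixel radius (m) and chi (rad) maps
yx = ig.many(np.atleast_1d(res4.radial[100]),                 # radial position in metres ("r_m")
             np.atleast_1d(np.deg2rad(res4.azimuthal[10])),   # chi in radians
             refined=True)
print(yx)   # (row, column) pixel coordinates of that bin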
Hi Daniel,
Thanks for the working example: this really looks like a bug, but I am not sure if it is possible to correct it.
You are perfectly right: the distance "r_m" makes sense only in the case of a detector mounted orthogonally to the transmitted beam and in your example the detector is tilted 25° or 33°.
The formula for r is sqrt(x*x + y*y), so it does not take into account the distance to the origin...
https://github.com/silx-kit/pyFAI/blob/main/src/pyFAI/units.py#L200
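A toy numerical illustration (made-up numbers, not pyFAI API) of why r_m and 2theta stop agreeing once the detector is tilted: two pixels can share the same 2theta yet sit at different distances z along the beam, and since 2theta = arctan(r/z), their r = z*tan(2theta) values differ.

import numpy as np

tth = np.deg2rad(10.0)        # the same scattering angle for both pixels
z_near, z_far = 0.18, 0.22    # beam-axis distances (m) of two pixels on a tilted detector (made-up values)
print(z_near * np.tan(tth), z_far * np.tan(tth))   # two different "r_m" values for a single 2theta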
What are you actually trying to achieve? Maybe I can help you find a workaround. Cheers, Jerome
Thanks Jerome for getting back to me. The problem I'm dealing with is getting high-quality Rietveld analysis of patterns collected with differently tilted multi-geometry detectors (like in the example I sent). When the azimuthal coverage of a single detector is suboptimal, the PONI calibration is sometimes not ideal. One way to address this is the MAUD integration methodology, wherein the integrated intensities are represented in detector coordinates (instead of radial and azimuthal position) and the geometry can be refined in the Rietveld software. The tilted-detector geometry in MAUD has a direct relationship to the PONI. Below is an example of a pyFAI integration in detector coordinates with the geometry being refined underneath.
I've attempted to do the integration in a way that leverages pyFAI's integration and then maps each element back to detector coordinates. Luca Lutterotti's methodology is instead to bin position while binning intensity, which I haven't tried with pyFAI; it seems like both methods would give rise to small artifacts. My hacked-together version using pyFAI's geometry inversion is below. I would definitely be curious if there is a better way to approach getting the detector coordinates of an integration element. As you can see, I'm not worried too much about the speed of the inversion if it can be saved and reused.
# Imports assumed by this helper (InvertGeometry lives in pyFAI.ext.invert_geometry)
import pickle
from pathlib import Path
import numpy as np
from pyFAI.ext.invert_geometry import InvertGeometry


def cake2MAUD(ai, result, sigmas, fname_detector):
    """cake2MAUD converts histograms (e.g. 2theta vs intensity) to detector position vs intensity.

    Args:
        ai (object): Integrator object.
        result (object): pyFAI 2D integration result object.
        sigmas (np.array): Uncertainty in intensity.
        fname_detector (str): Detector coordinates file name.

    Returns:
        intensity (np.array): Intensity.
        X (np.array): X position on detector.
        Y (np.array): Y position on detector.
        sigmas (np.array): Uncertainty in intensity.
    """

    def in_hull(p, hull):
        """Test if points in `p` are in `hull`.

        `p` should be a `NxK` array of coordinates of `N` points in `K` dimensions.
        `hull` is either a scipy.spatial.Delaunay object or the `MxK` array of the
        coordinates of `M` points in `K` dimensions for which a Delaunay triangulation
        will be computed.
        """
        from scipy.spatial import Delaunay
        if not isinstance(hull, Delaunay):
            hull = Delaunay(hull)
        return hull.find_simplex(p) >= 0

    def export_interpolation(result, ai, fname):
        """Interpolate detector position and angles."""
        print("Regenerating can take some time (usually 2-6 minutes per detector instance depending on detector size).")
        # For each pixel center get the radius in metres and the azimuthal angle in radians
        r = ai.rArray()
        c = ai.chiArray()
        # For a multigeometry that covers most of the azimuthal range, it is important
        # to limit the azimuthal and radial ranges to those relevant to this detector
        radial = result.radial
        azimuthal = np.deg2rad(result.azimuthal)
        radial_bin, chi_bin = np.meshgrid(radial, azimuthal)
        mask = (radial_bin < np.min(r)) | (radial_bin > np.max(r)) | (chi_bin < np.min(c)) | (chi_bin > np.max(c))
        radial_bint = np.ma.masked_array(radial_bin, mask).compressed()
        chi_bint = np.ma.masked_array(chi_bin, mask).compressed()
        # Build a hull and exclude points near the edge of the detector
        shape = ai.get_shape()
        x, y = np.meshgrid(np.linspace(1.5, shape[0] - 1.5), np.linspace(1.5, shape[1] - 1.5))
        ai_hull = np.column_stack((y.ravel(), x.ravel()))
        # Invert the geometry and get the pixel coordinates
        ig = InvertGeometry(r, c)
        t = ig.many(radial_bint, chi_bint, refined=True)
        t[~in_hull(t, ai_hull)] = np.nan
        ty = np.empty(np.shape(radial_bin))
        ty[:] = np.nan
        ty[~mask] = t[:, 0]
        tx = np.empty(np.shape(radial_bin))
        tx[:] = np.nan
        tx[~mask] = t[:, 1]
        # t contains y, x; y in pyFAI corresponds to poni1
        Y_bin = 1e3 * (ty * ai.pixel1 - ai.poni1 + ai.pixel1 / 2.0)
        X_bin = 1e3 * (tx * ai.pixel2 - ai.poni2 + ai.pixel2 / 2.0)
        # Store so this is only a one-time cost
        with open(fname, 'wb') as f:
            pickle.dump(result.radial, f)
            pickle.dump(result.azimuthal, f)
            pickle.dump(X_bin, f)
            pickle.dump(Y_bin, f)
        return X_bin, Y_bin

    def load_interpolation(fname):
        with open(fname, 'rb') as f:
            radial_stored = pickle.load(f)
            azimuthal_stored = pickle.load(f)
            X_bin = pickle.load(f)
            Y_bin = pickle.load(f)
        return X_bin, Y_bin, radial_stored, azimuthal_stored

    def get_bin_detector_location(result, ai, fname):
        """Return the cached bin locations if they match this integration, otherwise regenerate them."""
        if Path(fname).is_file():
            X_bin, Y_bin, radial_stored, azimuthal_stored = load_interpolation(fname)
            if all(radial_stored == result.radial) and all(azimuthal_stored == result.azimuthal):
                return X_bin, Y_bin
        return export_interpolation(result, ai, fname)

    # Use a cache for the detector bin locations
    X_bin, Y_bin = get_bin_detector_location(result, ai, fname=fname_detector)
    # Create a bin-level mask
    imask = np.ma.masked_invalid(X_bin).mask | np.ma.masked_invalid(Y_bin).mask \
        | np.ma.masked_invalid(result.intensity).mask | (result.intensity == 0)
    X_bin = np.ma.masked_array(X_bin, imask)
    Y_bin = np.ma.masked_array(Y_bin, imask)
    intensity = np.ma.masked_array(result.intensity, imask)
    sigmas = np.ma.masked_array(sigmas, imask)
    return intensity, X_bin, Y_bin, sigmas
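A hypothetical call of this helper, using an integrator and a 2D result like those in the later snippets, zero sigmas, and a made-up cache file name:

# Hypothetical usage; "Quad1_bins.pkl" is just an example cache file name
intensity, X_bin, Y_bin, sig = cake2MAUD(ai, res, np.zeros(res.intensity.shape), "Quad1_bins.pkl")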
Related issue: #1873. I will be off for a week, ...
Just one idea until then:
- Define a virtual detector with a forced pixel size and position (i.e. a 1 m orthogonal mount from the sample, 100 µm pixel size, centered).
- Define the projection of each pixel onto this detector:
  - y_v = y*z_v/z
  - x_v = x*z_v/z
  - z_v = 1
- Define user-defined units (like in this tutorial:) and register x_v and y_v as the radial and azimuthal units (note that "radial" and "azimuthal" are just a naming convention); a rough sketch of this registration follows after the list.
- Integrate in 2D using those units to project the detector image onto the virtual detector; mind to enforce both ranges and numbers of points.
- Do the same for each detector position, but with the same output grid.
- Sum the signal and normalization separately (hidden in the IntegrationResult tuple) before doing the ratio, and get the result you are looking for.
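A rough sketch of the unit-registration step, assuming pyFAI.units.register_radial_unit with an equation(x, y, z, wavelength) callback in metres (the same convention as the built-in r_m equation linked above); whether the second coordinate is registered as a radial or an azimuthal unit depends on the pyFAI version and the tutorial referenced above:

import pyFAI.units as units

z_v = 1.0  # assumed virtual detector: 1 m from the sample, orthogonal to the beam

# Projection of each pixel (lab-frame x, y, z) onto the virtual detector plane
units.register_radial_unit(
    "xv_m",
    scale=1.0,
    label="x on virtual detector (m)",
    equation=lambda x, y, z, wavelength=None: x * z_v / z,
)
units.register_radial_unit(
    "yv_m",
    scale=1.0,
    label="y on virtual detector (m)",
    equation=lambda x, y, z, wavelength=None: y * z_v / z,
)
# After integrating each module onto the same (xv_m, yv_m) grid, the per-module signal and
# normalization sums (exposed on the result object in recent pyFAI versions) can be summed
# before taking the ratio, as suggested in the last bullet above.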
I came up with a solution using SciPy. There's likely lots of room for improvement. The example below works with the data files I sent above.
Setup basic integration
# Common packages
from pathlib import Path
import numpy as np
from copy import deepcopy
# Config/load files
import fabio
from pyFAI.multi_geometry import MultiGeometry
img_files = [f"Quad{i}_sum.edf" for i in range(0,4)]
img_data = [fabio.open(i).data for i in img_files]
mask_files = [f"Mask{i}.edf" for i in range(0,4)]
mask_data = [fabio.open(i).data for i in mask_files]
poni_files = [f"Quad{i}.poni" for i in range(0,4)]
# Setup the integrator
from pyFAI.gui import jupyter
from pyFAI import load
id = 1
ai = load(poni_files[id])
print(ai)
print("MAUD parameters")
print("-----------------------")
print("Distance",ai.get_dist()*1e3)
print("center x",ai.poni2*1e3)
print("center y",ai.poni1*1e3)
print("2theta",np.rad2deg(ai.rot1))
print("tilt/phiDA",-np.rad2deg(ai.rot2))
If you do the integration in MAUD, the tilt angles are the same (magnitude and sign) as in pyFAI, but center_y = detector_size_y - PONI1. This is a result of MAUD/ImageJ using the upper-left corner as the zero of the image. When doing the integration in pyFAI and computing the detector coordinates, one either needs to change the sign of the tilt or flip the detector coordinates and compute center_y.
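As a small worked example of that convention (a sketch only, using the shape and pixel size already available on the integrator):

# Sketch of the center_y conversion described above (values in mm)
detector_size_y = ai.get_shape()[0] * ai.pixel1 * 1e3   # detector height: number of rows x pixel1
center_y_maud = detector_size_y - ai.poni1 * 1e3        # MAUD measures y from the upper-left corner
print("MAUD center y", center_y_maud)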
# Integrate per usual
res1 = ai.integrate1d(img_data[id], 3000,mask=mask_data[id],unit="2th_deg",dummy=np.nan,method=("full", "histogram", "cython"))
res = ai.integrate2d(img_data[id], 3000,npt_azim=35,mask=mask_data[id],unit="2th_deg",dummy=np.nan,method=("full", "histogram", "cython"))
jupyter.plot1d(res1)
pass
jupyter.plot2d(res)
pass
Calculate detector XY coordinates using SciPy
Linear interpolation resulted in too large an error; cubic does a reasonable job. The performance seems to change with how tilted the detectors are. We might consider a virtual super-resolution detector to build the interpolation function with. We also need a proper way to evaluate the XY error.
# TODO: super-resolution grid to improve the interpolation
# May be the first use, so calculate the per-pixel chi and 2theta arrays in degrees
chi = np.rad2deg(ai.chiArray())
tth = np.rad2deg(ai.twoThetaArray())
# If the detector wraps through the origin, put the zero at 180 deg to avoid bounds issues
chi_steps = np.unique(np.round(sorted(chi.ravel())))
azimuthal = res.azimuthal
azimuthal[azimuthal < 0] += 360
chi[chi < 0] += 360
if 360 in chi_steps or 0 in chi_steps:
    chi[chi >= 180] = chi[chi >= 180] - 360
    azimuthal[azimuthal >= 180] = azimuthal[azimuthal >= 180] - 360
shape = ai.get_shape()
X, Y = np.meshgrid(np.arange(0, shape[1]), np.arange(0, shape[0]))
# Build the interpolators from (chi, 2theta) to pixel coordinates
# from scipy.interpolate import LinearNDInterpolator
from scipy.interpolate import CloughTocher2DInterpolator
fx = CloughTocher2DInterpolator(
    list(zip(chi.ravel(), tth.ravel())), X.ravel())
fy = CloughTocher2DInterpolator(
    list(zip(chi.ravel(), tth.ravel())), Y.ravel())
A simple validation is computing 2theta from the detector coordinates and comparing to the integration radial unit.
# Gridify the integrated positions and interpolate X, Y for each integration bin.
# From the detector coordinates in pixels, compute 2theta and compare it to the radial unit
chi_bin, tth_bin = np.meshgrid(azimuthal, res.radial)
tth_xy = np.transpose(np.rad2deg(ai.tth(fy(chi_bin, tth_bin), fx(chi_bin, tth_bin))))
tth_ref = res.radial
# Compare and build a mask for any bins that were not interpolated well enough
interp_mask = np.ma.masked_invalid(tth_xy).mask
max_err = 1e-4
print("# before filter, # after filter, mean error, max error")
for i, (cake, mask) in enumerate(zip(tth_xy, deepcopy(interp_mask))):
    interp_mask[i] = mask | (np.absolute(tth_ref - cake) > max_err)
    mean_err = np.nanmean(np.absolute(tth_ref[~interp_mask[i]] - cake[~interp_mask[i]]))
    if not np.isnan(mean_err):
        max_err = np.nanmax(np.absolute(tth_ref[~interp_mask[i]] - cake[~interp_mask[i]]))
    else:
        max_err = np.nan
    print(np.sum(~mask), np.sum(~interp_mask[i]), mean_err, max_err)
Most of the data points are interpolated with an error of 1e-4° in 2theta or better.
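The X_bin/Y_bin arrays passed to write_esg_detector below are not shown above; the following is only a guess at that missing step, mirroring the pixel-to-mm conversion used in cake2MAUD earlier:

# Assumed (not from the original post): evaluate the interpolators on the bin grid and
# convert pixel indices to mm about the PONI, as in cake2MAUD() above
x_pix = np.transpose(fx(chi_bin, tth_bin))   # column index of each (eta, 2theta) bin
y_pix = np.transpose(fy(chi_bin, tth_bin))   # row index of each (eta, 2theta) bin
X_bin = 1e3 * (x_pix * ai.pixel2 - ai.poni2 + ai.pixel2 / 2.0)
Y_bin = 1e3 * (y_pix * ai.pixel1 - ai.poni1 + ai.pixel1 / 2.0)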
# Export inclined detector integration to MAUD
def write_esg_detector(i_2dm, x_2dm, y_2dm, weight_2dm, chi_2d, mask, fname, distance):
    """Write data formatted for the inclined detector geometry in MAUD.

    Args:
        i_2dm (numpy.array): Intensity for each histogram.
        x_2dm (numpy.array): X position for each histogram.
        y_2dm (numpy.array): Y position for each histogram.
        weight_2dm (numpy.array): Uncertainty in intensity for each histogram.
        chi_2d (numpy.array): Chi position for each histogram in i_2dm, x_2dm, etc.
        mask (numpy.array): Mask for each histogram (True excludes a bin).
        fname (str): Output file name.
        distance (float): Detector distance in mm.
    """
    blockid = 0
    f = open(fname, "w")
    f.write("_pd_block_id noTitle|#%d\n" % (blockid))
    f.write("\n")
    f.write("_diffrn_detector 2D\n")
    f.write("_diffrn_detector_type CCD like\n")
    f.write("_pd_meas_step_count_time ?\n")
    f.write("_diffrn_measurement_method diffraction_image\n")
    f.write("_diffrn_measurement_distance_unit mm\n")
    f.write("_pd_instr_dist_spec/detc %f\n" % (distance))
    f.write("_diffrn_radiation_wavelength ?\n")
    f.write("_diffrn_source_target ?\n")
    f.write("_diffrn_source_power ?\n")
    f.write("_diffrn_source_current ?\n")
    f.write("_pd_meas_angle_omega 0.0\n")
    f.write("_pd_meas_angle_chi 0.0\n")
    f.write("_pd_meas_angle_phi 0.0\n")
    f.write("_pd_meas_orientation_2theta 0\n")
    f.write("_riet_par_spec_displac_x 0\n")
    f.write("_riet_par_spec_displac_y 0\n")
    f.write("_riet_par_spec_displac_z 0\n")
    f.write("_riet_meas_datafile_calibrated false\n")
    chi_2d[chi_2d < 0] += 360
    index = np.argsort(chi_2d)
    for i in index:
        intensities = i_2dm[i]
        imask = mask[i] | np.ma.masked_invalid(intensities).mask \
            | np.ma.masked_invalid(x_2dm[i]).mask | np.ma.masked_invalid(y_2dm[i]).mask
        intensity_masked = np.ma.masked_array(intensities, imask).compressed()
        if len(intensity_masked) > 4:
            xs = np.ma.masked_array(x_2dm[i], imask).compressed()
            ys = np.ma.masked_array(y_2dm[i], imask).compressed()
            weights = np.ma.masked_array(weight_2dm[i], imask).compressed()
            f.write("_pd_block_id noTitle|#%d\n" % (blockid))
            f.write("\n")
            f.write("_pd_meas_angle_eta %f\n" % (chi_2d[i]))
            f.write("\n")
            f.write("loop_\n")
            f.write("_pd_meas_position_x _pd_meas_position_y _pd_meas_intensity_total _pd_meas_intensity_sigma\n")
            for x, y, intensity, weight in zip(xs, ys, intensity_masked, weights):
                f.write("%f %f %f %f\n" % (x, y, intensity, weight))
            f.write("\n")
            blockid += 1
    f.close()
write_esg_detector(res.intensity,X_bin,Y_bin,np.zeros(res.intensity.shape),res.azimuthal,interp_mask,f"test_detector{id}.esg",ai.get_dist()*1e3)
# Write standard integration to MAUD
def write_esg1(radial, intensities, azimuthal, sigmas, file):
    """Write MAUD esg-formatted histogram data to a single file."""
    Path(file).unlink(missing_ok=True)
    with open(file, 'a+') as f:
        azimuthal[azimuthal < 0] += 360
        for i, (intensity, azimuth, sigma) in enumerate(zip(intensities, azimuthal, sigmas)):
            imask = np.ma.masked_invalid(intensity).mask
            intensity_masked = np.ma.masked_array(intensity, imask).compressed()
            if len(intensity_masked) > 0:
                radial_masked = np.ma.masked_array(radial, imask).compressed()
                sigma_masked = np.ma.masked_array(sigma, imask).compressed()
                header = f"\n_pd_block_id noTitle|#{i}\n" \
                         f"_pd_meas_angle_eta {azimuth}\n" \
                         f"_pd_meas_angle_omega {0.0}\n\n" \
                         f"loop_\n" \
                         f"_pd_meas_position_x _pd_meas_intensity_total _pd_proc_intensity_weight"
                np.savetxt(f,
                           np.c_[radial_masked, intensity_masked, sigma_masked],
                           delimiter='\t',
                           header=header,
                           comments='')
write_esg1(res.radial, res.intensity,res.azimuthal, np.zeros(res.intensity.shape), f"test{id}.esg")
Result in MAUD
Imported using the x,y coordinates and eta (refinable tilted geometry).
The calibration is good
Using the traditional 2theta, eta import (only polynomial radial corrections are possible):
A similar-quality calibration is achieved, which suggests the tilts from pyFAI were mostly correct for this detector.