
Detector Geometry from Powder Rings

The goal of diffractive imaging experiments is to measure a diffraction pattern on a detector with high precision and calculate the structure in the interaction region (the volume where the structure of interest and the beam intersect) that created the pattern. A critical component of scattering experiments is precise knowledge of the spatial location of pixels on the detector with respect to the interaction region; the interaction region is taken to be the origin of the scattering system. Small tilts or translations of the detector can have large effects on the spatial location and solid angle of individual pixels, particularly those close to the edge of a flat-panel detector. While Bragg peaks and other features may still be resolved, without precise and accurate knowledge of the geometry of the detector setup it will be impossible to reconstruct the sample structure because the spatial location of the Bragg peaks on the detector will be unknown.

The question is, how does one measure the spatial location of every pixel on a detector given the limitations of mechanical measurement? We cannot rely on mechanical measurements alone since, ideally, we would like to be able to modify our detector setup to meet different experimental needs without spending hours doing gymnastics with calipers. Additionally, we must consider that individual pixels on the detector will usually have spatial dimensions on the order of 100 microns, and we simply cannot reliably measure such features without optical methods.

One solution is to take a sample with a structure that will produce a known or easily calculated diffraction pattern on all the detectors of interest. One can then compare the predicted diffraction pattern with the measured one and either adjust the detectors to produce the predicted pattern (if the error is very large) or apply mathematical corrections to the model that produced the predicted pattern until it mimics the measured pattern (if the error is small). The sum of these corrections reveals precisely how the detector is misaligned, which tells us exactly how it is oriented in space.

Note: An assumption we have made is that the in-plane geometry of any single detector, i.e., the coordinates of every pixel in the plane of the detector surface, is known precisely, and that we need only track the overall position and orientation of the detector. This is often a valid assumption since high-resolution detectors typically come with geometry specifications from the manufacturer. The location of a detector with respect to other detectors, however, is completely subject to the imagination of the experimenters.

An example of such a method is to take a measurable macroscopic object, such as an ion-milled silicon chip, and place it between the interaction region and the detector of interest. Experimental simulation of a radiating point source at the interaction region will illuminate the silicon chip and, where sections of the chip have been cut through with the ion mill, cast a shadow of the chip on the detector. Precise knowledge of the geometry of the macroscopic object and its orientation with respect to the simulated point source allows the calculation of a predicted shadow. Comparison with the measured shadow then allows a calculation of the orientation and location of the detector. Even measuring the separation of just two points on the projected shadow, with knowledge of the real distance between the corresponding points on the object, allows the distance of the detector plane from the interaction region to be calculated with simple trigonometry. Sometimes, knowledge of the detector distance is all that is needed, provided detector tilts are clearly small. To pin down pixel locations more accurately, however, a more sophisticated model of the macroscopic object (such as a 3D scan or computed tomography) is required to construct an accurate model of the shadow and constrain all of the detector's degrees of freedom.
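
To make that last bit of trigonometry concrete, here is a minimal sketch in Python. The distances are illustrative placeholders, not values from the actual experiment; the idea is simply that the shadow is magnified by the ratio of the source-to-detector and source-to-chip distances.

```python
# Detector distance from a projected shadow, by similar triangles.
# All numbers are illustrative placeholders, not values from the experiment.

chip_distance = 0.05       # point source to silicon chip, in meters (assumed known)
chip_feature = 200e-6      # real separation of two milled features on the chip (m)
shadow_feature = 2.4e-3    # separation of those same features in the measured shadow (m)

# The shadow is magnified by the ratio of the distances from the point source:
# detector_distance / chip_distance = shadow_feature / chip_feature
detector_distance = chip_distance * shadow_feature / chip_feature
print(f"detector plane is ~{detector_distance:.3f} m from the interaction region")
```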

Note: Any physics student will recognize that a perfect, radiating point source is a non-physical phenomenon. In a real experiment like this one, light is usually emitted from the entire interaction region. This particular experiment used fluorescent light from an iron salt dissolved in a jetted spray as our "point source" in calibration runs. An x-ray laser beam passes through the jet and causes the iron in the jet solution to fluoresce by exciting core electrons. This light is emitted from every location in space where the beam touches the jet. Since the beam has transverse dimensions on the order of a couple of microns, the interaction region is small but certainly not point-like (the transverse dimensions of the jet are on the order of 10-100 microns, much larger than the x-ray beam).

Another common method for producing a predictable pattern on the detector is to use powder diffraction. A powder of a crystalline substance with known scattering angles will produce rings of diffracted light due to the random orientation of the crystal planes in the powder with respect to the incident radiation. These rings, neglecting polarization effects, are circularly symmetric about the beam axis, and their location on the Ewald sphere is easily calculated. A simulated image of such powder rings for LaB6 powder on a perfect, planar detector (1024 × 1024 pixels) perpendicular to the beam axis is shown below.


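For reference, the predicted ring radii follow directly from Bragg's law. The sketch below assumes a flat detector facing the beam; the photon energy, detector distance, and pixel size are arbitrary example values (not the experimental ones), and the LaB6 lattice constant is the standard literature value of roughly 4.157 Å.

```python
import numpy as np

# Predicted LaB6 powder-ring radii on a flat detector facing the beam.
# Photon energy, detector distance, and pixel size are example values only.
a = 4.157e-10                               # LaB6 lattice constant (m), simple cubic
energy_eV = 9500.0                          # example photon energy
wavelength = (12398.4 / energy_eV) * 1e-10  # E (eV) -> lambda (m)
L = 0.10                                    # sample-to-detector distance (m)
pixel = 110e-6                              # pixel pitch (m)

for h, k, l in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0), (2, 1, 0)]:
    d = a / np.sqrt(h**2 + k**2 + l**2)      # d-spacing of the (hkl) planes
    theta = np.arcsin(wavelength / (2 * d))  # Bragg angle from lambda = 2 d sin(theta)
    r = L * np.tan(2 * theta)                # ring radius on the detector plane
    print(f"({h}{k}{l}): 2theta = {np.degrees(2 * theta):5.2f} deg, "
          f"radius = {r / pixel:6.1f} px")
```
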
The measured powder diffraction rings on the same detector during an experiment will look something like the image below.


Note: We have not included, thus far, discussion of the effects of the polarization of the x-ray laser on the powder diffraction pattern or attempted to model this in our predicted diffraction pattern. The polarization effects cause the visible dimming of the rings in the center of the detector and, since this breaks the rotational symmetry, could provide an additional constraint on the degrees of freedom of the detector orientation. I will explain the dimming due to polarization and provide improvements to the geometry fitting based on this effect in future updates. For now, polarization of the incident beam serves simply as an explanation for why the powder rings are not visible across the center of the detector.

One can use a histogram across the detector to see the rings and adjust geometry parameters until the simulated ring peaks line up with the measured peaks in the histogram, as in the plots below, where the blue line is the measured signal and the green line is the ideal, predicted signal, for three different slices across the detector.


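Plots like these take only a few lines of matplotlib. In the sketch below, the .npy file names are placeholders standing in for the recorded pattern and the simulated one.

```python
import numpy as np
import matplotlib.pyplot as plt

# Compare measured and predicted intensity along a few horizontal slices of the
# detector. The .npy file names are placeholders for the recorded powder
# pattern and the simulated one.
measured = np.load("measured_pattern.npy")
predicted = np.load("predicted_pattern.npy")

rows = [256, 512, 768]   # three slices across a 1024-row detector
fig, axes = plt.subplots(len(rows), 1, sharex=True, figsize=(8, 6))
for ax, row in zip(axes, rows):
    ax.plot(measured[row], color="blue", label="measured")
    ax.plot(predicted[row], color="green", label="predicted")
    ax.set_ylabel(f"row {row}")
axes[0].legend()
axes[-1].set_xlabel("pixel column")
plt.tight_layout()
plt.show()
```
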
Though easily implemented, this is not a foolproof method, since we cannot view the plot of every slice across the detector simultaneously and so are unlikely to arrive at the perfect fit. Additionally, the background noise and the diffuse scattering gradient (likely from the capillary that contained the LaB6 powder, which is made of an amorphous material) do not help. The image should be cleaned up using localized thresholding to remove the background noise along with any diffuse scattering or fluorescence.


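One possible implementation of such localized thresholding, sketched here with SciPy, estimates the smooth background with a median filter and keeps only pixels that stand a few local standard deviations above it. The window size and threshold are illustrative choices, not the parameters used in the actual analysis.

```python
import numpy as np
from scipy import ndimage

def clean_pattern(image, window=51, n_sigma=3.0):
    """Suppress smooth background, keeping only locally bright pixels.

    A median filter over a window x window neighbourhood estimates the slowly
    varying background (diffuse scatter, fluorescence); pixels that do not
    stand n_sigma local standard deviations above it are zeroed out.
    """
    image = np.asarray(image, dtype=float)
    background = ndimage.median_filter(image, size=window)
    residual = image - background
    # Local standard deviation from uniform-filtered first and second moments.
    local_mean = ndimage.uniform_filter(residual, size=window)
    local_sq = ndimage.uniform_filter(residual**2, size=window)
    local_std = np.sqrt(np.maximum(local_sq - local_mean**2, 0.0))
    return np.where(residual > n_sigma * local_std, residual, 0.0)
```
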
Overlaying the images in a GUI using something like PyQtGraph allows rapid, fine adjustment of the geometry parameters to fit the measured powder rings. Fitting by visual inspection alone has proven sufficient, for my purposes, to optimize the geometry with adjustments on the order of \(10^{-2}\) degrees and tens of microns.


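A minimal version of such an overlay in PyQtGraph might look like the following sketch. The file names are placeholders, and the real GUI adds controls for the geometry parameters; this only shows the two images stacked with partial opacity.

```python
import numpy as np
import pyqtgraph as pg

# Overlay the measured powder pattern and a simulated one for visual fitting.
# The .npy file names are placeholders; in the real workflow the predicted
# image would be regenerated each time a geometry parameter is adjusted.
measured = np.load("measured_pattern.npy")
predicted = np.load("predicted_pattern.npy")

app = pg.mkQApp("geometry-overlay")
window = pg.GraphicsLayoutWidget(title="powder ring overlay")
viewbox = window.addViewBox(lockAspect=True)

viewbox.addItem(pg.ImageItem(measured))
overlay = pg.ImageItem(predicted)
overlay.setOpacity(0.5)   # semi-transparent so both patterns remain visible
viewbox.addItem(overlay)

window.show()
pg.exec()
```
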
To illustrate, here is an example of a horribly misaligned detector geometry and the discrepancy between the measured and predicted powder rings.


As you might imagine, the actual improvement in goodness of fit is difficult to evaluate for fine parameter adjustments. Below is a rough detector geometry followed by an improved geometry I produced with my PyQtGraph GUI application.



The improvement is barely perceptible, but close inspection should show that the bottom image has been shifted very slightly and that the overlapping region of the predicted and measured rings is closer to the center than in the previous image.

Perhaps a simple measure of the average brightness of the composite image could be a naive quantification of the goodness of fit (since the composite image brightens where the measured and predicted rings overlap). Likely, though, a more sophisticated method would be necessary to achieve a fit of any respectable precision. For the present, a manual fit by visual inspection seems good enough for our research purposes.
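
One way to make that idea concrete is a normalized pixelwise product of the two images (a slight variation on averaging the composite brightness): it rewards exactly the pixels that are bright in both patterns, which is what the brightening of the overlap reflects visually. A sketch:

```python
import numpy as np

def overlap_score(measured, predicted):
    """Naive goodness-of-fit: normalized pixelwise product of the two patterns.

    Pixels that are bright in both the measured and the predicted image
    contribute strongly, so the score grows as the predicted rings slide onto
    the measured ones. Normalizing by the image norms keeps it between 0 and 1.
    """
    m = np.asarray(measured, dtype=float).ravel()
    p = np.asarray(predicted, dtype=float).ravel()
    return float(np.dot(m, p) / (np.linalg.norm(m) * np.linalg.norm(p) + 1e-12))

# One could then scan a geometry parameter (say, the detector distance), keep
# the value that maximizes the score, and repeat for the remaining parameters.
```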

This post was inspired by my recent work at the Center for Free-Electron Laser Science, a section of the Deutsches Elektronen Synchrotron in Hamburg. I worked with Fabian Trost and Kartik Ayyer in analyzing this and other data from our group's experiments at the LINAC Coherent Light Source at the Stanford Linear Accelerator.