Powder diffraction data reduction#
This notebook will guide you through the data reduction for the powder diffraction experiment that you simulated with McStas yesterday.
The following is a basic outline of what this notebook will cover:
- Loading the NeXus files that contain the data
- Inspecting/visualizing the data contents
- Converting the raw time-of-flight coordinate to something more useful (\(\lambda\), \(d\), …)
import numpy as np
import scipp as sc
import plopp as pp
import utils
Process the run with a sample#
Load the NeXus file data#
folder = "../3-mcstas/powder_with_sample_1_pulse"
sample = utils.load_powder(folder)
The first way to inspect the data is to view the HTML representation of the loaded object.
Try to explore what is inside the data, and familiarize yourself with the different sections (Dimensions, Coordinates, Data).
sample
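If you prefer to inspect the data programmatically rather than through the HTML view, the same information is available via standard scipp attributes (a quick sketch):
# Dimension labels and their sizes
print(sample.sizes)
# Names of the available coordinates
print(list(sample.coords))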
Visualize the data#
Here is a 2D visualization of the neutron counts, histogrammed along the tof and y dimensions:
sample.hist(tof=200, y=200).plot(norm="log", vmin=1.0e-2)
Histogramming along y only gives a 1D plot:
sample.hist(y=200).plot(norm="log")
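The same API works for any dimension; for example, histogramming along tof alone gives the 1D time-of-flight spectrum:
sample.hist(tof=200).plot(norm="log")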
Coordinate transformations#
The first step of this data reduction workflow is to convert the raw event coordinates (position, time-of-flight) to something physically meaningful such as wavelength (\(\lambda\)) or d-spacing (\(d\)).
Scipp has a dedicated method for this called transform_coords (see docs here).
We begin with a standard graph which describes how to compute e.g. the wavelength from the other coordinates in the raw data.
from scippneutron.conversion.graph.beamline import beamline
from scippneutron.conversion.graph.tof import kinematic
graph = {**beamline(scatter=True), **kinematic("tof")}
sc.show_graph(graph, simplified=True)
To compute the wavelength of all the events, we simply call transform_coords on our loaded data, requesting the name of the coordinate we want in the output ("wavelength"), and providing the graph to be used to compute it (i.e. the one we defined just above). This yields
sample_wav = sample.transform_coords("wavelength", graph=graph)
sample_wav
The result has a wavelength coordinate. We can also plot the result:
sample_wav.hist(wavelength=200).plot()
We can see that the range of observed wavelengths agrees with the range set in the McStas model (TODO Å).
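One way to check the observed range directly is to histogram into a single wavelength bin: the resulting bin edges span the minimum and maximum event wavelengths.
# The two edges of a single-bin histogram give the full observed range
sample_wav.hist(wavelength=1).coords["wavelength"]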
Exercise 1: convert raw data to d-spacing#
Instead of wavelength as in the example above, the task is now to convert the raw data to interplanar lattice spacing \(d\).
The transformation graph is missing the computation for \(d\), so you will have to add it yourself. As a reminder, \(d\) is computed from Bragg's law as \(d = \lambda / (2 \sin\theta)\), where \(2\theta\) is the scattering angle.
You have to:
- create a function that computes \(d\)
- add it to the graph with the name “dspacing”
- call transform_coords using the new graph

Note that the graph already contains the necessary components to compute the scattering angle \(2 \theta\) (called two_theta in the code).
Solution:
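# Bragg's law: λ = 2 d sin(θ), where two_theta is the scattering angle 2θ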
def compute_d(two_theta, wavelength):
return wavelength / (2 * sc.sin(two_theta / 2))
graph["dspacing"] = compute_d
# Show the updated graph
display(sc.show_graph(graph, simplified=True))
# Run the coordinate transformation
sample_d = sample.transform_coords("dspacing", graph=graph)
sample_d
Histogram the data in d-spacing#
The final step in processing the sample run is to histogram the data into \(d\) bins.
sample_h = sample_d.hist(dspacing=200)
sample_h.plot(norm="log", vmin=1)
The histogrammed data currently has no standard deviations on the counts. These need to be added after we have performed the histogramming operation.
When dealing with neutron events, we assume the data has a Poisson distribution. This means that the variance in a bin is equal to the counts in that bin (the standard deviation is then \(\sigma = \sqrt{\mathrm{counts}}\)).
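For illustration, this is essentially all such a helper has to do (a minimal sketch, not the actual implementation in utils):
def poisson_variances(da):
    # Poisson statistics: the variance in each bin equals the counts in that bin
    da.variances = da.values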
We provide a helper function that will add Poisson variances to any given input:
utils.add_variances(sample_h)
sample_h.data
sample_h.plot(norm="log", vmin=1)
Exercise 2: TODO#
TODO:
- normalise …?
- convert back to tof
Solution:
Save result to disk#
Finally, we need to save our results to disk, so that the reduced data can be forwarded to the next step in the pipeline (data analysis).
We will use a simple text file for this:
from scippneutron.io import save_xye
# This simple file format does not support bin-edge coordinates.
# So we convert to bin-centers first.
data = normed.copy()
data.coords["tof"] = sc.midpoints(data.coords["tof"])
save_xye("powder_reduced.xye", data)
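As a quick sanity check, the saved file can be read back with numpy; the three columns are the tof bin centers, the counts, and the standard deviations:
# '#' header lines, if any, are skipped by np.loadtxt by default
xye = np.loadtxt("powder_reduced.xye")
xye.shape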
Process data from 3 pulses#
We now want to repeat the reduction, but using more than a single pulse to improve our statistics.
We begin by loading the run with 3 pulses.
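A minimal sketch of this loading step, assuming the 3-pulse simulation was written next to the 1-pulse one (the folder name here is hypothetical, so adjust it to match your McStas output):
# Hypothetical folder name; adjust to your simulation output
folder = "../3-mcstas/powder_with_sample_3_pulse"
sample_3 = utils.load_powder(folder)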
Save results to disk#
Once again, we need to save our results to disk:
data = folded_normed.copy()
data.coords["tof"] = sc.midpoints(data.coords["tof"])
save_xye("powder_reduced_3pulses.xye", data)