SANS data reduction#
This notebook will guide you through the data reduction for the SANS experiment that you simulated with McStas yesterday.
The following is a basic outline of what this notebook will cover:
- Loading the NeXus files that contain the data
- Inspecting/visualizing the data contents
- Converting the raw time-of-flight coordinate to something more useful (\(\lambda\), \(Q\), …)
- Normalizing to a flat-field run
- Writing the results to file
import numpy as np
import scipp as sc
import plopp as pp
import sans_utils as utils
Process the run with a sample#
Load the NeXus file data#
folder = "../3-mcstas/SANS_with_sample_many_neutrons"
⚠️ If you did not complete the SANS with sample simulation yesterday, you can use some pre-prepared data by uncommenting and running the cell below:
# folder = utils.fetch_data("3-mcstas/SANS_with_sample_many_neutrons")
sample = utils.load_sans(folder)
The first way to inspect the data is to view the HTML representation of the loaded object.
Try to explore what is inside the data, and familiarize yourself with the different sections (`Dimensions`, `Coordinates`, `Data`).
sample
(Output: a DataArray with 21,705,118 events; event coordinates `position`, `tof`, `x` and `y`; scalar coordinates `sample_position` and `source_position`; and per-event weights in counts, with variances.)
Visualize the data#
Here is a 2D visualization of the neutron counts, histogrammed along the `tof` and `y` dimensions:
sample.hist(tof=200, y=200).plot(norm="log", vmin=1.0e-2)
Histogramming along `y` only gives a 1D plot:
sample.hist(y=200).plot(norm="log")
Coordinate transformations#
The first step in the data reduction is to convert the raw event coordinates (position, time-of-flight) to something physically meaningful such as wavelength (\(\lambda\)) or momentum transfer (\(Q\)).
Scipp has a dedicated method for this called `transform_coords` (see the documentation).
We begin with a standard graph that describes how to compute, e.g., the wavelength from the other coordinates in the raw data.
from scippneutron.conversion.graph.beamline import beamline
from scippneutron.conversion.graph.tof import kinematic
graph = {**beamline(scatter=True), **kinematic("tof")}
sc.show_graph(graph, simplified=True)
To compute the wavelength of all the events, we simply call `transform_coords` on our loaded data, requesting the name of the coordinate we want in the output (`"wavelength"`) and providing the graph to be used to compute it (i.e. the one we defined just above). This yields:
sample_wav = sample.transform_coords("wavelength", graph=graph)
sample_wav
(Output: the same DataArray, now with the additional coordinates computed along the way: `L1`, `L2`, `Ltotal`, `incident_beam`, `scattered_beam`, and the requested `wavelength`.)
The result has a `wavelength` coordinate. We can also plot the result:
sample_wav.hist(wavelength=200).plot()
We can see that the range of observed wavelengths agrees with the range set in the McStas model (5.25–6.75 Å).
Exercise 1: convert raw data to \(Q\)#
Instead of wavelength as in the example above, the task is now to convert the raw data to momentum-transfer \(Q\).
The transformation graph is missing the computation for \(Q\), so you will have to add it in yourself. As a reminder, \(Q\) is computed as

\[
Q = \frac{4\pi \sin\theta}{\lambda},
\]

where \(\lambda\) is the wavelength and \(2\theta\) is the scattering angle.
You have to:

- create a function that computes \(Q\)
- add it to the graph
- call `transform_coords` using the new graph

Note that the graph already contains the necessary components to compute the scattering angle \(2\theta\) (called `two_theta` in code).
Solution:
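A minimal sketch of one possible solution is shown below. `transform_coords` matches coordinates to function arguments by name, so the function must take `two_theta` and `wavelength` as arguments; the helper name `compute_q` is our own choice, not part of the tutorial.

def compute_q(two_theta, wavelength):
    # Q = 4*pi*sin(theta) / lambda, with theta = two_theta / 2
    return (4.0 * np.pi) * sc.sin(0.5 * two_theta) / wavelength

# Extend the graph with the new rule and transform straight from the raw data
graph["Q"] = compute_q
sample_q = sample.transform_coords("Q", graph=graph)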
Histogram the data in \(Q\)#
The final step in processing the sample run is to histogram the data into \(Q\) bins.
sample_h = sample_q.hist(Q=200)
sample_h.plot(norm="log", vmin=1)
Exercise 2: process flat-field run#
Repeat the steps carried out above for the run that contained no sample (also known as a “flat-field” run).
folder = "../3-mcstas/SANS_without_sample_many_neutrons"
⚠️ If you did not complete the SANS without sample simulation yesterday, you can use some pre-prepared data by uncommenting and running the cell below:
# folder = utils.fetch_data("3-mcstas/SANS_without_sample_many_neutrons")
Solution:
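A sketch of the same pipeline applied to the flat-field run (the variable names `flat`, `flat_q` and `flat_h` are our own):

flat = utils.load_sans(folder)
flat_q = flat.transform_coords("Q", graph=graph)
flat_h = flat_q.hist(Q=200)
flat_h.plot(norm="log", vmin=1)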
Bonus question: can you explain why the counts in the flat-field data drop at high \(Q\)?
Exercise 3: Normalize the sample run#
The flat-field run gives a measure of the efficiency of each detector pixel. This efficiency now needs to be used to correct the counts in the sample run to yield a realistic \(I(Q)\) profile.
In particular, this should remove any unwanted artifacts in the data, such as the drop in counts around 0.03 Å\(^{-1}\) due to the air bubble inside the detector tube.
Normalizing is essentially just dividing the sample run by the flat-field run.
Hint: you may run into an error like `"Mismatch in coordinate 'Q'"`. Why is this happening? How can you get around it?
Solution:
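One way around the mismatch (a sketch, reusing `sample_h` and the `flat_q` from the previous exercise): histogram both runs onto the same \(Q\) bin edges, so the division is performed bin by bin.

# Reuse the sample run's bin edges so both histograms share the 'Q' coordinate
flat_h = flat_q.hist(Q=sample_h.coords["Q"])
normed = sample_h / flat_h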
normed.plot(norm="log", vmin=1.0e-3, vmax=10.0)
Save result to disk#
Finally, we need to save our results to disk, so that the reduced data can be forwarded to the next step in the pipeline (data analysis).
We will use a simple text file for this:
from scippneutron.io import save_xye
# The simple file format does not support bin-edge coordinates.
# So we convert to bin-centers first.
data = normed.copy()
data.coords["Q"] = sc.midpoints(data.coords["Q"])
save_xye("sans_iofq.dat", data)
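As a quick check, the file can be read back with plain NumPy (assuming three whitespace-separated columns \(Q\), \(I(Q)\), \(\sigma\), and `#`-prefixed header lines):

# Read the three columns back into separate arrays
q, iofq, sigma = np.loadtxt("sans_iofq.dat", unpack=True)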
Bonus exercise#
Re-run the reduction using the results from the simulations with fewer neutrons, and compare the results.
Solution:
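One way to compare the two reductions (a sketch, assuming the low-statistics result was stored in a hypothetical `normed_few` variable):

# Overlay both I(Q) curves; 'normed_few' is a hypothetical name for the low-statistics result
pp.plot({"many neutrons": normed, "few neutrons": normed_few}, norm="log")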