SANS data reduction#
This notebook will guide you through the data reduction for the SANS experiment that you simulated with McStas yesterday.
The following is a basic outline of what this notebook will cover:

- Loading the NeXus files that contain the data
- Inspecting/visualizing the data contents
- Converting the raw time-of-flight coordinate to something more useful (\(\lambda\), \(Q\), …)
- Normalizing to a flat-field run
- Writing the results to file
import numpy as np
import scipp as sc
import plopp as pp
import utils
Process the run with a sample#
Load the NeXus file data#
folder = "../3-mcstas/SANS_with_sample_many_neutrons"
sample = utils.load_sans(folder)
The first way to inspect the data is to view the HTML representation of the loaded object.
Try to explore what is inside the data, and familiarize yourself with the different sections (Dimensions, Coordinates, Data).
sample
Output (abridged): a data array with 21,708,006 events, event coordinates position, tof, x, and y, scalar coordinates sample_position and source_position, and event weights (in counts) with associated variances.
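Besides the HTML view, you can query the object programmatically. A minimal sketch using standard scipp attributes:

# Programmatic inspection of the loaded data
print(sample.sizes)                # dimension names and sizes
print(list(sample.coords.keys()))  # available coordinates
print(sample.unit)                 # unit of the event weights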
Visualize the data#
Here is a 2D visualization of the neutron counts, histogrammed along the tof and y dimensions:
sample.hist(tof=200, y=200).plot(norm="log", vmin=1.0e-2)
Histogramming along y only gives a 1D plot:
sample.hist(y=200).plot(norm="log")
Coordinate transformations#
The first step in the data reduction is to convert the raw event coordinates (position, time-of-flight) to something physically meaningful such as wavelength (\(\lambda\)) or momentum transfer (\(Q\)).
Scipp has a dedicated method for this called transform_coords (see docs here).
We begin with a standard graph which describes how to compute e.g. the wavelength from the other coordinates in the raw data.
from scippneutron.conversion.graph.beamline import beamline
from scippneutron.conversion.graph.tof import kinematic
graph = {**beamline(scatter=True), **kinematic("tof")}
sc.show_graph(graph, simplified=True)
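Since the graph is just a Python dict mapping output coordinate names to the functions that compute them (as the {**...} merge above shows), you can also inspect it directly:

# List which coordinates the graph knows how to compute
print(list(graph.keys()))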
To compute the wavelength of all the events, we simply call transform_coords on our loaded data, requesting the name of the coordinate we want in the output ("wavelength"), as well as providing the graph to be used to compute it (i.e. the one we defined just above). This yields
sample_wav = sample.transform_coords("wavelength", graph=graph)
sample_wav
Output (abridged): the data array has gained the computed coordinates L1, L2, Ltotal, incident_beam, scattered_beam, and wavelength (in Å), alongside the original position, sample_position, source_position, tof, x, and y coordinates.
The result has a wavelength coordinate. We can also plot the result:
sample_wav.hist(wavelength=200).plot()
We can see that the range of observed wavelengths agrees with the range set in the McStas model (5.25 - 6.75 Å).
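A minimal numeric check of that claim, using standard scipp reductions:

# Smallest and largest observed wavelength
wav = sample_wav.coords["wavelength"]
print(wav.min(), wav.max())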
Exercise 1: convert raw data to \(Q\)#
Instead of wavelength as in the example above, the task is now to convert the raw data to momentum transfer \(Q\).
The transformation graph is missing the computation for \(Q\), so you will have to add it yourself. As a reminder, \(Q\) is computed as

\[ Q = \frac{4\pi \sin\theta}{\lambda} , \]

where \(2\theta\) is the scattering angle and \(\lambda\) the wavelength.
You have to:

- create a function that computes \(Q\)
- add it to the graph
- call transform_coords using the new graph
Note that the graph already contains the necessary components to compute the scattering angle \(2\theta\) (called two_theta in code).
Solution:
def compute_q(two_theta, wavelength):
    # Q = 4*pi*sin(theta) / lambda, with theta = two_theta / 2
    return (4.0 * np.pi) * sc.sin(two_theta / 2) / wavelength
graph["Q"] = compute_q
# Show the updated graph
display(sc.show_graph(graph, simplified=True))
# Run the coordinate transformation
sample_q = sample.transform_coords("Q", graph=graph)
sample_q
Output (abridged): the data array now also carries two_theta (in rad) and Q (in 1/Å) event coordinates, in addition to the coordinates produced by the wavelength conversion.
Histogram the data in \(Q\)#
The final step in processing the sample run is to histogram the data into \(Q\) bins.
sample_h = sample_q.hist(Q=200)
sample_h.plot(norm="log", vmin=1)
Exercise 2: process flat-field run#
Repeat the step carried out above for the run that contained no sample (also known as a “flat-field” run).
Solution:
folder = "../3-mcstas/SANS_without_sample_many_neutrons"
# Load file
flat = utils.load_sans(folder)
# Convert to Q
flat_q = flat.transform_coords("Q", graph=graph)
# Histogram
flat_h = flat_q.hist(Q=200)
flat_h.plot()
Bonus question: can you explain why the counts in the flat-field data drop at high \(Q\)?
Exercise 3: Normalize the sample run#
The flat-field run gives a measure of the efficiency of each detector pixel. This efficiency now needs to be used to correct the counts in the sample run to yield a realistic \(I(Q)\) profile.
In particular, this should remove any unwanted artifacts in the data, such as the drop in counts around 0.03 Å\(^{-1}\) due to the air bubble inside the detector tube.
Normalizing is essentially just dividing the sample run by the flat-field run.
Hint: you may run into an error like "Mismatch in coordinate 'Q'". Why is this happening? How can you get around it?
Solution:
normed = sample_h / flat_h
---------------------------------------------------------------------------
DatasetError Traceback (most recent call last)
Cell In[12], line 1
----> 1 normed = sample_h / flat_h
DatasetError: Mismatch in coordinate 'Q' in operation 'divide':
(Q: 201) float64 [1/Å] [0.00451933, 0.00545244, ..., 0.190209, 0.191142]
vs
(Q: 201) float64 [1/Å] [0.00454958, 0.00548303, ..., 0.190305, 0.191239]
# Make common bins, restricting the range to avoid NaNs
qbins = sc.linspace("Q", 5.0e-3, 0.19, 201, unit="1/angstrom")
# Histogram both sample and flat-field with same bins
sample_h = sample_q.hist(Q=qbins)
flat_h = flat_q.hist(Q=qbins)
# Normalize
normed = sample_h / flat_h
normed.plot(norm="log", vmin=1.0e-3, vmax=10.0)
Save result to disk#
Finally, we need to save our results to disk, so that the reduced data can be forwarded to the next step in the pipeline (data analysis).
We will use a simple text file for this:
from scippneutron.io import save_xye
# The simple file format does not support bin-edge coordinates.
# So we convert to bin-centers first.
data = normed.copy()
data.coords["Q"] = sc.midpoints(data.coords["Q"])
save_xye("sans_iofq.dat", data)
Bonus exercise#
Re-run the reduction using the results from the simulations with fewer neutrons, and compare the results.
Solution:
cases = {"fewer_neutrons": "", "many_neutrons": "_many_neutrons"}
results = {"sample": {}, "normalized": {}}
vmin = {"sample": 1.0, "normalized": 5.0e-3}
for name, c in cases.items():
sample = utils.load_sans(f"../3-mcstas/SANS_with_sample{c}")
flat = utils.load_sans(f"../3-mcstas/SANS_without_sample{c}")
sample_q = sample.transform_coords("Q", graph=graph).hist(Q=qbins)
flat_q = flat.transform_coords("Q", graph=graph).hist(Q=qbins)
normed = sample_q / flat_q
results["normalized"][name] = normed
results["sample"][name] = sample_q
figs = [pp.plot(r, norm="log", vmin=vmin[key], title=key) for key, r in results.items()]
figs[0] + figs[1]