Fitting QENS data
Previously, some quasi-elastic neutron scattering (QENS) data were simulated and reduced; they can now be analysed with easyCore.
Before the analysis can begin, it is necessary to load the experimental data and check that they look reasonable.
The data can be loaded with np.loadtxt, as they have been stored in a simple space-separated column file.
import numpy as np
qens_data = np.loadtxt('../4-reduction/qens_energy_transfer_unknown_quasi_elastic_3_pulse.dat', unpack=True)
With the data read in, we can produce a quick plot.
import matplotlib.pyplot as plt
plt.errorbar(*qens_data)
plt.xlabel(r'$\omega$/meV')
plt.ylabel(r'$I(\omega)$')
plt.show()
The QENS data covers a slightly broader range than we want to investigate. Therefore, we will crop the data so that we are only using points between -0.06 and 0.06 meV.
q, i, di = qens_data[:, np.where((qens_data[0] > -0.06) & (qens_data[0] < 0.06))[0]]
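The indexing above keeps every column whose energy transfer lies inside the window. The same selection can be sketched with a boolean mask on small stand-in arrays (the values below are illustrative, not the real data):

```python
import numpy as np

# Stand-in for the loaded data: rows are energy transfer, intensity, uncertainty.
data = np.array([[-0.10, -0.05, 0.00, 0.05, 0.10],
                 [1.0, 2.0, 5.0, 2.0, 1.0],
                 [0.1, 0.1, 0.2, 0.1, 0.1]])

# Boolean mask selecting the columns with -0.06 < omega < 0.06.
mask = (data[0] > -0.06) & (data[0] < 0.06)
q, i, di = data[:, mask]
print(q)  # [-0.05  0.    0.05]
```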
We now want to consider the mathematical model to be used in the analysis. QENS data are typically analysed by fitting a Lorentzian function to the data, which has the functional form

\[
I(\omega) = \frac{A}{\pi} \frac{\tau}{(\omega - \omega_0)^2 + \tau^2}, \tag{5}
\]

where \(A\) is a scale factor, \(\tau\) is the half-width at half-maximum, and \(\omega_0\) is the centre offset.
Task
Write a function that implements Eqn. (5), called lorentzian.
Click below to show code solution
def lorentzian(omega: np.ndarray) -> np.ndarray:
    """
    A Lorentzian function.

    :param omega: the energy transfer values to calculate over.
    :return: intensity values.
    """
    # A, tau and omega_0 are the global Parameter objects defined below.
    return A.raw_value / np.pi * tau.raw_value / ((omega - omega_0.raw_value) ** 2 + tau.raw_value ** 2)
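As a sanity check on the functional form, a standalone version of the Lorentzian (with plain floats in place of the easyCore Parameter globals) should integrate to roughly the scale factor \(A\) over a sufficiently wide range:

```python
import numpy as np

def lorentzian_standalone(omega, A, tau, omega_0):
    # Same functional form as lorentzian(), but with explicit arguments
    # instead of the global easyCore Parameter objects.
    return A / np.pi * tau / ((omega - omega_0) ** 2 + tau ** 2)

# On a grid much wider than tau, the integral should approach A.
omega = np.linspace(-10, 10, 200001)
area = np.trapz(lorentzian_standalone(omega, A=10, tau=8e-3, omega_0=0), omega)
print(area)  # close to 10
```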
The Lorentzian function is then typically convolved with a Gaussian distribution that is centred at zero with some known width, \(\sigma\), describing the instrumental resolution. For this project, we will treat \(\sigma\) as another parameter to be fitted.
from scipy.stats import norm

def intensity(omega):
    """
    The convolution of a Gaussian and the Lorentzian.

    :param omega: the energy transfer values to calculate over.
    :return: intensity values.
    """
    gauss = norm(0, sigma.raw_value).pdf(omega)
    gauss /= gauss.sum()
    return np.convolve(lorentzian(omega), gauss, 'same')
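Because the Gaussian kernel is normalised to unit sum, the convolution broadens the peak without changing the total intensity. A small standalone check (again using plain floats rather than the Parameter objects):

```python
import numpy as np
from scipy.stats import norm

def lorentzian_standalone(omega, A, tau, omega_0):
    # Same functional form as lorentzian(), with explicit arguments.
    return A / np.pi * tau / ((omega - omega_0) ** 2 + tau ** 2)

omega = np.linspace(-0.06, 0.06, 1201)
signal = lorentzian_standalone(omega, A=10, tau=8e-3, omega_0=0)

gauss = norm(0, 1e-3).pdf(omega)
gauss /= gauss.sum()  # unit sum, so the convolution conserves total intensity

broadened = np.convolve(signal, gauss, 'same')
print(signal.max(), broadened.max())  # the peak is lowered by the broadening
```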
This means that there are a total of four parameters in our model.
Task
Create four Parameter objects, for \(A\), \(\tau\), \(\omega_0\), and \(\sigma\).
Each should have an initial value and a uniform prior distribution, based on the values given in Table 2.
| Parameter | Initial Value | Min | Max |
|---|---|---|---|
| \(A\) | 10 | 1 | 100 |
| \(\tau\) | \(8.0 \times 10^{-3}\) | \(1.0 \times 10^{-4}\) | \(1.0 \times 10^{-2}\) |
| \(\omega_0\) | \(1.0 \times 10^{-3}\) | 0 | \(2.0 \times 10^{-3}\) |
| \(\sigma\) | \(1.0 \times 10^{-3}\) | \(1.0 \times 10^{-5}\) | \(1.0 \times 10^{-1}\) |
Click below to show code solution
from easyCore.Objects.Variable import Parameter
A = Parameter(name='A', value=10, fixed=False, min=1, max=100)
tau = Parameter(name='tau', value=8e-3, fixed=False, min=1e-4, max=10e-3)
omega_0 = Parameter(name='omega_0', value=0.001, fixed=False, min=0, max=0.002)
sigma = Parameter(name='sigma', value=0.001, fixed=False, min=1e-5, max=0.1)
It is now possible to compare our model at the initial estimates to the simulated data.
plt.errorbar(q, i, di, marker='.', ls='', color='k')
plt.plot(q, intensity(q), '-')
plt.xlabel(r'$\omega$/meV')
plt.ylabel(r'$I(\omega)$')
plt.show()
Task
Using easyCore, obtain maximum likelihood estimates for the four parameters of the model from comparison with the data.
Click below to show code solution
from easyCore.Objects.ObjectClasses import BaseObj
from easyCore.Fitting.Fitting import Fitter
params = BaseObj(name='params', A=A, tau=tau, omega_0=omega_0, sigma=sigma)
f = Fitter(params, intensity)
res = f.fit(x=q, y=i, weights=1/di)
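For readers without easyCore to hand, an equivalent weighted least-squares fit can be sketched with scipy.optimize.curve_fit. The data below are synthetic, generated from the model itself, since the reduced file is not shipped with this page; the bounds follow Table 2 and the starting guess uses the initial values from the text:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def model(omega, A, tau, omega_0, sigma):
    # Lorentzian convolved with a normalised Gaussian, mirroring intensity()
    # but with explicit parameters instead of easyCore Parameter objects.
    lor = A / np.pi * tau / ((omega - omega_0) ** 2 + tau ** 2)
    gauss = norm(0, sigma).pdf(omega)
    gauss /= gauss.sum()
    return np.convolve(lor, gauss, 'same')

# Synthetic "measurement": the model plus 2%-of-peak Gaussian noise.
rng = np.random.default_rng(1)
omega = np.linspace(-0.06, 0.06, 601)
true_params = (9.0, 5e-3, 5e-4, 1e-3)
y = model(omega, *true_params)
dy = 0.02 * y.max() * np.ones_like(y)
y_noisy = y + rng.normal(0.0, dy)

popt, pcov = curve_fit(model, omega, y_noisy,
                       p0=(10, 8e-3, 1e-3, 1e-3), sigma=dy,
                       absolute_sigma=True,
                       bounds=([1, 1e-4, 0, 1e-5], [100, 1e-2, 2e-3, 1e-1]))
perr = np.sqrt(np.diag(pcov))  # one-standard-deviation uncertainties
```

With clean synthetic data the fitted values land close to the generating parameters; for the real reduced data, the easyCore route above is the intended workflow.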
We can then plot the model and the data together as before and print the values of the parameters along with their uncertainties.
plt.errorbar(q, i, di, marker='.', ls='', color='k')
plt.plot(q, intensity(q), '-')
plt.xlabel(r'$\omega$/meV')
plt.ylabel(r'$I(\omega)$')
plt.show()
A.value
tau.value
omega_0.value
sigma.value