Implement a routine that makes a least-squares fit of a given data set, {x_i, y_i, δy_i | i=1..n}, with a linear combination of given functions, F(x) = ∑_{k=1..m} c_k f_k(x).
The subroutine must calculate the vector of coefficients, c_k, and the covariance matrix. The fitting functions {f_k(x)} can be programmed, e.g., as
a vector of functions (C++11, JavaScript, Python),
std::vector<std::function<double(double)>> fitfunctions = {f1,...,fm};
a function of two parameters (C, Fortran, C++, Python),
double fitfunctions(int i, double x){return fi(x);}
For example:
#include <stdio.h> // fprintf is here
#include <math.h>  // NAN is here
double funs(int i, double x){
	switch(i){
		case 0: return 1.0;
		case 1: return x;
		case 2: return x*x;
		default: fprintf(stderr, "funs: wrong i=%d\n", i); return NAN;
	}
}
(3 points) Linear least-squares solution using thin singular-value decomposition.
Here, by the thin singular value decomposition we understand a representation of a tall n×m (n>m) matrix A in the form A = UΣV^T, where U is an n×m matrix with orthonormal columns (U^T U = 1), Σ is a square m×m diagonal matrix with non-negative real numbers on the diagonal (called the singular values of the matrix A), and V is a square m×m orthogonal matrix (V^T V = 1).
The singular value decomposition can be found by diagonalising the matrix A^T A: A^T A = V D V^T, where D is a diagonal matrix with the eigenvalues of A^T A on the diagonal and V is the matrix of the corresponding eigenvectors. Indeed, it is easy to check (do it) that the sought decomposition can then be constructed as A = UΣV^T with Σ = D^(1/2) and U = A V D^(-1/2).
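The construction above can be checked numerically. The following sketch (using NumPy; the variable names are our own) diagonalizes A^T A with `numpy.linalg.eigh` and verifies that the resulting U has orthonormal columns and that UΣV^T reproduces A:

```python
import numpy as np

# Numeric check of the construction above: diagonalize A^T A = V D V^T,
# build Sigma = D^(1/2) and U = A V D^(-1/2), then verify the properties.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))        # a random tall matrix (full rank)
D, V = np.linalg.eigh(A.T @ A)         # eigenvalues and eigenvectors of A^T A
Sigma = np.diag(np.sqrt(D))            # Sigma = D^(1/2)
U = A @ V @ np.diag(1/np.sqrt(D))      # U = A V D^(-1/2)
print(np.allclose(U.T @ U, np.eye(3))) # True: columns of U are orthonormal
print(np.allclose(U @ Sigma @ V.T, A)) # True: A = U Sigma V^T
```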
The singular value decomposition can be used to solve the linear least-squares problem Ax = b. Indeed, inserting the decomposition into the equation gives UΣV^T x = b. Multiplying from the left with U^T and using the orthonormality of the columns of U, one gets the projected equation ΣV^T x = U^T b. This is a square system which is easily solved: first solve the diagonal system Σy = U^T b for y, and then obtain x as x = Vy.
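Putting the two pieces together, a minimal sketch of the fitting routine might look as follows (in Python with NumPy; `thin_svd` and `lsfit` are our own illustrative names). The rows of the design matrix are scaled by 1/δy_i so that the least-squares solution is the usual χ²-minimizing fit, and the covariance of the coefficients comes out as (A^T A)^(-1) = V Σ^(-2) V^T:

```python
import numpy as np

def thin_svd(A):
    """Thin SVD of a tall matrix A via diagonalization of A^T A,
    as described above: A^T A = V D V^T, Sigma = D^(1/2), U = A V D^(-1/2)."""
    D, V = np.linalg.eigh(A.T @ A)   # eigenvalues (ascending) and eigenvectors
    s = np.sqrt(D)                   # singular values of A
    U = (A @ V) / s                  # divides column k by s[k], i.e. A V D^(-1/2)
    return U, s, V

def lsfit(fs, x, y, dy):
    """Least-squares fit y(x) ~ sum_k c_k f_k(x) with weights 1/dy;
    returns the coefficient vector c and its covariance matrix S."""
    A = np.array([[f(xi) for f in fs] for xi in x]) / np.asarray(dy)[:, None]
    b = np.asarray(y) / np.asarray(dy)
    U, s, V = thin_svd(A)
    c = V @ ((U.T @ b) / s)          # solve Sigma*z = U^T b, then c = V z
    S = V @ np.diag(1/s**2) @ V.T    # covariance = V Sigma^(-2) V^T
    return c, S
```

For example, fitting data generated from y = 1 + 2x + 3x² with the basis {1, x, x²} from the `funs` snippet above should recover the coefficients (1, 2, 3).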
(1 point)
Make up two experimental points with error bars, {x_i, y_i ± Δy_i | i=1,2}. Assume that the theoretical curve is a straight line. Make an ordinary least-squares fit of your experimental data with a linear function y(x) = a + bx. Make a plot of your results. On the plot, illustrate the probable distribution of the variable y as a function of x (under the assumption of normal distribution of y_i, a, b, if you need that).
Hint: the variation of y with respect to variations of a and b, as a function of x, is given by
δy = δa + δb x.
Therefore
δy^2 = δa^2 + δb^2 x^2 + 2 δa δb x.
For normally distributed a and b this translates into
Δy^2(x) = Δa^2 + Δb^2 x^2 + 2<δa δb> x,
where <δa δb> is the covariance of a and b.
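As a sketch of the hint above (with made-up data points and our own variable names), one can fit the two points with y = a + bx via the weighted normal equations and propagate the covariance matrix into the error band Δy(x):

```python
import numpy as np

# Two made-up experimental points with error bars (purely illustrative numbers).
x  = np.array([1.0, 2.0])
y  = np.array([2.0, 3.5])
dy = np.array([0.2, 0.3])

A = np.column_stack([1/dy, x/dy])   # weighted design matrix for the basis {1, x}
S = np.linalg.inv(A.T @ A)          # covariance matrix of the parameters (a, b)
a, b = S @ (A.T @ (y/dy))           # least-squares solution of y = a + b*x

def Dy(t):
    # Dy^2(t) = Da^2 + Db^2 t^2 + 2<da db> t, as in the hint above
    return np.sqrt(S[0, 0] + S[1, 1]*t**2 + 2*S[0, 1]*t)
```

With only two points the straight line passes exactly through both, and the band reproduces the error bars at the data points, Δy(x_i) = Δy_i; one can then plot y(x) ± Δy(x) to illustrate the probable spread of y.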
Make up an interesting exercise here.