Implement a function that makes a least-squares fit of a given data set, {xᵢ, yᵢ, δyᵢ | i = 1,...,n}, with a linear combination of given functions, F(x) = ∑ₖ₌₁..ₘ cₖ fₖ(x).
The parameters to the function should be the data to fit and the set of functions whose linear combination should fit the data. The function must return the vector of coefficients, cₖ, and the covariance matrix, Σ.
The fitting functions {fk(x)} can be programmed, e.g., as
a vector of functions (C++11, JavaScript, Python),
std::vector<std::function<double(double)>> fitfunctions =
{f1,...,fm};
a function of two parameters (C, Fortran, C++, Python),
double fitfunctions(int i, double x){return fi(x);}
For example:
#include<math.h>  // NAN is here
#include<stdio.h> // fprintf is here
double funs(int i, double x){
switch(i){
	case 0: return 1.0;
	case 1: return x;
	case 2: return x*x;
	default: fprintf(stderr,"funs: wrong i:%d\n",i); return NAN;
	}
}
Make up some interesting fits and check that the reported errors of the fitting coefficients are reasonable. For example,
(3 points) Ordinary least-squares solution by thin singular-value decomposition
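The required routine can be sketched, e.g., in Python with numpy (one of the languages the text allows); the thin SVD of the weighted design matrix yields both the coefficients and the covariance matrix. The name lsfit and its interface here are illustrative assumptions, not the prescribed solution:

```python
import numpy as np

def lsfit(fs, x, y, dy):
    """Ordinary least-squares fit of data (x, y +/- dy) with
    F(x) = sum_k c_k f_k(x); returns (c, Sigma)."""
    x, y, dy = (np.asarray(a, dtype=float) for a in (x, y, dy))
    # Weighted design matrix A[i,k] = f_k(x_i)/dy_i and rhs b[i] = y_i/dy_i
    A = np.array([[f(xi) for f in fs] for xi in x]) / dy[:, None]
    b = y / dy
    U, S, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD: A = U S V^T
    c = Vt.T @ ((U.T @ b) / S)                         # c = V S^-1 U^T b
    Sigma = Vt.T @ np.diag(S**-2.0) @ Vt               # (A^T A)^-1 = V S^-2 V^T
    return c, Sigma

# Usage: fit a parabola through noiseless quadratic data
fs = [lambda t: 1.0, lambda t: t, lambda t: t*t]
xdata = [0.0, 1.0, 2.0, 3.0, 4.0]
ydata = [1.0 + 2.0*xi + 0.5*xi*xi for xi in xdata]
c, Sigma = lsfit(fs, xdata, ydata, [0.1]*len(xdata))
```

On noiseless quadratic data the recovered coefficients should reproduce 1, 2 and 0.5, and Sigma should be symmetric positive definite.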
(1 point)
Make up two experimental points with error bars, {xᵢ, yᵢ ± Δyᵢ | i = 1,2}. Assume that the theoretical curve is a straight line. Make an ordinary least-squares fit of your experimental data with a linear function y(x) = a + bx. Make a plot of your results. On the plot illustrate the probable distribution of the variable y as a function of x (assuming normal distributions of yᵢ, a, b where needed).
Hint: the variation of y with respect to variations of a and b, as a function of x, is given as
δy = δa + δb·x.
Therefore
δy² = δa² + δb²x² + 2δa δb x.
For normally distributed a and b this translates into
Δy²(x) = Δa² + Δb²x² + 2⟨δa δb⟩x,
where ⟨δa δb⟩ is the covariance.
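The hint can be checked numerically. A minimal sketch in Python/numpy, with made-up two-point data (the numbers are arbitrary illustrations, not given by the exercise):

```python
import numpy as np

# Two made-up experimental points with error bars (arbitrary illustration)
x  = np.array([1.0, 2.0])
y  = np.array([2.0, 3.5])
dy = np.array([0.2, 0.3])

# Ordinary least-squares fit of y(x) = a + b*x via the weighted design matrix
A = np.column_stack([np.ones_like(x), x]) / dy[:, None]
rhs = y / dy
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
a, b = coef
Sigma = np.linalg.inv(A.T @ A)   # covariance matrix of (a, b)

# Error band from the hint: Dy^2(x) = Da^2 + Db^2 x^2 + 2<da db> x
xs = np.linspace(0.0, 3.0, 31)
band = np.sqrt(Sigma[0, 0] + Sigma[1, 1]*xs**2 + 2*Sigma[0, 1]*xs)
# The curves a + b*xs and a + b*xs +/- band can then be plotted,
# e.g. with matplotlib, to illustrate the probable distribution of y.
```

With two points and two parameters the fit interpolates the data exactly, so at x = x₁ the band Δy(x₁) reduces to Δy₁, a quick sanity check on the covariance.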
Make up an interesting exercise here.