Ordinary least-squares fit

  1. (6 points) Ordinary least-squares fit by QR-decomposition
    1. Implement a function which makes a least-squares fit of a given data-set, {x_i, y_i, δy_i | i=1,…,n}, with a linear combination of given functions, F(x) = ∑_{k=1..m} c_k f_k(x).

      The parameters to the function should be the data to fit and the set of functions whose linear combination should fit the data. The function must return the vector of coefficients, c_k, and the covariance matrix, Σ.

      The fitting functions {fk(x)} can be programmed, e.g., as

      • a vector of functions (C++11, JavaScript, Python),
        std::vector<std::function<double(double)>> fitfunctions = {f1,...,fm};

      • a function of two parameters (C, Fortran, C++, Python),
        double fitfunctions(int i, double x){return fi(x);}
        For example:

        #include<stdio.h> // fprintf is here
        #include<math.h>  // NAN is here
        double funs(int i, double x){
           switch(i){
           case 0: return 1.0;
           case 1: return x;
           case 2: return x*x;
           default: fprintf(stderr,"funs: wrong i:%d\n",i); return NAN;
           }
        }
        
      • In the worst case you can simply precalculate the matrix A_ik = f_k(x_i) and pass it as a parameter.
    2. Make up some interesting fits and check that the reported errors of the fitting coefficients are reasonable.
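A minimal sketch of such a fit in Python (one of the languages listed above). Here numpy's QR factorisation and triangular solver stand in for your own implementations, and the function name `lsfit` is illustrative; the rows of the design matrix are weighted by 1/δy_i in the usual OLS fashion:

```python
import numpy as np

def lsfit(fs, x, y, dy):
    """OLS fit of y(x) = sum_k c_k f_k(x); returns coefficients c and covariance S."""
    x, y, dy = (np.asarray(a, dtype=float) for a in (x, y, dy))
    # weighted design matrix A_ik = f_k(x_i)/dy_i and right-hand side b_i = y_i/dy_i
    A = np.array([[f(xi) for f in fs] for xi in x]) / dy[:, None]
    b = y / dy
    Q, R = np.linalg.qr(A)            # numpy's QR stands in for your own routine
    c = np.linalg.solve(R, Q.T @ b)   # back-substitution of R c = Q^T b
    Rinv = np.linalg.inv(R)           # in practice: back-substitute columns of I
    S = Rinv @ Rinv.T                 # covariance (A^T A)^{-1} = R^{-1} R^{-T}
    return c, S
```

For a straight-line fit with fs = {1, x}, sqrt(S[0,0]) and sqrt(S[1,1]) are then the reported uncertainties Δa and Δb, which you can check against data generated with known coefficients and noise.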

  2. (3 points) Ordinary least-squares solution by thin singular-value decomposition

    1. Implement a function which makes a singular-value decomposition of a (tall) matrix A by diagonalising AᵀA (with your implementation of the Jacobi eigenvalue algorithm), AᵀA = VDVᵀ (where D is the diagonal matrix of eigenvalues of AᵀA and V is the matrix of the corresponding eigenvectors), and then using A = UΣVᵀ, where U = AVD^(-1/2) and Σ = D^(1/2).
    2. Implement a function which solves the linear least-squares problem Ax=b using your implementation of singular-value decomposition.
    3. Implement a function which makes an ordinary least-squares fit using your implementation of singular-value decomposition.
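The steps above can be sketched as follows, again in Python. numpy's symmetric eigensolver stands in for your Jacobi routine, the names `svd_thin` and `svd_solve` are illustrative, and A is assumed to be tall with full column rank so that D^(-1/2) exists:

```python
import numpy as np

def svd_thin(A):
    """Thin SVD of a tall, full-column-rank A via the eigendecomposition of A^T A."""
    # A^T A = V D V^T (eigh stands in for your Jacobi eigenvalue routine)
    D, V = np.linalg.eigh(A.T @ A)
    s = np.sqrt(D)        # singular values; Sigma = diag(s) = D^(1/2)
    U = (A @ V) / s       # U = A V D^(-1/2); the division scales column k by s[k]
    return U, s, V

def svd_solve(A, b):
    """Least-squares solution of Ax = b: x = V Sigma^{-1} U^T b."""
    U, s, V = svd_thin(A)
    return V @ ((U.T @ b) / s)
```

For the fit itself, apply `svd_solve` to the same weighted design matrix and right-hand side as in the QR version; the covariance matrix is then V Σ⁻² Vᵀ.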
  3. (1 point)

    Make up two experimental points with error bars, {x_i, y_i ± Δy_i | i=1,2}. Assume that the theoretical curve is a straight line. Make an ordinary least-squares fit of your experimental data with a linear function y(x) = a + bx. Make a plot of your results. On the plot illustrate the probable distribution of the variable y as a function of x (under the assumption of normal distributions of y_i, a, and b, if you need that).

    Hint: the variation of y with respect to variations of a and b as a function of x is given as
    δy = δa + δb·x.
    Therefore
    δy² = δa² + δb²·x² + 2·δa·δb·x.
    For normally distributed a and b this translates into
    Δy²(x) = Δa² + Δb²·x² + 2⟨δa δb⟩·x,
    where ⟨δa δb⟩ is the covariance of a and b.
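The hint translates into a small helper for plotting the one-sigma band. This sketch assumes the coefficients c = (a, b) and the covariance matrix S come from a fit as in part 1; the name `y_with_band` is illustrative:

```python
import numpy as np

def y_with_band(x, c, S):
    """Fitted line y(x) = a + b x and its one-sigma width from the covariance S."""
    y = c[0] + c[1] * x
    # Delta y^2(x) = Delta a^2 + Delta b^2 x^2 + 2 <da db> x,
    # with Delta a^2 = S[0,0], Delta b^2 = S[1,1], <da db> = S[0,1]
    dy = np.sqrt(S[0, 0] + S[1, 1] * x**2 + 2 * S[0, 1] * x)
    return y, dy
```

On the plot, drawing y(x) together with y(x) ± Δy(x) (e.g. with matplotlib's fill_between) illustrates the probable distribution of y.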

  4. (0 points)

    Make up an interesting exercise here.