Examination projects

Please check your ordinal number in the spreadsheet, as I have just sorted people by their chosen examination date!

The algorithm to pick a project:

You have the right to choose instead one of the more complicated --- in my opinion --- voluntary projects below. The above algorithm for picking your project also applies to voluntary projects.

Remember to create a readme.txt file with a description of what you were doing.


Ok, it is reshuffled for 02/Jul/2013-10:00 run: go ahead and good luck... :)

  1. Multi-dimensional pseudo-random adaptive recursive (non-stratified: simply subdivide in all dimensions) integrator in a rectangular volume.

    Do not waste points.

    Check how far in the number of dimensions you can go without overwhelming the computer (most probably by running out of stack; in any case, stay within 100MB of allocated memory and 5 minutes of nice run time).

  2. Two-dimensional integrator for integrals of the form
    ∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y)
    which consecutively applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like

    double integ2D(double a, double b, double d(double x), double u(double x), double f(double x, double y), double acc, double eps)

  3. Quantum annealing (minimization).

  4. Inverse iteration algorithm for eigenvalues (and eigenvectors).

  5. One-dimensional adaptive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals.

    Use a closed classical quadrature unless the integrand returns INFINITY in which case switch to an open quadrature.

    Compare with your other integrators.

  6. Multidimensional pseudo-random integrator --- that is, plain Monte Carlo: Q(N) = V·average({f(x_i)}), ΔQ(N) = V·σ/√N where σ² = variance({f(x_i)}) --- with asymptotic error estimate:

    Check whether this asymptotic error estimate gives any improvement over the plain variance-based Monte Carlo estimate.

    Try using the fitting function f(N) = c_0 + c_1/√N + c_2/N.

  7. Multidimensional quasi-random integrator with asymptotic error estimate:

    Try fitting more data-points; try using more terms in the fitting function, say f(N) = c_0 + c_1/N + c_2/N².

  8. Quasi-Newton's minimization with SR1 update.

  9. Implement a (sub-)spline of a data set {x_i, y_i | i=1,..,n; p_1, p_n} --- where the user also supplies the derivatives p_1 and p_n at the end-points x_1, x_n --- using the following algorithm:

    Check whether it makes smaller wiggles than quadratic and cubic splines.

    Now, if the user supplies NAN (from math.h) for one or both of the end-point derivatives, you have to estimate p_1 and/or p_n from the same cubic interpolating polynomial that you used to determine, correspondingly, p_2 and/or p_{n-1}.

  10. Symmetric row/column update of a size-n symmetric eigenvalue problem.

    The matrix to diagonalize is given in the form

    A = D + e(p)uᵀ + u e(p)ᵀ

    where D is a diagonal matrix with diagonal elements {d_k, k=1,...,n}, u is a given column-vector, and e(p) is the unit vector in the direction p, with components e(p)_i = δ_ip, where 1≤p≤n.

    Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations.

    Although this problem can be solved through two rank-1 updates (see the notes), you probably want to use the following method which should be simpler.

    Without loss of generality we can assume that the component u_p equals zero (if it is not, we can make it so by simply redefining the diagonal element, d_p → d_p + 2u_p, and setting u_p → 0).

    The eigenvalue equation for the updated matrix reads

    (D - λ)x + e(p)uᵀx + u e(p)ᵀx = 0,

    where x is an eigenvector and λ is the corresponding eigenvalue.

    The component number p of this equation reads

    (d_p - λ)x_p + uᵀx = 0,

    while the component number k≠p reads

    (d_k - λ)x_k + u_k x_p = 0.

    Dividing the last equation by (d_k - λ), multiplying it by u_k, summing over k≠p, substituting uᵀx from the first equation, and dividing by x_p gives the sought secular equation,

    -(d_p - λ) + ∑_{k≠p} u_k²/(d_k - λ) = 0.

    The roots of this equation give the updated eigenvalues (check the derivations thoroughly though, it is only an idea, I didn't try it myself).

    This equation is similar to the secular equation of rank-1 update (see the notes) and has similar properties.
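As an illustration of the idea (check it, like the derivation itself, before relying on it): since f(λ) = -(d_p - λ) + ∑_{k≠p} u_k²/(d_k - λ) has f' > 0, it is strictly increasing between its poles d_k, so each root can be bracketed and bisected. The names and the bracket-expansion strategy below are my own; distinct d_k and u_p = 0 are assumed:

```c
#include <stdlib.h>

/* secular function: strictly increasing between its poles d_k, k!=p */
static double secular(int n, double *d, double *u, int p, double z){
    double s = -(d[p] - z);
    for (int k = 0; k < n; k++)
        if (k != p) s += u[k]*u[k]/(d[k] - z);
    return s;
}

/* bisection for the single root in (lo,hi), where f(lo)<0<f(hi) */
static double bisect(int n, double *d, double *u, int p,
                     double lo, double hi, double tol){
    while (hi - lo > tol) {
        double mid = (lo + hi)/2;
        if (secular(n, d, u, p, mid) < 0) lo = mid; else hi = mid;
    }
    return (lo + hi)/2;
}

static int cmp(const void *a, const void *b){
    double x = *(const double*)a, y = *(const double*)b;
    return (x > y) - (x < y);
}

/* eigenvalues of A = D + e(p)u^T + u e(p)^T, assuming u[p]==0
   and distinct d_k; fills lambda[0..n-1] in ascending order */
void update_eigen(int n, double *d, double *u, int p,
                  double *lambda, double tol){
    double poles[n-1]; int m = 0;
    for (int k = 0; k < n; k++) if (k != p) poles[m++] = d[k];
    qsort(poles, n-1, sizeof(double), cmp);
    /* one root strictly between each pair of adjacent poles */
    for (int i = 0; i+1 < n-1; i++)
        lambda[i+1] = bisect(n, d, u, p, poles[i]+tol, poles[i+1]-tol, tol);
    /* outermost roots: expand the bracket until the sign changes */
    double lo = poles[0] - 1;
    while (secular(n, d, u, p, lo) > 0) lo -= 2*(poles[0] - lo);
    lambda[0] = bisect(n, d, u, p, lo, poles[0] - tol, tol);
    double hi = poles[n-2] + 1;
    while (secular(n, d, u, p, hi) < 0) hi += 2*(hi - poles[n-2]);
    lambda[n-1] = bisect(n, d, u, p, poles[n-2] + tol, hi, tol);
}
```

Each bisection costs O(n) evaluations of the secular function, and there are n roots, giving the required O(n²) total.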

  11. Symmetric rank-1 update of a size-n symmetric eigenvalue problem.

    The matrix A to diagonalize is given in the form

    A = D + uuᵀ,

    where D is a diagonal matrix and u is a column-vector.

    So, given the diagonal elements of the matrix D and the elements of the vector u, find the eigenvalues of the matrix A using only O(n²) operations (see the section ''Eigenvalues of updated matrix'' in the book).

  12. Implement a subroutine that calculates the natural logarithm of a complex argument z via the integral

    ln(z) = ∫_1^z dt/t

    by directly calculating the integral of the complex-valued integrand of a complex argument along a straight line in the complex plane.

    Think, what happens if you integrate not along a straight line but rather make a circle (or two) around zero :)
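A straight-line version might look like this in C99 (the parametrization t(s) = 1 + s(z-1) and the plain composite Simpson rule are my choices; an adaptive integrator would serve as well):

```c
#include <complex.h>

/* ln(z) = integral of dt/t from 1 to z along the straight line
   t(s) = 1 + s*(z-1), s in [0,1]; composite Simpson rule, N even */
double complex clog_line(double complex z, int N){
    double complex dz = z - 1, sum = 0;
    double h = 1.0/N;
    for (int i = 0; i <= N; i++) {
        double complex t = 1 + i*h*dz;
        double w = (i == 0 || i == N) ? 1 : (i % 2 ? 4 : 2);
        sum += w/t;
    }
    return dz*sum*h/3;
}
```

Note that the straight line from 1 to z must not pass through zero; the circle experiment hinted at above probes the branch structure of the logarithm.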

  13. Multidimensional pseudo-random (plain Monte Carlo) vs quasi-random (Halton and/or lattice sequence) integrators: investigate the convergence rates (of some interesting integrals in different dimensions) as function of the number of sample points.

  14. Implement a subroutine that calculates the error function of complex argument z,

    erf(z) = 2/√π ∫_0^z exp(-t²) dt ,

    by directly calculating the integral of the complex-valued integrand of a complex argument along a straight line in the complex plane. You can use the complex type from complex.h (C), complex<double> (C++), or Python's module 'cmath'. Try different recursive adaptive integrators and pick the most effective one for the job. Investigate the realistic accuracy of this method of calculating the error function and the realistic region in the complex plane where you can reliably calculate it. Compare with the built-in error function from math.h.

  15. Akima sub-spline.

  16. Build a cubic (sub-)spline, S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)² + d_i(x-x_i)³, for a data set {x_i, y_i, y'_i | i=1,..,n} where the function is tabulated together with its derivative (see the subsection 'Akima sub-spline interpolation' for inspiration) --- for example, as the result of a numerical integration of an ordinary second-order differential equation. Now make the second derivative of the spline continuous by increasing the order of the spline to four, for example in the form S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)² + d_i(x-x_i)³ + e_i(x-x_i)²(x-x_{i+1})².
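For the first stage, matching both y_i and y'_i at the two ends of each interval fixes the cubic coefficients directly (a sketch; the function name and array layout are mine):

```c
/* Hermite cubic on each interval [x_i, x_{i+1}]:
   S_i(x) = y_i + b_i*(x-x_i) + c_i*(x-x_i)^2 + d_i*(x-x_i)^3
   with S_i and S_i' matching y and y' at both end-points */
void hermite_coeffs(int n, double *x, double *y, double *dy,
                    double *b, double *c, double *d){
    for (int i = 0; i < n-1; i++) {
        double dx = x[i+1] - x[i], p = (y[i+1] - y[i])/dx;
        b[i] = dy[i];
        c[i] = (3*p - 2*dy[i] - dy[i+1])/dx;
        d[i] = (dy[i] + dy[i+1] - 2*p)/(dx*dx);
    }
}
```

The quartic e_i-terms can then be added on top without disturbing these matches, since (x-x_i)²(x-x_{i+1})² and its first derivative both vanish at the two end-points.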

  17. One-dimensional adaptive integrator which initially uses a closed embedded classical quadrature (allegedly more precise than an equivalent open one), but switches to open quadratures on intervals where it encounters INFINITY from the integrand.

    Improvement: on intervals with INFINITY switch to Clenshaw-Curtis variable substitution algorithm.

  18. Quasi-Newton's root finding with Broyden's update

  19. ODE driver that checks |δy_k| < (δ + ε|y_k|)·√(h/(b-a)), where k=1...n counts the components of the vector-function y (that is, instead of the simpler condition ||δy|| < (δ + ε||y||)·√(h/(b-a))), together with a two-and-one-third-step stepper (which uses one previous point and makes an auxiliary h/3 step).

  20. QR-decomposition using Givens rotations.

  21. Bi-linear interpolation on a regular grid in two dimensions. The signature of the interpolating subroutine can be

    double bilinear(int nx, int ny, double* x, double* y, double** z, double px, double py)

    or

    double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* z, double px, double py)
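A sketch of the first variant (assuming the evaluation point is passed as two extra arguments px, py; the linear cell search is my choice, a binary search would do better):

```c
/* bilinear interpolation on a regular nx-by-ny grid:
   locate the cell containing (px,py), then blend the 4 corners */
double bilinear(int nx, int ny, double *x, double *y, double **z,
                double px, double py){
    int i = 0, j = 0;
    while (i < nx - 2 && x[i+1] < px) i++;  /* find the cell */
    while (j < ny - 2 && y[j+1] < py) j++;
    double tx = (px - x[i])/(x[i+1] - x[i]);
    double ty = (py - y[j])/(y[j+1] - y[j]);
    return z[i][j]*(1-tx)*(1-ty) + z[i+1][j]*tx*(1-ty)
         + z[i][j+1]*(1-tx)*ty + z[i+1][j+1]*tx*ty;
}
```

Bilinear interpolation reproduces any function of the form A + Bx + Cy + Dxy exactly, which gives an easy correctness check.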

  22. Quasi-Newton's minimization with Broyden's update

Voluntary projects

  1. Evolutionary minimization algorithm of your choice.

  2. Few smallest eigenvalues of a matrix using Lanczos tridiagonalization plus recursive diagonalization (amounts to rank-1 update).

  3. Multi-dimensional Monte Carlo integration with recursive stratified sampling. (Try using the largest |⟨f⟩_left - ⟨f⟩_right| as the criterion for choosing the bisection direction, instead of the largest max(variance_left, variance_right).)

  4. Simulated annealing with downhill simplex or your other favourite local minimizer.

The functions you could use to test your 1D integrators