Yet another cubic sub-spline
Consider a data set {xi, yi}i=1,..,n which represents a tabulated function.
Implement a sub-spline of this data; see the subsection "Akima sub-spline interpolation" in the book for inspiration.
Check whether it makes smaller wiggles than quadratic and cubic splines.
Cubic sub-spline for data with derivatives
For a data set {xi, yi, y'i}i=1,..,n, where the function is tabulated together with its derivative — for example, as a result of a numerical integration of a second-order ordinary differential equation — build a cubic sub-spline. See the subsection "Akima sub-spline interpolation" for inspiration.
A possible extension: make the second derivative of the spline continuous by increasing the order of the spline to four, for example, in the form
Si(x) = yi + bi(x-xi) + ci(x-xi)² + di(x-xi)³ + ei(x-xi)²(x-xi+1)² ,
Akima sub-spline
Implement the Akima sub-spline. See the section "Akima sub-spline interpolation" in the book for the inspiration.
Bi-linear interpolation on a rectilinear grid in two dimensions
A rectilinear grid (note that rectilinear is not necessarily Cartesian nor regular) in two dimensions is a set of nx×ny points where each point can be addressed by a double index (i,j), where 1≤i≤nx, 1≤j≤ny, and the coordinates of the point (i,j) are given as (xi,yj), where x and y are vectors with sizes nx and ny respectively. The values of the tabulated function F at the grid points can then be arranged as a matrix {Fi,j=F(xi,yj)}.
Build an interpolating routine which takes as input the vectors x and y and the matrix Fi,j, and returns the bi-linearly interpolated value of the function at a given 2D point p=(px,py).
See the chapter "Bi-linear interpolation" in the book.
The signature of the interpolating subroutine can be
double bilinear(int nx, int ny, double* x, double* y, double** F, double px, double py)
or
double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* F, double px, double py)
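A minimal sketch of the first signature (assuming F is stored as an array of nx row pointers with F[i][j]=F(xi,yj), and that the query point lies inside the grid; the linear interval search is for illustration only, a binary search scales better):

```c
#include <assert.h>
#include <math.h>

/* Locate i such that t[i] <= p < t[i+1]
   (linear search for clarity; use binary search for large n). */
static int find_interval(int n, const double* t, double p) {
    int i = 0;
    while (i < n - 2 && p >= t[i + 1]) i++;
    return i;
}

/* Bi-linear interpolation on a rectilinear grid, F[i][j] = F(x[i], y[j]). */
double bilinear(int nx, int ny, double* x, double* y, double** F,
                double px, double py) {
    int i = find_interval(nx, x, px);
    int j = find_interval(ny, y, py);
    double tx = (px - x[i]) / (x[i + 1] - x[i]); /* in [0,1] inside the cell */
    double ty = (py - y[j]) / (y[j + 1] - y[j]);
    return (1 - tx) * (1 - ty) * F[i][j]
         + tx       * (1 - ty) * F[i + 1][j]
         + (1 - tx) * ty       * F[i][j + 1]
         + tx       * ty       * F[i + 1][j + 1];
}
```

A quick sanity check: the routine reproduces any function of the form a+bx+cy+dxy exactly, so tabulating F(x,y)=x+y and interpolating at a cell midpoint must return the exact value.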
Generalized eigenvalue problem
A generalized eigenvalue problem is the problem of finding all generalized eigenvectors x and the corresponding generalized eigenvalues λ which obey A x = λ B x, where A and B are given matrices. Implement a routine for solving the generalized eigenvalue problem.
Jacobi eigenvalue algorithm: several smallest (largest) eigenvalues.
Implement the variant of the Jacobi eigenvalue algorithm which calculates a given number n of the smallest (largest) eigenvalues and the corresponding eigenvectors by consecutively zeroing the off-diagonal elements in the first (last) n rows of a real symmetric matrix.
Hint: see the exercise "matrix diagonalization", section B.
Tridiagonalization of a symmetric matrix: Jacobi (Givens) rotations
Consider a Jacobi rotation with the matrix J(p,q),
A→J(p,q)TAJ(p,q)
where the rotation angle is chosen not to zero the element Ap,q but rather to zero the element Ap-1,q. Argue that the following sequence of such Jacobi rotations,
J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)
reduces a symmetric matrix to a tridiagonal form. Implement this algorithm. Remember to accumulate the total transformation matrix.
Jacobi diagonalization with classic sweeps and indexing
Implement the classic Jacobi diagonalization algorithm:
A possible improvement -- indexing:
Inverse iteration algorithm for eigenvalues (and eigenvectors)
Well, just implement it.
See the chapter "Power iteration methods and Krylov subspaces" in the book.
Lanczos tridiagonalization algorithm
Implement the Lanczos algorithm (Lanczos iteration) for real symmetric matrices.
Possible extension: Arnoldi iteration and GMRES.
Symmetric rank-1 update of a size-n symmetric eigenvalue problem
The matrix A to diagonalize is given in the form
A = D + uuᵀ ,
where D is a diagonal matrix and u is a column-vector.
Given the diagonal elements of the matrix D and the elements of the vector u, find the eigenvalues of the matrix A using only O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
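One possible O(n²) approach, sketched here under the assumptions that the dk are sorted in ascending order, distinct, and all uk are nonzero: the eigenvalues are the roots of the secular equation 1 + Σk uk²/(dk−λ) = 0, they interlace the dk, and each root can be found by bisection in its bracketing interval:

```c
#include <assert.h>
#include <math.h>

/* Secular function f(lambda) = 1 + sum_k u_k^2/(d_k - lambda);
   the eigenvalues of A = D + u u^T are its roots. */
static double secular(int n, const double* d, const double* u, double lam) {
    double s = 1.0;
    for (int k = 0; k < n; k++) s += u[k] * u[k] / (d[k] - lam);
    return s;
}

/* Eigenvalues of A = D + u u^T: the i-th root lies in (d_i, d_{i+1}),
   the last one in (d_n, d_n + u^T u + 1). Assumes sorted distinct d,
   nonzero u. A fixed number of bisections keeps each root O(n). */
void rank1_update_eigenvalues(int n, const double* d, const double* u,
                              double* lambda) {
    double uu = 0.0;
    for (int k = 0; k < n; k++) uu += u[k] * u[k];
    for (int i = 0; i < n; i++) {
        double lo = d[i];
        double hi = (i < n - 1) ? d[i + 1] : d[n - 1] + uu + 1.0;
        for (int it = 0; it < 200; it++) {   /* f is increasing on (lo,hi) */
            double mid = 0.5 * (lo + hi);
            if (secular(n, d, u, mid) < 0) lo = mid; else hi = mid;
        }
        lambda[i] = 0.5 * (lo + hi);
    }
}
```

For example, D=diag(1,2) and u=(1,1) give A=[[2,1],[1,3]] with eigenvalues (5±√5)/2.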
Symmetric row/column update of a size-n symmetric eigenvalue problem
The matrix to diagonalize is given in the form
A = D + e(p) uᵀ + u e(p)ᵀ
where D is a diagonal matrix with diagonal elements {dk, k=1,...,n}, u is a given column-vector, and the vector e(p) with components e(p)i=δip is a unit vector in the direction p where 1≤p≤n.
Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n²) operations (see the section "Eigenvalues of updated matrix" in the book).
ODE: a two-step method
Implement a two-step method for solving ODE (as in the book).
ODE: a more advanced step-size control
In the homework we used the following simple condition to accept the trial step:
Implement an ODE driver that accepts the step if the following, more refined, set of conditions is satisfied,
When estimating the next step-size, instead of the factor (τ/‖δy‖) use the factor min_k |τk/δyk|.
See the chapter "ODE → Adaptive step-size control" in the book.
A possible extension: check whether this works better than our "simple condition" on a system where one of the components is much smaller than the other. For example, the predator-prey model with parameters chosen such that the number of predators is much smaller than the number of prey.
ODE: scaling of the errors
Investigate whether the errors on individual steps are indeed uncorrelated and scale as a square root:
Implement an ODE driver which calculates the step tolerance as
instead of more optimistic tolerance
which we used for the homeworks.
Investigate which driver provides a better estimate of the integration error by integrating an ODE with known solution and comparing the error estimated by the driver with the actual error.
ODE: Bogacki-Shampine stepper (with FSAL property)
Implement the (embedded) Bogacki-Shampine stepper (make sure that you utilise its FSAL property).
See also the ODE chapter in the book.
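A scalar sketch using the published Bogacki-Shampine tableau (the coefficients below are the standard ones, not taken from this text). The last stage k4 is f evaluated at the accepted point, so it is handed back to the caller for reuse as k1 of the next step, which is the FSAL property:

```c
#include <assert.h>
#include <math.h>

/* One Bogacki-Shampine embedded 3(2) step for a scalar ODE y' = f(t,y).
   On entry *k1 must hold f(t,y) (from the previous step, FSAL);
   on return *k1 holds f(t+h, y_new), ready for the next step.
   Returns the new y; *err receives the embedded error estimate. */
double bs23_step(double f(double, double), double t, double y, double h,
                 double* k1, double* err) {
    double k2 = f(t + h / 2, y + h / 2 * (*k1));
    double k3 = f(t + 3 * h / 4, y + 3 * h / 4 * k2);
    double ynew = y + h * (2.0 / 9 * (*k1) + 1.0 / 3 * k2 + 4.0 / 9 * k3);
    double k4 = f(t + h, ynew);            /* first stage of the next step */
    double ylow = y + h * (7.0 / 24 * (*k1) + 1.0 / 4 * k2
                         + 1.0 / 3 * k3 + 1.0 / 8 * k4);
    *err = ynew - ylow;                    /* third minus second order */
    *k1 = k4;                              /* FSAL hand-over */
    return ynew;
}

/* example right-hand side: y' = y, exact solution exp(t) */
static double expo_rhs(double t, double y) { (void)t; return y; }
```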
Quasi-Newton's minimization with DFP update.
Quasi-Newton's minimization with BFGS update.
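Both updates are standard textbook formulas (they are not reproduced in this text). For reference, a sketch of the BFGS inverse-Hessian update H ← (I − ρsyᵀ)H(I − ρysᵀ) + ρssᵀ with ρ = 1/(yᵀs), where s is the step and y the change in gradient; the DFP update differs only in the rank-two correction. The fixed-size buffer and the assumption yᵀs > 0 are simplifications for the sketch:

```c
#include <assert.h>
#include <math.h>

/* BFGS update of a symmetric inverse-Hessian approximation H (row pointers).
   Expanded form: H += -rho (s (Hy)^T + (Hy) s^T)
                     + rho^2 (y^T H y) s s^T + rho s s^T.
   Both BFGS and DFP enforce the secant condition H y = s after the update. */
void bfgs_update(int n, double** H, const double* s, const double* y) {
    double sy = 0;
    for (int i = 0; i < n; i++) sy += s[i] * y[i];
    double rho = 1.0 / sy;          /* assumes curvature condition y^T s > 0 */
    double Hy[64], yHy = 0;         /* fixed buffer: sketch assumes n <= 64 */
    for (int i = 0; i < n; i++) {
        Hy[i] = 0;
        for (int j = 0; j < n; j++) Hy[i] += H[i][j] * y[j];
        yHy += y[i] * Hy[i];
    }
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            H[i][j] += -rho * (s[i] * Hy[j] + Hy[i] * s[j])
                     + rho * rho * yHy * s[i] * s[j]
                     + rho * s[i] * s[j];
}
```

A convenient unit test is the secant condition itself: after any update, H y must equal s exactly (up to rounding).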
ODE with complex-valued functions of complex variable
Generalize the ODE solver of your choice to solve ODE with complex-valued functions of complex variable.
Adaptive integration of complex-valued functions
Implement an adaptive integrator which calculates the integral of a complex-valued function f(z) of a complex variable z along a straight line between two points in the complex plane.
Adaptive integrator with subdivision into three subintervals.
Implement a (one-dimensional) adaptive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals.
Adaptive numerical integration: error scaling
Investigate whether the errors on sub-intervals in the adaptive numerical integration are indeed uncorrelated.
Implement an adaptive integrator which under recursion rescales the absolute accuracy goal with factor 2 (rather than √2).
Investigate which algorithm (which factor 2 or with factor √2) provides a better estimate of the error by computing some integrals with known values and comparing the errors reported by the integrator with the actual errors.
Clenshaw-Curtis variable transformation quadrature
Implement an adaptive integrator which employs Clenshaw-Curtis variable substitution algorithm. Check whether it is more effective than your ordinary adaptive integrator.
Two-dimensional integrator
Implement a two-dimensional integrator for integrals in the form
∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y)
which consecutively applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like
double integ2D(double a, double b, double d(double x), double u(double x),
double f(double x, double y), double acc, double eps)
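A possible sketch along these lines, simplified to an absolute accuracy goal only (the eps parameter of the signature above is dropped), with adaptive Simpson standing in for "your favourite" 1D integrator and file-scope state passing the outer variable to the inner integrand; a reentrant version would pass a context pointer instead:

```c
#include <assert.h>
#include <math.h>

/* Recursive adaptive Simpson integrator (1D), reusing function values. */
static double simpson(double f(double), double a, double b,
                      double fa, double fm, double fb, double acc) {
    double m = (a + b) / 2;
    double fl = f((a + m) / 2), fr = f((m + b) / 2);
    double coarse = (b - a) / 6 * (fa + 4 * fm + fb);
    double fine = (b - a) / 12 * (fa + 4 * fl + 2 * fm + 4 * fr + fb);
    if (fabs(fine - coarse) < acc) return fine;
    return simpson(f, a, m, fa, fl, fm, acc / sqrt(2))
         + simpson(f, m, b, fm, fr, fb, acc / sqrt(2));
}

static double integ1D(double f(double), double a, double b, double acc) {
    return simpson(f, a, b, f(a), f((a + b) / 2), f(b), acc);
}

/* file-scope state for the nested integration (single-threaded sketch) */
static double (*F2)(double, double);
static double (*dfun)(double), (*ufun)(double);
static double xcur, acc_inner;

static double inner(double y) { return F2(xcur, y); }
static double outer(double x) {
    xcur = x;
    return integ1D(inner, dfun(x), ufun(x), acc_inner);
}

double integ2D(double a, double b, double d(double), double u(double),
               double f(double, double), double acc) {
    F2 = f; dfun = d; ufun = u;
    acc_inner = acc / (b - a);   /* crude heuristic split of the accuracy */
    return integ1D(outer, a, b, acc);
}

/* example: area of the unit triangle 0 <= y <= x <= 1 equals 1/2 */
static double one_f(double x, double y) { (void)x; (void)y; return 1.0; }
static double lo0(double x) { (void)x; return 0.0; }
static double hix(double x) { return x; }
```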
Multidimensional pseudo-random adaptive recursive integrator in a rectangular volume.
The integrator should be multi-dimensional but, for simplicity, non-stratified: simply subdivide the integration volume in all dimensions.
Do not waste points.
Check how far in the number of dimensions you can go before you overwhelm the computer (most probably running out of stack, but in any case up to 100M allocated memory and up to 5min "nice" run time).
Multidimensional pseudo-random (plain Monte Carlo) vs quasi-random (Halton and/or lattice sequence) integrators
Investigate the convergence rates (of some interesting integrals in different dimensions) as function of the number of sample points.
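A sketch of the quasi-random half of the comparison, using the Halton sequence (the plain Monte Carlo counterpart simply replaces halton() with a pseudo-random point); the integrand xy is a hypothetical test case with known value 1/4 over the unit square:

```c
#include <assert.h>
#include <math.h>

/* n-th element of the van der Corput sequence in the given base. */
static double corput(int n, int base) {
    double q = 0, bk = 1.0 / base;
    while (n > 0) { q += (n % base) * bk; n /= base; bk /= base; }
    return q;
}

/* n-th point of the Halton sequence in dim dimensions (dim <= 6 here);
   n+1 skips the all-zero point. */
static void halton(int n, int dim, double* x) {
    static const int primes[] = {2, 3, 5, 7, 11, 13};
    for (int i = 0; i < dim; i++) x[i] = corput(n + 1, primes[i]);
}

/* Quasi-random Monte Carlo estimate of the integral of f over [0,1]^dim. */
double qmc_integrate(double f(int, const double*), int dim, int N) {
    double sum = 0, x[6];
    for (int n = 0; n < N; n++) { halton(n, dim, x); sum += f(dim, x); }
    return sum / N;
}

/* example integrand: f(x,y) = x*y, integral over [0,1]^2 equals 1/4 */
static double xy(int dim, const double* x) { (void)dim; return x[0] * x[1]; }
```

To study the convergence rate, run both integrators for a geometric range of N and plot the error against N on a log-log scale.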
One-sided Jacobi Singular Value Decomposition (SVD) algorithm
Implement the one-sided Jacobi SVD algorithm as described in the "Eigenvalues" chapter of the book.
Here is a short summary:
The SVD of a (real square, for simplicity) matrix A is a representation of the matrix in the form A = U D Vᵀ, where U and V are orthogonal matrices and D is a diagonal matrix with non-negative diagonal elements.
In the cyclic Jacobi eigenvalue algorithm for symmetric matrices we applied the elementary Jacobi transformation A → JᵀAJ, where J ≐ J(p,q) is the Jacobi rotation matrix with the angle chosen to eliminate the off-diagonal elements Apq=Aqp, to all upper off-diagonal matrix elements in cyclic sweeps until the matrix becomes diagonal.
The one-sided Jacobi SVD algorithm for general real square matrices is the same as the Jacobi eigenvalue algorithm, except that the elementary transformation is slightly different, A → A J(p,q), where the angle of the Jacobi matrix J ≐ J(p,q) is chosen to eliminate the off-diagonal elements A'pq=A'qp of the matrix A' ≐ AᵀA.
The matrices U and V are updated after each transformation as
Of course you should not multiply Jacobi matrices, which costs O(n³) operations, but rather perform updates which only cost O(n) operations.
Rootfinding: 1D complex vs. 2D real
Implement a (quasi) Newton method for rootfinding for a complex function f of a complex variable z, that is, for solving f(z) = 0.
Hints: do it the ordinary way, just with complex numbers:
Compare the effectiveness of your complex implementation with your homework multi-dimensional implementation of real rootfinding applied to the equivalent 2D system of two real equations with two real variables x and y, Re f(x+iy) = 0 and Im f(x+iy) = 0.
∫_0^1 exp(x) dx
∫_0^1 ln(x) dx
∫_0^1 x^(1/2) dx
∫_0^1 x^(3/2) dx
∫_0^1 x^(-1/2) dx
∫_0^1 (x>0.3 ? 1 : 0) dx
∫_0^3 int(exp(x)) dx