Yet another cubic sub-spline
Implement a (sub-)spline of a data set {x_i, y_i | i=1,...,n; p_1, p_n} --- where the user also supplies the derivatives p_1 and p_n at the end-points x_1 and x_n --- using the following algorithm:
Check whether it makes smaller wiggles than quadratic and cubic splines.
Now, if the user supplies NAN (from math.h) for one or both of the end-point derivatives, you have to estimate p_1 and/or p_n from the same cubic interpolating polynomial that you used to determine, correspondingly, p_2 and/or p_{n-1}.
Cubic sub-spline for data with derivatives
Build a cubic (sub-)spline,
S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3 ,
for a data set, {x_i, y_i, y'_i | i=1,...,n}, where the function is tabulated together with its derivative (see the subsection 'Akima sub-spline interpolation' for the inspiration) -- for example, as a result of a numerical integration of a second-order ordinary differential equation.
Now make the second derivative of the spline continuous by increasing the order of the spline to four, for example, in the form
S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3 + e_i(x-x_i)^2(x-x_{i+1})^2 .
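As a starting point, the cubic part (before the quartic e_i correction) is just piecewise Hermite interpolation: on each sub-interval the coefficients follow from matching y and y' at both ends. A minimal sketch in C (the function name `subspline_eval` is ours, not part of the exercise):

```c
// Evaluate the cubic sub-spline S_i(x) = y_i + b_i(x-x_i) + c_i(x-x_i)^2 + d_i(x-x_i)^3
// on the sub-interval containing z. With h = x_{i+1}-x_i and D = (y_{i+1}-y_i)/h,
// matching y and y' at both ends gives b_i = y'_i, c_i = (3D-2y'_i-y'_{i+1})/h,
// d_i = (y'_i+y'_{i+1}-2D)/h^2. (Sketch only: the quartic e_i correction that
// restores continuity of S'' is not included.)
double subspline_eval(int n, double* x, double* y, double* dy, double z){
	int i = 0, j = n - 1;                   // locate the sub-interval by binary search
	while(j - i > 1){ int m = (i + j)/2; if(z > x[m]) i = m; else j = m; }
	double h = x[i+1] - x[i];
	double D = (y[i+1] - y[i])/h;           // divided difference
	double b = dy[i];
	double c = (3*D - 2*dy[i] - dy[i+1])/h;
	double d = (dy[i] + dy[i+1] - 2*D)/(h*h);
	double t = z - x[i];
	return y[i] + t*(b + t*(c + t*d));
}
```

For data tabulated from a polynomial of degree up to three the sub-spline reproduces the function exactly on each sub-interval.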
Bi-linear interpolation on a rectilinear grid in two dimensions
A rectilinear grid (NB: rectilinear is not necessarily "Cartesian" or "regular") in two dimensions is a set of n_x × n_y points where each point can be addressed by a double index
(i,j | 1 ≤ i ≤ n_x, 1 ≤ j ≤ n_y)
and the coordinates of the point (i,j) are given as (x_i, y_j), where x and y are vectors with sizes n_x and n_y correspondingly. The values of the tabulated function F at the grid points can then be arranged as a matrix {F_{i,j} = F(x_i, y_j)}.
Build an interpolating routine which takes as input the vectors {x_i} and {y_j} and the matrix {F_{i,j}}, and returns the interpolated value of the function at a given 2D point p = (p_x, p_y).
The signature of the interpolating subroutine can be
double bilinear(int nx, int ny, double* x, double* y, double** F, double px,
double py)
or
double bilinear(gsl_vector* x, gsl_vector* y, gsl_matrix* F, double px,
double py)
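A minimal sketch of the first signature (assuming the coordinate vectors are sorted ascending and the query point lies inside the grid):

```c
// Bilinear interpolation on a rectilinear grid: locate the cell (i,j) that
// contains (px,py) by binary search and combine the four corner values with
// the tensor-product weights.
double bilinear(int nx, int ny, double* x, double* y, double** F,
                double px, double py){
	int i = 0, I = nx - 1;                   // binary search along x
	while(I - i > 1){ int m = (i + I)/2; if(px > x[m]) i = m; else I = m; }
	int j = 0, J = ny - 1;                   // binary search along y
	while(J - j > 1){ int m = (j + J)/2; if(py > y[m]) j = m; else J = m; }
	double tx = (px - x[i])/(x[i+1] - x[i]); // relative coordinates in the cell
	double ty = (py - y[j])/(y[j+1] - y[j]);
	return F[i][j]*(1-tx)*(1-ty) + F[i+1][j]*tx*(1-ty)
	     + F[i][j+1]*(1-tx)*ty   + F[i+1][j+1]*tx*ty;
}
```

Since the interpolant is linear along each grid line, it reproduces any function of the form a + bx + cy + dxy exactly.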
Generalized eigenvalue problem
Ax=λNx,
where A is a real symmetric matrix and N is a real symmetric positive-definite matrix (all its eigenvalues are positive).
For example, use the following algorithm:
N = V D V^T ,
where D is the diagonal matrix with the (positive) eigenvalues of N on the diagonal and V is the matrix with the corresponding eigenvectors as its columns. Now the original generalized eigenvalue problem can be represented as an ordinary eigenvalue problem (prove it),
B y = λ y ,
where the real symmetric matrix B = (√D)^{-1} V^T A V (√D)^{-1}, and y = √D V^T x .
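The "prove it" step is a short substitution (a sketch, using the orthogonality V^T V = 1 of the eigenvector matrix of N):

```latex
% substitute N = VDV^T and x = V D^{-1/2} y into Ax = \lambda Nx :
A V D^{-1/2} y = \lambda\, V D V^T V D^{-1/2} y = \lambda\, V D^{1/2} y ;
% multiply from the left by D^{-1/2} V^T and use V^T V = 1 :
\underbrace{D^{-1/2} V^T A V D^{-1/2}}_{B}\, y = \lambda\, y ,
\qquad y = D^{1/2} V^T x .
% B is symmetric because A is; D^{-1/2} exists because N is positive definite.
```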
Jacobi eigenvalue algorithm: one-by-one.
My Jacobi eigenvalue implementation (e.g. this one) seems to return eigenvalues sorted in ascending order (probably your implementation does the same thing). Explain why. What will the implementation do to a matrix which is already diagonal? With how many operations?
Make a minimal change to the code such that the implementation returns eigenvalues sorted in descending order.
Identify the sorting algorithm (e.g. from here).
Suppose you apply Jacobi rotations to all off-diagonal elements of only the first row of a (real symmetric) matrix repeatedly until the first diagonal element does not change anymore. What is the result of this procedure?
Implement the variant of the Jacobi eigenvalue algorithm which calculates a given number of the smallest eigenvalues (and eigenvectors) by consecutively zeroing the off-diagonal elements in the rows of the matrix.
Lanczos tridiagonalization algorithm
Implement the Lanczos algorithm (Lanczos iteration) for real symmetric matrices.
Tridiagonalization of a symmetric matrix: Jacobi (Givens) rotations
Consider a Jacobi rotation with the matrix J(p,q),
A→J(p,q)TAJ(p,q)
where the rotation angle is chosen not to zero the element A_{p,q} but rather to zero the element A_{p-1,q}. Argue that the following sequence of such Jacobi rotations,
J(2,3), J(2,4), ..., J(2,n); J(3,4), ..., J(3,n); ...; J(n-1,n)
reduces a symmetric matrix to a tridiagonal form. Implement this algorithm. Remember to accumulate the total transformation matrix.
Inverse iteration algorithm for eigenvalues (and eigenvectors)
Well, just implement it (with a user-supplied shift).
Symmetric rank-1 update of a size-n symmetric eigenvalue problem
The matrix A to diagonalize is given in the form
A = D + u u^T ,
where D is a diagonal matrix and u is a column-vector.
Given the diagonal elements of the matrix D and the elements of the vector u find the eigenvalues of the matrix A using only O(n2) operations (see section "Eigenvalues of updated matrix" in the book).
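One route to the O(n²) count is the secular equation: the eigenvalues of D + uu^T are the roots of f(λ) = 1 + Σ_k u_k²/(d_k − λ). A sketch assuming distinct, ascending d_k and nonzero u_k, in which case there is exactly one root in each interval (d_k, d_{k+1}) and one in (d_n, d_n + u^T u); each root is found by bisection, so the cost is O(n²) up to the fixed number of bisection steps (function names are ours):

```c
#include <math.h>

/* The secular function f(z) = 1 + sum_k u_k^2/(d_k - z); it increases
   monotonically between consecutive poles d_k. */
static double secular(int n, double* d, double* u, double z){
	double f = 1;
	for(int k = 0; k < n; k++) f += u[k]*u[k]/(d[k] - z);
	return f;
}

/* Eigenvalues of A = D + u u^T, d sorted ascending, all u_k nonzero. */
void rank1_eigenvalues(int n, double* d, double* u, double* lambda){
	double uu = 0;
	for(int k = 0; k < n; k++) uu += u[k]*u[k];
	for(int k = 0; k < n; k++){
		double a = d[k], b = (k+1 < n ? d[k+1] : d[n-1] + uu);
		double off = 1e-12*(b - a);
		a += off; b -= off;                  /* stay off the poles */
		for(int it = 0; it < 100; it++){     /* plain bisection */
			double m = 0.5*(a + b);
			if(secular(n, d, u, m) > 0) b = m; else a = m;
		}
		lambda[k] = 0.5*(a + b);
	}
}
```

Since f → −∞ just above each pole and f is increasing on each interval, the bracketing above is guaranteed; the bound λ_max ≤ d_n + u^T u follows from the trace.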
Symmetric row/column update of a size-n symmetric eigenvalue problem
The matrix to diagonalize is given in the form
A = D + e^(p) u^T + u (e^(p))^T ,
where D is a diagonal matrix with diagonal elements {d_k | k=1,...,n}, u is a given column-vector, and the vector e^(p) with components e^(p)_i = δ_ip is the unit vector in the direction p, where 1 ≤ p ≤ n.
Given the diagonal elements of the matrix D, the vector u, and the integer p, calculate the eigenvalues of the matrix A using O(n2) operations (see section "Eigenvalues of updated matrix" in the book).
Null-space of real matrix
Let us define the null-space of a real matrix A as the set of all linearly independent non-zero solutions to the homogeneous equation
Ax=0 .
Now, this equation can be rewritten as
(A^T A) x = 0 ,
and thus the null-space of a matrix A consists of the eigenvectors of the matrix A^T A that correspond to its zero eigenvalues.
Implement a routine which finds the null-space of a given real matrix A by finding zero-eigenvalues (within machine precision) and the corresponding eigenvectors of the matrix ATA.
NB: it is probably not the most effective way to find the null-space, but it will do as an exercise.
Eigenvalues of a complex Hermitian matrix
A complex Hermitian matrix H (H^† = H) has real eigenvalues λ,
Hψ=λψ .
This complex matrix equation can be rewritten explicitly via real quantities and the imaginary unit i,
(A + iB)(x + iy) = λ(x + iy) ,
where H = A + iB, ψ = x + iy (with A^T = A, B^T = -B).
The latter can be rewritten as two real (matrix) equations,
A x - B y = λ x ,
B x + A y = λ y ,
which is an eigenvalue equation for the real symmetric matrix (twice the size of H)
[ A  -B ]
[ B   A ]
with eigenvalues λ and (twice the size of ψ) eigenvectors
[ x ]
[ y ] .
Note that each eigenvalue of H appears twice in the doubled problem: if (x, y) is an eigenvector, so is (-y, x).
The real symmetric matrix can be diagonalized by Jacobi eigenvalue algorithm.
Implement this idea.
ODE: a more advanced step-size control
Implement an ODE driver that accepts the step if the following conditions are satisfied,
|δy_k| < τ_k ≡ (δ + ε|y_k|) √(h/(b-a)) , k = 1,...,n ,
where n is the number of components of the vector-function y.
(In the homework we used a simple condition,
||δy|| < τ≡(δ+ε||y||)√(h/(b-a)).)
When estimating the next step-size, instead of the factor
τ/||δy||
use the factor
min_k |τ_k/δy_k| .
Check whether this works better than our "simple condition" on a system where one of the components is much smaller than another component. For example, the predator-prey model with the parameters chosen such that the number of predators is much smaller than the number of prey.
ODE: scaling of the errors
Investigate whether the errors on individual steps are indeed uncorrelated (and scale as the square root):
Implement an ODE driver which calculates the step-tolerance as
τ=(ε||y||+δ)(h/(b-a))
instead of more optimistic tolerance
τ=(ε||y||+δ)√(h/(b-a))
which we used for the homeworks.
Investigate which driver provides a better estimate of the integration error by integrating an ODE with known solution and comparing the error estimated by the driver with the actual error.
ODE: Bogacki-Shampine stepper (with FSAL property)
Implement the (embedded) Bogacki-Shampine stepper (make sure that you utilise its FSAL property).
ODE: two-step stepper with extra evaluation
Implement a two-step method with extra evaluation (see the corresponding chapter in the book) at the point t = x_i + αh, where 0 < α ≤ 1 is a parameter of the method.
Try to find out which value of the parameter α results in the most effective method.
Quasi-Newton's minimization with DFP update.
Quasi-Newton's minimization with BFGS update.
Adaptive integration of complex-valued functions
Implement an adaptive integrator which calculates the integral of a complex-valued function of a complex variable along a straight line in the complex plane.
Yet another adaptive integrator
Implement a (one-dimensional) adaptive integrator which at each iteration subdivides the interval not into two, but into three sub-intervals.
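A sketch using Simpson's rule as the basic rule: the whole-interval estimate is compared against the sum of Simpson estimates over the three thirds, and on recursion the absolute accuracy goal is rescaled by 1/√3 (the three-way analogue of the usual 1/√2 for bisection; function names and the example integrand are ours):

```c
#include <math.h>

static double simpson(double f(double), double a, double b){
	return (b - a)/6*(f(a) + 4*f(0.5*(a + b)) + f(b));
}

/* Adaptive integrator that subdivides each interval into three sub-intervals.
   The difference between the one-interval and three-interval Simpson
   estimates serves as the error estimate. */
double adapt3(double f(double), double a, double b, double acc, double eps){
	double h = (b - a)/3;
	double Q1 = simpson(f, a, b);
	double Q2 = simpson(f, a, a+h) + simpson(f, a+h, b-h) + simpson(f, b-h, b);
	if(fabs(Q2 - Q1) < acc + eps*fabs(Q2)) return Q2;
	return adapt3(f, a, a+h, acc/sqrt(3), eps)
	     + adapt3(f, a+h, b-h, acc/sqrt(3), eps)
	     + adapt3(f, b-h, b, acc/sqrt(3), eps);
}

/* example integrand: int_0^1 4/(1+x^2) dx = pi */
static double pi_integrand(double x){ return 4/(1 + x*x); }
```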
Adaptive numerical integration: error scaling
Investigate whether the errors on sub-intervals in the adaptive numerical integration are indeed uncorrelated.
Implement an adaptive integrator which under recursion rescales the absolute accuracy goal with factor 2 (rather than √2).
Investigate which algorithm (with factor 2 or with factor √2) provides a better estimate of the error by computing some integrals with known values and comparing the errors reported by the integrator with the actual errors.
Clenshaw-Curtis variable transformation quadrature
Implement an adaptive integrator which employs the Clenshaw-Curtis variable transformation. Check whether it is more effective than your ordinary adaptive integrator.
Two-dimensional integrator
Implement a two-dimensional integrator for integrals of the form
∫_a^b dx ∫_{d(x)}^{u(x)} dy f(x,y)
which consecutively applies your favourite adaptive one-dimensional integrator along each of the two dimensions. The signature might be something like
double integ2D(double a, double b, double d(double x), double u(double x),
double f(double x, double y), double acc, double eps)
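A sketch of the nesting with the first (plain-pointer) signature. Since standard C has no closures, file-scope variables pass the current x and the user functions to the inner integrand (so the sketch is not reentrant); the embedded 1D integrator is a simple recursive Simpson scheme, and the demo functions at the end are ours:

```c
#include <math.h>

/* simple recursive adaptive integrator (Simpson vs composite Simpson) */
static double adapt(double f(double), double a, double b, double acc){
	double Q1 = (b-a)/6*(f(a) + 4*f(0.5*(a+b)) + f(b));
	double m1 = a + (b-a)/4, m2 = b - (b-a)/4;
	double Q2 = (b-a)/12*(f(a) + 4*f(m1) + 2*f(0.5*(a+b)) + 4*f(m2) + f(b));
	if(fabs(Q2 - Q1) < acc) return Q2;
	return adapt(f, a, 0.5*(a+b), acc/sqrt(2))
	     + adapt(f, 0.5*(a+b), b, acc/sqrt(2));
}

/* file-scope state passing the current x to the inner integrand */
static double (*F2)(double, double);
static double (*Dlo)(double), (*Uhi)(double);
static double current_x, inner_acc;

static double inner(double y){ return F2(current_x, y); }
static double outer(double x){
	current_x = x;
	return adapt(inner, Dlo(x), Uhi(x), inner_acc);
}

double integ2D(double a, double b, double d(double x), double u(double x),
               double f(double x, double y), double acc, double eps){
	(void)eps;                 /* sketch: only the absolute goal is used */
	F2 = f; Dlo = d; Uhi = u; inner_acc = acc;
	return adapt(outer, a, b, acc);
}

/* example: int_0^1 dx int_0^x dy (x+y) = 1/2 */
static double d_demo(double x){ (void)x; return 0; }
static double u_demo(double x){ return x; }
static double f_demo(double x, double y){ return x + y; }
```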
Multidimensional pseudo-random adaptive recursive integrator in a rectangular volume.
The integrator should be multi-dimensional but, for simplicity, non-stratified: simply subdivide the integration volume in all dimensions.
Do not waste points.
Check how far in the number of dimensions you can go before you overwhelm the computer (most probably running out of stack, but in any case up to 100M allocated memory and up to 5min "nice" run time).
Multidimensional pseudo-random (plain Monte Carlo) vs quasi-random (Halton and/or lattice sequence) integrators
Investigate the convergence rates (of some interesting integrals in different dimensions) as function of the number of sample points.
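On the quasi-random side, 2D Halton points are built from van der Corput radical-inverse sequences in coprime bases, e.g. (corput(n,2), corput(n,3)). A sketch comparing the plain and quasi-random estimates on a simple test integral, ∫₀¹∫₀¹ xy dx dy = 1/4 (function names are ours):

```c
#include <math.h>
#include <stdlib.h>

/* Van der Corput radical inverse: reflect the base-b digits of n about the
   radix point, giving a low-discrepancy sequence in [0,1). */
double corput(int n, int base){
	double q = 0, bk = 1.0/base;
	while(n > 0){ q += (n % base)*bk; n /= base; bk /= base; }
	return q;
}

/* plain (pseudo-random) Monte Carlo estimate of int_0^1 int_0^1 x*y = 1/4 */
double mc_plain(int N, unsigned int seed){
	srand(seed);
	double sum = 0;
	for(int i = 0; i < N; i++){
		double x = rand()/(RAND_MAX + 1.0);
		double y = rand()/(RAND_MAX + 1.0);
		sum += x*y;
	}
	return sum/N;
}

/* the same integral with 2D Halton (quasi-random) points */
double mc_halton(int N){
	double sum = 0;
	for(int i = 0; i < N; i++) sum += corput(i+1, 2)*corput(i+1, 3);
	return sum/N;
}
```

Plotting the error of each estimator against N should show the O(1/√N) pseudo-random rate versus the faster quasi-random convergence.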
Multidimensional pseudo-random (plain Monte Carlo) integrator with asymptotic error estimate
Check whether this algorithm gives any improvement over plain Monte Carlo.
Try using the fitting function f(N) = c_0 + c_1/√N + c_2/N.
Multidimensional quasi-random integrator with asymptotic error estimate
(Although the integration errors with quasi-random numbers are not readily available, you can still promote the contributions from the estimates with larger samples by introducing artificial errors inversely proportional to the number of sampling points.)
The coefficient c_0 is apparently the asymptotic estimate of the integral (under the assumption that the integration error behaves as O(1/N)).
Try fitting more data-points; try using more terms in the fitting function, say f(N) = c_0 + c_1/N + c_2/N^2.
Some test integrals with known values:
∫_0^1 exp(x) dx
∫_0^1 ln(x) dx
∫_0^1 x^(1/2) dx
∫_0^1 x^(3/2) dx
∫_0^1 x^(-1/2) dx
∫_0^1 (x > 0.3 ? 1 : 0) dx
∫_0^3 int(exp(x)) dx