Clipping occurs when a detector (or an analog-to-digital converter, or an amplifier) attempts to process a signal that exceeds its dynamic range. This results in the signal being "flattened" or "clipped" at the maximum and/or minimum representable values of the detector. This distortion degrades the signal quality.
Declipping is the process of trying to reconstruct the original, unclipped signal from its clipped version.
Suppose we are given a clipped signal: the size-`N` vector `y`, in which `n` elements (with indices `m_1, m_2, ..., m_n`, where `n < N`) are clipped, that is, replaced with `y_max` if they exceeded `y_max` or with `y_min` if they fell below `y_min`.
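As a concrete illustration of this setup (the helper name `clip` and its interface are hypothetical, not part of the exercise), clipping a signal and recording the clipped indices `m_k` might look like:

```python
def clip(signal, y_min, y_max):
    """Clamp a signal to [y_min, y_max] and record which samples
    were clipped (the indices m_1, ..., m_n)."""
    y = []
    clipped_indices = []
    for i, s in enumerate(signal):
        if s > y_max:
            y.append(y_max)
            clipped_indices.append(i)
        elif s < y_min:
            y.append(y_min)
            clipped_indices.append(i)
        else:
            y.append(s)
    return y, clipped_indices
```

For example, `clip([0.1, 2.5, -3.0, 0.7], -1.0, 1.0)` returns the clamped signal `[0.1, 1.0, -1.0, 0.7]` together with the clipped indices `[1, 2]`.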
We need to find the size-`n` vector `z` of reconstructed values for the clipped samples, such that the total reconstructed vector `x` is given as

    x_i = z_k  if i = m_k for some k,
    x_i = y_i  otherwise.

That is, the reconstructed vector `x` is searched in the form of `y` with the clipped elements replaced by the unknowns `z_1, ..., z_n`.
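The assembly of `x` from `y` and `z` can be sketched as follows (a minimal illustration; the helper name `assemble` is hypothetical, and `m` is the list of clipped indices):

```python
def assemble(y, z, m):
    """Build x: equal to y everywhere except at the clipped indices
    m_k, where the reconstructed values z_k are substituted."""
    x = list(y)
    for k, mk in enumerate(m):
        x[mk] = z[k]
    return x
```

For example, `assemble([0.1, 1.0, -1.0, 0.7], [2.0, -2.0], [1, 2])` gives `[0.1, 2.0, -2.0, 0.7]`.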
You should solve this least-squares minimization problem using your own implementation of the QR-factorization method as described in the lecture note.
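A least-squares solver built on QR factorization could be sketched as below; this is a generic sketch using modified Gram-Schmidt and back-substitution, not necessarily the exact routine from the lecture note (the function names are hypothetical):

```python
import numpy as np

def qr_decomp(A):
    """QR factorization by modified Gram-Schmidt: A = Q R, with Q
    having orthonormal columns and R upper triangular."""
    A = np.array(A, dtype=float)
    n, m = A.shape
    Q = A.copy()
    R = np.zeros((m, m))
    for i in range(m):
        R[i, i] = np.sqrt(Q[:, i] @ Q[:, i])   # normalize column i
        Q[:, i] /= R[i, i]
        for j in range(i + 1, m):              # orthogonalize the rest
            R[i, j] = Q[:, i] @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R

def qr_solve(A, b):
    """Least-squares solution of A z ≈ b: since A = Q R, solve the
    triangular system R z = Q^T b by back-substitution."""
    Q, R = qr_decomp(A)
    c = Q.T @ np.array(b, dtype=float)
    m = R.shape[0]
    z = np.zeros(m)
    for i in range(m - 1, -1, -1):
        z[i] = (c[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i]
    return z
```

For instance, `qr_solve([[1, 0], [0, 1], [1, 1]], [1, 2, 3])` returns the least-squares solution `[1.0, 2.0]`.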
Implement the function

    vector declip(vector y, double y_min, double y_max)

that takes a clipped signal `y` and the clipping limits `y_min` and `y_max`, and returns the reconstructed signal `x`.
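Putting the pieces together, a sketch of `declip` might look like the following (written in Python rather than the exercise's own language). Two assumptions are made here: samples lying exactly at a limit are flagged as clipped, and the objective minimized is the sum of squared second differences of `x` (a common smoothness prior) — substitute the objective from the lecture note if it differs. NumPy's least-squares routine also stands in for your own QR solver.

```python
import numpy as np

def declip(y, y_min, y_max):
    """Reconstruct clipped samples of y by least squares.
    ASSUMPTION: minimizes the squared second differences of x."""
    N = len(y)
    # Treat samples at or beyond a limit as clipped (an approximation:
    # a genuine sample exactly at the limit is also flagged).
    m = [i for i in range(N) if y[i] >= y_max or y[i] <= y_min]
    if not m:
        return list(y)
    col = {mi: k for k, mi in enumerate(m)}
    # Each row i demands x[i-1] - 2*x[i] + x[i+1] ≈ 0; known samples
    # of y move to the right-hand side b, unknowns z stay in A.
    A = np.zeros((N - 2, len(m)))
    b = np.zeros(N - 2)
    for i in range(1, N - 1):
        for j, c in ((i - 1, 1.0), (i, -2.0), (i + 1, 1.0)):
            if j in col:
                A[i - 1, col[j]] = c
            else:
                b[i - 1] -= c * y[j]
    # Stand-in for your own QR routine: numpy's least-squares solver.
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    x = list(y)
    for k, mk in enumerate(m):
        x[mk] = z[k]
    return x
```

Under these assumptions, `declip([0, 1, 2, 2, 2, 1, 0], -10.0, 2.0)` flags indices 2, 3, 4 as clipped and reconstructs them as `1.6, 1.8, 1.6`; note that a pure smoothness objective can pull reconstructed values below the clipping level, since no constraint `z_k >= y_max` is imposed.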