Algorithms


Modulo samples encode the original input signal, and extracting this information requires an algorithmic decoding process. A variety of recovery algorithms have been developed, tailored to the specific sampling architecture and desired output, whether full signal reconstruction (unfolding) or extraction of certain parameters. Most of the existing algorithms leverage the modulo decomposition property.

Modular Decomposition Property

Let \(g \in \mathcal{B}_\Omega\) and let \(\lambda > 0\) be a fixed constant. Then, the bandlimited function \(g(t)\) admits the decomposition

\[g(t) = \mathscr{M}_\lambda (g(t)) + \varepsilon_g(t),\]

where \(\varepsilon_g\) is a simple (piecewise constant) function, referred to as the residual.

Notice that if the modulo part of a signal is known, then knowing its residual \(\varepsilon_g\) is equivalent to knowing \(g\). Moreover, the piecewise constant structure of the residual can be used as a prior for its recovery.


Fig. 20 The Modulo Decomposition Property
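
The decomposition is easy to verify numerically. Below is a minimal Python sketch (the centered-modulo helper `mod_c` is ours) that folds a bandlimited mixture and checks that the residual is piecewise constant with values in \(2\lambda{\mathbb Z}\):

```python
import numpy as np

def mod_c(x, lam):
    """Centered modulo operator M_lambda, mapping values into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

lam = 0.5
t = np.linspace(0, 1, 1000)
g = np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

y = mod_c(g, lam)   # modulo part M_lambda(g(t))
eps = g - y         # residual eps_g(t) = g(t) - M_lambda(g(t))

# The residual is a simple, piecewise constant function with values in 2*lam*Z.
assert np.allclose(eps, 2 * lam * np.round(eps / (2 * lam)))
```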

US-ALG

This algorithm was the first to appear for decoding modulo samples produced by the ideal \(\mathscr{M}-\mathsf{ADC}\) architecture (so that \(\varepsilon(t) \in 2\lambda{\mathbb{Z}}\) for all \(t\)). It was presented in [10], [1] and is guaranteed to recover a bandlimited signal from its modulo samples whenever the sampling rate satisfies the Unlimited Sensing Theorem. Notably, the sampling rate does not depend on the modulo threshold. The algorithmic procedure leverages the properties of the two signals that make up the modulo signal:

  1. The smoothness of the bandlimited signal is used to separate it from the residual in the higher-order-differences space.

  2. The restriction of the residual amplitudes to the range \(2\lambda{\mathbb Z}\) is used to find the constants of integration, and to form a stable inversion process.

The key algorithmic steps are described below. For full details, see [10], [1].

US-Alg

  1. Compute \(\Delta^{N}y\), the \(N^{\mathrm{th}}\) order finite differences of the modulo sequence \(y[k]\).

  2. Apply \(\mathscr{M}_{\lambda}(\Delta^{N}y)\). Since \(\mathscr{M}_{\lambda}(\Delta^{N}\epsilon)=0\), we get \(\mathscr{M}_{\lambda}(\Delta^{N}y) = \mathscr{M}_{\lambda}(\Delta^{N}g)\). For sampling rates that satisfy the Unlimited Sensing Theorem, that is, \(T \leq 1 / (\Omega e)\), the maximal amplitude of the sequence \(g[k]\) shrinks with the finite-difference order such that \(||\Delta^{N}g||_{\infty} \leq (T \Omega e)^{N}||g||_{\infty}\), where \(||\cdot||_{\infty}\) denotes the max-norm. If

\[N = \left\lceil \frac{\log \lambda - \log \beta_g}{\log (T \Omega e)} \right\rceil, \quad \beta_g \in 2\lambda \mathbb{Z} \text{ and } \beta_g \geq \| g \|_\infty,\]

then \(\| \Delta^{N}g \|_{\infty} \leq \lambda\). This implies \(\mathscr{M}_{\lambda}(\Delta^{N}y) = \Delta^{N}g\).
  3. Compute \(\Delta^{N}r[k] = \mathscr{M}_{\lambda}(\Delta^{N}y) - \Delta^{N}y\), the \(N^{\mathrm{th}}\) order finite differences of the residual sequence \(r[k] = \epsilon(kT)\).

  4. Set \(s_{(0)}[k] = \left( \Delta^{N}r \right)[k]\). For \(n = 0 : N - 2\):
  5. \(s_{(n+1)}[k] = \mathsf{S}(s_{(n)})[k]\), where \(\mathsf{S} : \{s[k]\}_{k \in \mathbb{Z}^{+}} \rightarrow \sum_{m=1}^k s[m]\).

  6. \(s_{(n+1)}[k] = 2\lambda \left\lfloor \frac{s_{(n+1)} / \lambda}{2} \right\rfloor\) (rounding to \(2\lambda \mathbb{Z}\)).

  7. Compute \(\kappa_{(n)} = \left\lfloor \frac{\mathsf{S}^2(\Delta^{n}r)[1] - \mathsf{S}^2(\Delta^{n}r)\left[ \frac{6\beta_{g}}{\lambda} + 1 \right]}{12\beta_{g}} + \frac{1}{2} \right\rfloor\).

  8. \(s_{(n+1)}[k] = s_{(n+1)}[k] + 2\lambda\kappa_{(n)}\). End For.
  9. \(\widetilde{\gamma}[k] = \mathsf{S}(s_{(N-1)})[k] + y[k] + 2m\lambda\), \(m \in \mathbb{Z}\)


Fig. 21 US-Alg

Code

Online Python Playground US-ALG
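
For readers without access to the playground, the following Python sketch mirrors the steps above. It is illustrative only: the helper `mod_c`, the rounding convention in Step 6, and the boundary/alignment handling are our simplifications of the method in [10], [1].

```python
import numpy as np

def mod_c(x, lam):
    """Centered modulo operator M_lambda, mapping values into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

def us_alg(y, lam, N, beta_g):
    """Illustrative US-Alg sketch: unfold modulo samples y = M_lambda(g(kT)).

    lam    : modulo threshold lambda
    N      : finite-difference order (sampling rate T <= 1/(Omega*e) assumed)
    beta_g : bound on ||g||_inf with beta_g in 2*lam*Z
    """
    dNy = np.diff(y, n=N)              # Step 1: Delta^N y
    dNg = mod_c(dNy, lam)              # Step 2: M_lambda(Delta^N y) = Delta^N g
    s = dNg - dNy                      # Step 3: Delta^N r, values in 2*lam*Z
    J = int(6 * beta_g / lam)          # offset used in the kappa estimate
    for _ in range(N - 1):             # Steps 4-8
        s2 = np.cumsum(np.cumsum(s))   # S^2 of the current difference sequence
        s = np.cumsum(s)               # Step 5: running-sum operator S
        s = 2 * lam * np.floor(s / (2 * lam) + 0.5)      # Step 6: snap to 2*lam*Z
        kappa = np.floor((s2[0] - s2[min(J, s2.size - 1)])
                         / (12 * beta_g) + 0.5)          # Step 7
        s = s + 2 * lam * kappa        # Step 8: fix the constant of integration
    r = np.cumsum(s)                   # Step 9: residual samples
    return y[N:] + r                   # unfolded samples, up to a 2*m*lam offset
```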

Hardware Validation

The input \(g\) consists of a sinusoidal mixture: a weak component of \(0.3\) V at \(1\) kHz and a strong component of \(2\) V at \(8\) kHz. The modulo ADC (MADC) based on [1], [6], [11], [12] is deployed with a \(5.40\)-fold dynamic range extension, a sampling rate of \(100\) kHz and a quantization resolution of \(6.7\) bits.

In this scenario, the MADC measurements (\(500\) samples) entail numerous modulo folds (the residual \(\varepsilon\) has \(330\) degrees of freedom). Despite these challenging settings, US-ALG with fourth-order differences (\(N = 4\)) achieves signal recovery with a mean-squared error on the order of \(10^{-3}\).

_images/8K_USAlg.png

Fig. 22 Hardware Validation: Signal Reconstruction using US-ALG with Fourth-order difference.

MATLAB Live Demo for 8 kHz Experiment

Download the MATLAB Code for 8 kHz Hardware Experiment Reproduction

Generalized Modulo Model

The key idea in reconstructing the original samples from measurements encoded via \(\mathscr{M}_\mathsf{H}\) is to convolve the modulo samples \(y[n]\) with the filter \(\psi_N[k] \triangleq \Delta^N[k + 1]\) and to identify the residual by thresholding.

\[\left ( \psi_N \ast y \right )[n] = \left ( \psi_N \ast \gamma \right ) [n] - \left ( \psi_N \ast \varepsilon_\gamma \right ) [n]\]

Theorem - Estimation of Folding Times

Assume that \(\| \psi_N \ast \gamma \|_\infty < \lambda_h / (2N)\) and let \(k_m, k_M \in \mathbb{Z}\) be defined as in (8). Furthermore, let \(\tilde{n}_1\) and \(\tilde{s}_1\) be the estimates of the discrete folding time and sign defined as

(4)\[\tilde{n}_1 = k_M - 1,\]
(5)\[\tilde{s}_1 = - \mathsf{sign} \left ( \left ( \psi_N \ast y \right ) [k_m] \right )\]
  1. If \(k_M - k_m = N\), consider the estimated folding time

(6)\[\tilde{\tau}_1 \triangleq \tilde{n}_1 T - \alpha \frac{ \left ( \psi_N \ast y \right )[\tilde{n}_1]}{2 \lambda_h \tilde{s}_1 N}\]

Then the following hold

(7)\[\tilde{n}_1 = n_1,\;\tilde{s}_1 = s_1,\;\left | \tilde{\tau}_1 - \tau_1 \right | < \frac{\alpha}{4 N^2}.\]

The folding parameters \(\tilde{\tau}_p\) and \(\tilde{s}_p\) can be computed sequentially using the previous theorem, provided that the supports of any two filtered folds are disjoint. This requires a separation of the folding times \(n_{p+1} - n_p \geq N + 1\), which is guaranteed if

\[\frac{h^*}{T\Omega \| g \|_\infty} \geq N + 1\]

Let \(p\) denote the folding index and let \(\mathcal{K}_N = \bigcup_{p \in [1, P]} \left\{ k_m^p, k_M^p \right\}\), where the pairs \(\left \{ k_m^p, k_M^p \right \}\) are computed sequentially as follows. For \(p = 1\),

(8)\[\begin{split}k_m^1 &= \mathsf{min} \left \{ k\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2N} \right \}, \\\\ k_M^1 &= \mathsf{max} \left \{ k \leq k_m^1 + N\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2N} \right \}.\end{split}\]

For \(p = 2, ..., P\)

(9)\[\begin{split}k_m^p &= \mathsf{min} \left \{ k > k_M^{p-1}\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2 N} \right \},\\\\ k_M^p &= \mathsf{max} \left \{ k \leq k_m^p + N\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2N} \right \}.\end{split}\]

Reconstruction via Thresholding

Input. \(y[n], \psi_N[n], \lambda, h, \alpha, \Omega > 0\), Output. \(\tilde{\gamma}[n]\)

  1. Compute \(\left ( \psi_N \ast y \right ) [n]\).

  2. Compute \(P, \left \{ k_m^p, k_M^p \right \}_{p = 1}^P\) with (8) and (9).

  3. For each fold \(p=1, ..., P\)
    1. If \(k_M^p - k_m^p = N\), compute \(\tilde{n}_p, \tilde{s}_p\) with (4), (5) and \(\tilde{\tau}_p\) with (6).

    2. If \(k_M^p - k_m^p = N - 1\), compute \(\tilde{n}_p, \tilde{s}_p\) with (4), (5) and \(\tilde{\tau}_p = \tilde{n}_pT\).

  4. Compute \(\tilde{\varepsilon}_\gamma[n] = \sum_{p=1}^P \tilde{s}_p \varepsilon_0 \left ( nT - \tilde{\tau}_p \right )\), where \(\varepsilon_0(t) = 2 \lambda_h \left [\frac{1}{\alpha}t \cdot {\mathbb I}_{[0, \alpha[} (t) + {\mathbb I}_{[\alpha, \infty[} (t) \right ]\).

  5. Compute the reconstruction \(\tilde{\gamma}[n] = y[n] + \tilde{\varepsilon}_\gamma[n]\).
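
A minimal Python sketch of this procedure is given below, assuming the idealized setting of the theorem (well-separated folds, noiseless samples). We use \(\Delta^N y\) in place of the explicit convolution \((\psi_N \ast y)[n]\), and all helper names are ours.

```python
import numpy as np

def eps0(t, lam_h, alpha):
    """Fold profile eps_0(t): linear ramp on [0, alpha), then constant 2*lam_h."""
    t = np.asarray(t, dtype=float)
    return 2 * lam_h * ((t / alpha) * ((t >= 0) & (t < alpha)) + (t >= alpha))

def threshold_recon(y, N, lam_h, alpha, T):
    """Illustrative sketch of 'Reconstruction via Thresholding'."""
    fy = np.diff(y, n=N)                          # stands in for (psi_N * y)[n]
    thr = lam_h / (2 * N)
    hits = np.flatnonzero(np.abs(fy) >= thr)      # samples marked by a fold
    folds = []                                    # estimated (n_p, s_p, tau_p)
    i = 0
    while i < hits.size:
        k_m = hits[i]
        grp = hits[(hits >= k_m) & (hits <= k_m + N)]
        k_M = grp[-1]
        n_p = k_M - 1                             # eq. (4)
        s_p = -np.sign(fy[k_m])                   # eq. (5)
        if k_M - k_m == N:                        # eq. (6): sub-sample estimate
            tau_p = n_p * T - alpha * fy[n_p] / (2 * lam_h * s_p * N)
        else:                                     # k_M - k_m = N - 1 case
            tau_p = n_p * T
        folds.append((n_p, s_p, tau_p))
        i = np.searchsorted(hits, k_M, side='right')   # move on to the next fold
    n = np.arange(len(y))
    eps = sum(s * eps0(n * T - tau, lam_h, alpha) for _, s, tau in folds)
    return y + eps                                # gamma~[n] = y[n] + eps~[n]
```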

Unlimited Sampling with Local Averages

To estimate the residual \(\varepsilon[k]\) from \(y[k]\), we assume a smoothness property on \(g(t_k)\), i.e., \(||\Delta^N g(t_k) ||_\infty < \frac{\lambda_h}{2 N}\). Far away from the folding instants, this condition also holds for \(y[k]\).

Estimation of Folding Times.

Assume \(||\Delta^N g(t_k)||_\infty < \lambda_h / (2N)\) and let \(k_m = \mathsf{min}\, {\mathbb M}_N\). Furthermore, let \(\tilde{\tau}_1 = \tilde{n}_1 T - 2 \nu \tilde{\beta}_1 + \nu\) and \(\tilde{s}_1 = -\mathsf{sign}(\Delta^N y[k_m])\) be the estimates of the folding time and sign, with \(\tilde{n}_1 = k_m + N\) and \(\tilde{\beta}_1\) given by

\[\begin{split}\begin{cases} |\Delta^N y[k_m]| \leq 2 \lambda_h - \frac{\lambda_h}{2N}\quad\text{then}\;\tilde{\beta}_1 \rightarrow \frac{\Delta^N y[k_m]}{2 \lambda_h \tilde{s}_1}\;\\\\ |\Delta^N y[k_m]| > 2 \lambda_h - \frac{\lambda_h}{2N}\quad\text{then}\;\tilde{\beta}_1 \rightarrow 1 \end{cases}.\end{split}\]

Then \(\tilde{s}_1 = s_1, \tilde{n}_1 \in \{ n_1 , n_1 + 1 \}\), and the following holds

\[\begin{split}k_m = \begin{cases} n_1 - N & \tilde{n}_1 = n_1, | \tilde{\tau}_1 - \tau_1 | \leq \frac{\nu}{2N} \\\\ n_1 - N + 1\;&\tilde{n}_1 = n_1 + 1, | \tilde{\tau}_1 - \tau_1 | \leq T + \nu \left ( \frac{3}{4N} - \frac{1}{2} \right ) \\\\ \end{cases}.\end{split}\]

Sufficient Recovery Conditions

The folding times \(\tau_r\) and signs \(s_r\) can be recovered from average sampling measurements \(y[k]\) if \(\nu \leq T/2\) and the following hold

\[\left ( T \Omega e \right ) ^ N || g ||_\infty + \nu 2^N ||g||_\infty < \frac{\lambda_h}{2N}\]
\[T\Omega || g ||_\infty \left ( N + 1 \right ) \leq \mathsf{min} \{ h, 2 \lambda_h \}.\]

Recovery algorithms for Unlimited Sampling with local averages were considered in [7].

To estimate folding locations and signs \(\left \{ \tau_r, s_r \right \}\), we define a set \(\{ k_m^r \}_{r=1}^R\)

(10)\[k_m^1 = \mathsf{min} \left \{ k\;| \left | y^{[N]}[k] \right | \geq \frac{\lambda_h}{2N} \right \}\]
(11)\[k_m^r = \mathsf{min} \left \{ k > k_m^{r-1} + N\;| \left | y^{[N]}[k] \right | \geq \frac{\lambda_h}{2N} \right \}\]

Input Reconstruction. The input signal is recovered from \(\mathcal{L} \tilde{g}[k] = 2\nu \left ( y[k] + \tilde{\varepsilon}[kT] \right )\), where \(\mathcal{L} \tilde{g}[k] \triangleq \int_{kT - \nu}^{kT + \nu} \tilde{g}(s)\, ds\) denotes the local averages and

(12)\[\tilde{\varepsilon}_\nu (kT) = \sum\limits_{r \in {\mathbb Z}} s_r \left ( kT + \nu - \tilde{\tau}_r \right ) \frac{\lambda_h}{\nu} {\mathbb I}_{[-\nu, \nu]} \left ( kT - \tilde{\tau}_r \right )\]

Average Sampling Recovery Algorithm

Input. \(y[k], N, \lambda, h, \nu, \Omega > 0\), Output. \(\tilde{g}(t)\)

  1. Compute \(\Delta^N y[k]\).

  2. Compute \(R, \{ k_m^r \}_{r=1}^R\) with (10) and (11).

  3. For \(r = 1, ..., R\)
    1. Compute \(\tilde{n}_r = k_m^r + N\).

    2. If \(|\Delta^N y[k_m^r]| > 2\lambda_h - \frac{\lambda_h}{2N}\), compute \(\tilde{\beta}_r = 1\).

    3. If \(|\Delta^N y[k_m^r]| \leq 2 \lambda_h - \frac{\lambda_h}{2 N}\), compute \(\tilde{\beta}_r = \frac{|\Delta^N y[k_m^r]|}{2\lambda_h}\).

    4. Compute \(\tilde{\tau}_r = \tilde{n}_r T - 2 \nu \tilde{\beta}_r + \nu\).

    5. Compute \(s_r = -\mathsf{sign}(\Delta^N y[k_m^r])\).

  4. Compute \(\tilde{\varepsilon}_\nu (kT)\) using (12).

  5. Compute \(\tilde{\mathbf{c}}\) using \(\tilde{\mathbf{c}} = \mathbf{M}^+ \mathbf{g}\).

  6. Compute \(\tilde{g}(t) = \sum\nolimits_{k \in {\mathbb Z}} [\tilde{\mathbf{c}}]_k \mathsf{sinc}_\Omega (t - kT)\).

One of the insights that allows for lower sampling rates is that \(\Delta^N y[k] = \Delta^N g(t_k) - \sum\nolimits_{r=1}^R e_{r,N}[k]\), where

(13)\[e_{r,N}[k] \triangleq b_0 \left ( \tilde{\beta}_r \Delta^{N-1} [k - \tilde{n}_r + 1] + \left ( 1 - \tilde{\beta}_r \right ) \Delta^{N-1} [k - \tilde{n}_r] \right )\]

Low Sampling Rate Recovery Algorithm

Input. \(y[k], N, \lambda, h, \nu, \Omega > 0\), Output. \(\tilde{g}(t)\)

  1. Compute \(y_{1,N}[k] = \Delta^N y[k]\).

  2. For \(r = 1, ..., R\)
    1. Compute \(k_m^r = \mathsf{min} \left \{ k\;|\; |y_{r,N}[k]| \geq \frac{\lambda_h}{2N} \right \}\).

    2. Compute \(\tilde{n}_r = k_m^r + N\).

    3. If \(|y_{r,N}[k_m^r]| > 2 \lambda_h - \frac{\lambda_h}{2N}\), compute \(\tilde{\beta}_r = 1\).

    4. If \(|y_{r,N}[k_m^r]| \leq 2\lambda_h - \frac{\lambda_h}{2N}\), compute \(\tilde{\beta}_r = \frac{\left | y_{r,N}[k_m^r] \right |}{2\lambda_h}\).

    5. Compute \(\tilde{\tau}_r = \tilde{n}_rT - 2 \nu \tilde{\beta}_r + \nu\).

    6. Compute \(s_r = -\mathsf{sign}\left ( \Delta^N y[k_m^r]\right )\).

    7. Compute \(e_{r,N}[k]\) using (13).

    8. If \(r < R\), compute \(y_{r+1, N}[k] = y_{r,N}[k] + e_{r,N}[k]\).

  3. Compute \(\tilde{\varepsilon}_\nu(kT)\) using (12).

  4. Compute \(\tilde{\mathbf{c}}\) using \(\tilde{\mathbf{c}} = \mathbf{M}^+\mathbf{g}\).

  5. Compute \(\tilde{g}(t) = \sum\nolimits_{k \in {\mathbb Z}} [\tilde{\mathbf{c}}]_k \mathsf{sinc}_\Omega (t - kT)\).

Code
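
No reference snippet is published for this section; the Python sketch below illustrates the sequential fold estimation shared by both algorithms above, with the peeling step (13) of the low-rate variant. The constant \(b_0\) is not specified here, so we take \(b_0 = 2\lambda_h \tilde{s}_r\) as an assumption, and the final least-squares fit \(\tilde{\mathbf{c}} = \mathbf{M}^+\mathbf{g}\) is omitted.

```python
import numpy as np

def delta_pow(order, length, shift):
    """Samples of Delta^order delta[k - shift] for k = 0, ..., length-1."""
    imp = np.zeros(length + order)
    if 0 <= shift < imp.size:
        imp[shift] = 1.0
    return np.diff(imp, n=order)[:length]

def estimate_folds(y, N, lam_h, nu, T, R):
    """Sequential fold estimation from local-average modulo samples (sketch)."""
    yrN = np.diff(y, n=N)                 # y_{1,N}[k] = Delta^N y[k]
    thr = lam_h / (2 * N)
    taus, signs = [], []
    for _ in range(R):
        k_m = np.flatnonzero(np.abs(yrN) >= thr)[0]   # detection as in (10)/(11)
        n_r = k_m + N
        s_r = -np.sign(yrN[k_m])
        a = abs(yrN[k_m])
        beta_r = 1.0 if a > 2 * lam_h - thr else a / (2 * lam_h)
        taus.append(n_r * T - 2 * nu * beta_r + nu)
        signs.append(s_r)
        # Peel off this fold's footprint, eq. (13) with b_0 = 2*lam_h*s_r (assumed)
        e = 2 * lam_h * s_r * (beta_r * delta_pow(N - 1, yrN.size, n_r - 1)
                               + (1 - beta_r) * delta_pow(N - 1, yrN.size, n_r))
        yrN = yrN + e                     # y_{r+1,N}[k] = y_{r,N}[k] + e_{r,N}[k]
    return np.array(taus), np.array(signs)
```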

Modulo \(\Sigma\Delta\) Quantizer

Recovery from One-Bit Modulo Samples

Input. \(q[n], \Omega\)

Output. \(\tilde{g}(t)\)

  1. Compute \(\left ( \Delta q \ast \psi_h^N \right )[n]\) with smoothing kernel \(\psi^N(t) := B^N \left ( \frac{N}{2} t \right ) / \mathsf{max} \left ( B^N (t) \right )\), where \(B^N\) is a B-spline of order \(N\).

  2. Compute \(\mathscr{M}_\lambda\left ( ( \Delta q \ast \psi_h^N ) [n] \right ) - \left ( \Delta q \ast \psi_h^N \right )[n]\) and retain non-zero points to get \(\Delta\tilde{\varepsilon}_g[n]\).

  3. Add \(0\) at the beginning of \(\Delta\tilde{\varepsilon}_g[n]\) and apply summation operator \(\tilde{\varepsilon}_g[n] = \sum_{k = 0}^{n} \Delta \tilde{\varepsilon}_g[k]\).

  4. Obtain multi-bit measurements \(\tilde{q}_{\mathsf{MB}}[n] = q[n] + \tilde{\varepsilon}_g[n]\).

  5. Low-pass filter \(\tilde{q}_{\mathsf{MB}}[n]\) to obtain \(\tilde{g}(t)\).

Code

MATLAB Live Demo
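
For reference, the Python sketch below mirrors steps 1 to 5. The B-spline discretization, the numerical zero-threshold, and the caller-supplied low-pass filter are our assumptions.

```python
import numpy as np

def bspline_kernel(N, h):
    """Discrete B-spline of order N (h taps per segment), unit peak (step 1)."""
    box = np.ones(h) / h
    b = box
    for _ in range(N - 1):
        b = np.convolve(b, box)
    return b / b.max()

def onebit_recover(q, lam, N, h, lpf):
    """Sketch of recovery from one-bit modulo Sigma-Delta samples q[n]."""
    dq = np.diff(q)                                          # Delta q
    sm = np.convolve(dq, bspline_kernel(N, h), mode='same')  # step 1: smooth
    d_eps = (np.mod(sm + lam, 2 * lam) - lam) - sm           # step 2: M_lambda(.) - (.)
    d_eps[np.abs(d_eps) < 1e-9] = 0.0                        # retain non-zero points
    eps = np.concatenate(([0.0], np.cumsum(d_eps)))          # step 3: prepend 0, sum
    q_mb = q + eps                                           # step 4: multi-bit samples
    return lpf(q_mb)                                         # step 5: low-pass filter
```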

Fourier-Prony

A Fourier-based recovery algorithm for bandlimited and periodic signals was proposed in [13].

Acquisition

Signal Model. Consider \(g \in \mathcal{B}_\Omega\) such that \(g(t) = g(t + \tau), \forall t \in {\mathbb R}\):

\[g(t) = \sum\nolimits_{\left | p \right | \leq P} \hat{g}[p] e^{\jmath p \omega_0 t},\quad\omega_0 = \frac{2 \pi}{\tau},\quad P = \frac{\Omega}{\omega_0}\]

where \(\hat{g}[p]\) denotes the Fourier series coefficients such that \(\hat{g}[-p] = \hat{g}^*[p]\) with \(\sum\nolimits_{p}{\left | \hat{g}[p] \right |} ^2 < \infty\). The signal \(g(t)\) is then input into the modulo ADC and sampled with period \(T\), giving \(y[n] = \mathscr{M}_{\lambda}(g(nT))\).

Problem Formulation. Given the modulo samples \(y[n]\), the number of folds \(K\) and the bandwidth \(\Omega\), reconstruct \(g[n]\).

Reconstruction

First apply first-order differences to \(y[n]\) to get

\[\underline{y}[n] = \Delta y[n] = y[n] - y[n - 1]\]

which can be rewritten using the modular decomposition property [1] as

\[\underline{y}[n] = \underline{\gamma}[n] - \underline{r}[n] = \underline{\gamma}[n] - \sum\limits_{m \in \mathcal{M}} c[m] \delta (nT - t_m)\]

where \(\gamma[n] = g(nT)\).

For the recovery strategy, we leverage the out-of-band information of the residue signal \(\underline{r}[n]\).

Fourier Domain Partition

The discrete Fourier transform (DFT) of \(\underline{y}[n]\) can be written as
\[\begin{split}\hat{\underline{y}}[p] = \begin{cases} \hat{\underline{\gamma}}[p] - \hat{\underline{r}}[p] & p \in {\mathbb E}_{P,K-1} + (K - 1){\mathbb Z},\\ - \hat{\underline{r}}[p] & p \in ({\mathbb I}_{K - 1} \setminus {\mathbb E}_{P,K - 1}) + (K - 1){\mathbb Z}. \end{cases}\end{split}\]
where
\[\hat{\underline{r}}[p] = \sum\limits_{m \in {\mathcal M}} c[m] \exp \left ({-\jmath \frac{\underline{\omega}_0 p t_m}{T}} \right ) = \sum\limits_{m \in {\mathcal M}} c[m] \exp \left ({-\jmath \frac{\underline{\omega}_0 p n_m}{N - 1}} \right )\]
This can be visually depicted by the following figure.
Fourier Domain Partitioning

The unknown residue parameters \(\{ c_m, n_m \}_{m = 0}^{M-1}\) can be estimated from the out-of-band part of the spectrum using Prony's method. Let \(H(z) = \sum\nolimits_{n=0}^M h[n]z^{-n} = \prod_{m=0}^{M-1} \left ( 1 - u_m z^{-1} \right )\), where \(u_m = \exp(-\jmath \frac{2 \pi n_m}{N - 1})\). Convolving the out-of-band part of the spectrum \(\hat{\underline{y}}[p]\) with the filter coefficients \(\mathbf{h}\) annihilates the sequence for \(l \in [K+P, N - P - 1]\), such that

\[\left ( h \ast \hat{\underline{r}} \right )[l] = \sum\limits_{m = 0}^{M-1} c[m] u_m^{l - P - 1} \sum\limits_{k=0}^{M} h[k] u_m^{-k} = 0\]

After finding the roots \(u_m\), the unknown folding amplitudes can be found using the least-squares method. Given the residue parameters \(\{ c_m, n_m \}_{m=0}^{M-1}\), the residue reconstruction \(\underline{\tilde{r}}[n]\) can be obtained from the definition

\[\underline{\tilde{r}}[n] = \sum\limits_{m=0}^{M-1} c_m \delta[n - n_m].\]

Then append \(0\) to the beginning of the sequence and apply anti-difference operator such that

\[\tilde{r}[n] = \sum\limits_{m=0}^n \underline{\tilde{r}}[m], \quad n \in {\mathbb I}.\]

Reconstruct \(\tilde{g}[n]\) via

\[\tilde{g}[n] = y[n] + \tilde{r}[n].\]

Output. Signal reconstruction \(\tilde{g}[n]\).

Code

MATLAB Live Demo

Online Python Playground FP
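
The sketch below condenses the Fourier-Prony pipeline into Python: differencing, DFT, an annihilating filter on the out-of-band bins, least squares for the amplitudes, and unfolding. Function and variable names are ours, and integer fold locations plus noiseless samples are assumed.

```python
import numpy as np

def fourier_prony_unfold(y, M, P):
    """Illustrative Fourier-Prony unfolding for one period of N samples.

    y : modulo samples of a tau-periodic bandlimited signal
    M : number of folds in the differenced sequence
    P : index of the highest in-band Fourier coefficient
    """
    N = len(y)
    dy = np.diff(y)                            # differenced samples, length N - 1
    Y = np.fft.fft(dy)                         # DFT of the differenced samples
    band = np.arange(P + 1, (N - 1) - P)       # out-of-band bins: residue only
    r_hat = -Y[band]                           # out of band, y-hat[p] = -r-hat[p]
    # Annihilating filter (Prony): h of length M+1 with (h * r_hat)[l] = 0
    A = np.array([r_hat[l:l + M + 1][::-1] for l in range(r_hat.size - M)])
    h = np.linalg.svd(A)[2][-1].conj()         # null vector = filter coefficients
    u = np.roots(h)                            # roots u_m = exp(-j 2*pi*n_m/(N-1))
    n_m = np.mod(np.round(-np.angle(u) * (N - 1) / (2 * np.pi)),
                 N - 1).astype(int)            # fold locations
    # Least squares for the fold amplitudes c_m (ideally in 2*lam*Z)
    V = np.exp(-2j * np.pi * np.outer(band, n_m) / (N - 1))
    c = np.linalg.lstsq(V, r_hat, rcond=None)[0].real
    # Rebuild the residue, apply the anti-difference, and unfold
    dr = np.zeros(N - 1)
    dr[n_m] = c
    r = np.concatenate(([0.0], np.cumsum(dr)))
    return y + r                               # g~[n] = y[n] + r~[n]
```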

Time Encoding via Unlimited Sensing

Description of Time Encoding

Sub-Nyquist Frequency Estimation

Sub-Nyquist frequency estimation via USF and the effect of quantization were studied in [14]. Two multi-channel architectures with modulo acquisition were proposed, enabling the capture of signals exceeding the ADC's range by a factor of \(72\) while operating \(30000\) times below the Nyquist rate.

Spectral Estimation based on Gaussian Integers

MC-USF with complex-valued threshold.

Fig. 23 USF acquisition architecture with Gaussian integer thresholds that allows sub-Nyquist frequency estimation.

Signal Model. Let the input signal \(g(t)\) be a finite sum of \(K\) complex exponentials

\[g(t) = \sum_{k=0}^{K - 1} c_k e^{\jmath \omega_k t}, \omega_k = 2 \pi f_k\]

where signal parameters \(\{ f_k, c_k \}_{k=0}^{K-1}\) are unknown. \(g(t)\) is acquired using the multi-channel architecture depicted in Fig. 23 which implements thresholds based on Gaussian Integers.

Measurements. Low dynamic range measurements captured by the designed architecture can be described as

(14)\[\begin{split}\left [ \begin{array}{cc}v_0[n] & v_1[n] \\ v_2[n] & v_3[n]\end{array} \right ] = {\mathscr M} \left ( \left [ \begin{array}{c} g[n] \\ g_{{T_d}}[n] \end{array} \right ] \left [ \begin{array}{c} \cos \theta & \sin \theta \end{array} \right ] \right ).\end{split}\]

Measurements \(\{ v_l[n] \}_{l = 0}^{3}\) are related to \(g[n]\) and \(g_{T_d}[n]\) via systems of congruence equations

\[\begin{split}\begin{cases} g[n] &= \epsilon ( \theta_0[n] + \jmath \theta_1[n] ) (p + \jmath q) + r_0[n] \\ g[n] &= \epsilon ( \psi_0[n] + \jmath \psi_1[n] ) (p - \jmath q) + r_1[n] \end{cases}\end{split}\]

and

\[\begin{split}\begin{cases} g_{T_d}[n] &= \epsilon ( \theta_2[n] + \jmath \theta_3[n] ) (p + \jmath q) + r_2[n] \\ g_{T_d}[n] &= \epsilon ( \psi_2[n] + \jmath \psi_3[n] ) (p - \jmath q) + r_3[n] \end{cases}\end{split}\]

where \(\{ \theta_l[n], \psi_l[n] \}_{l=0}^{3} \in \mathbb{Z}^2\), \(r_0[n] = r_1^*[n] = (v_0[n] + \jmath v_1[n]) e^{-\jmath \theta}\) and \(r_2[n] = r_3^*[n] = (v_2[n] + \jmath v_3[n]) e^{-\jmath \theta}\).

Problem Statement

Given the measurements \(\{ v_l[n] \}_{l=0}^{3}\) captured by the architecture in Fig. 23 with threshold \(2 \rho e^{-\jmath\theta} = \epsilon (p + \jmath q)\), the signal \(g(t)\) can be exactly recovered from \(N_{l} \geqslant \left ( 2 - \lfloor \frac{l}{2} \rfloor \right ) K\), \(l\in \mathbb{I}_{4}\), samples if \(\mathsf{GCD}(p, q) = 1\), \(T_d \leqslant \frac{\pi}{\mathsf{max}_k|\omega_k|}\) and \(\left\| g \right\|_{\infty} < \epsilon \frac{p^2 + q^2}{2}\).

Recovery Algorithm

Input. \(\{ v_l[n] \}_{l=0}^{3}, p, q, T_d\)

Output. Signal parameters \(\{ f_k, c_k \}_{k=0}^{K - 1}\)

  1. Unfold \(g[n]\) and \(g_{T_d}[n]\) by solving the congruence equations \(\epsilon \begin{bmatrix} p & -q & 0 & 0\\ q & p & 0 & 0\\0 & 0 & p & q \\ 0 & 0 & -q & p \\ \end{bmatrix} \begin{bmatrix} \theta_0\\ \theta_1\\ \psi_0\\ \psi_1 \end{bmatrix} + \begin{bmatrix}\mathsf{Re}(r_0[n])\\\mathsf{Im}(r_0[n])\\\mathsf{Re}(r_1[n])\\\mathsf{Im}(r_1[n])\end{bmatrix} = \begin{bmatrix}\mathsf{Re}(g[n])\\\mathsf{Im}(g[n])\\\mathsf{Re}(g[n])\\\mathsf{Im}(g[n])\end{bmatrix}\) and \(\epsilon \begin{bmatrix} p & -q & 0 & 0\\ q & p & 0 & 0\\0 & 0 & p & q \\ 0 & 0 & -q & p \\ \end{bmatrix} \begin{bmatrix} \theta_2\\ \theta_3\\ \psi_2\\ \psi_3 \end{bmatrix} + \begin{bmatrix}\mathsf{Re}(r_2[n])\\\mathsf{Im}(r_2[n])\\\mathsf{Re}(r_3[n])\\\mathsf{Im}(r_3[n])\end{bmatrix} = \begin{bmatrix}\mathsf{Re}(g_{T_d}[n])\\\mathsf{Im}(g_{T_d}[n])\\\mathsf{Re}(g_{T_d}[n])\\\mathsf{Im}(g_{T_d}[n])\end{bmatrix}\).

  2. Use Prony’s method to estimate folded frequencies \(f_k^\prime\).

  3. Use least squares to obtain estimates of \(c_k\).

  4. Use least squares to find the phase term \(e^{\jmath 2 \pi f_k T_d}\).

Sub-Nyquist USF Spectral Estimation via RCRT

MC-USF with real-valued threshold.

Fig. 24 USF acquisition architecture with real-valued thresholds (\(\beta_0 = \epsilon c_0 / 2\) and \(\beta_1 = \epsilon c_1 / 2\)) that allows sub-Nyquist frequency estimation.

Measurements. Low dynamic range measurements captured by the designed architecture can be described as

\[\begin{split}\begin{cases} u_0[n] &= \mathscr{M}_{\beta_0}(g[n]) \quad u_1[n] = \mathscr{M}_{\beta_1}(g[n]) \\ u_2[n] &= \mathscr{M}_{\beta_0}(g_{T_d}[n]) \quad u_3[n] = \mathscr{M}_{\beta_1}(g_{T_d}[n]) \\ \end{cases}\end{split}\]

Given the measurements \(\{ u_l[n] \}_{l=0}^{3}\) captured by the architecture in Fig. 24 with thresholds \(\beta_0\) and \(\beta_1\), the signal \(g(t)\) can be exactly recovered from \(N_{i} \geqslant \left ( 2 - \lfloor \frac{i}{2} \rfloor \right ) K\), \(i\in \mathbb{I}_{4}\), samples provided that \(T_d \leqslant \frac{\pi}{\mathsf{max}_k |\omega_k|}\) and \(\left\| g \right\|_{\infty} < \epsilon \frac{c_0 c_1}{2}\).

Recovery Algorithm

Input. \(\{ u_l[n] \}_{l=0}^{3}, \beta_0, \beta_1, T_d\)

Output. Signal parameters \(\{ f_k, c_k \}_{k=0}^{K - 1}\)

  1. Unfold \(g[n]\) and \(g_{T_d}[n]\) by solving congruence equations \(g[n] = 2 \beta_0 a_0[n] + u_0[n], g[n] = 2 \beta_1 a_1[n] + u_1[n]\) and \(g_{T_d}[n] = 2 \beta_0 a_2[n] + u_2[n], g_{T_d}[n] = 2 \beta_1 a_3[n] + u_3[n]\)

  2. Use Prony’s method to estimate folded frequencies \(f_k^\prime\).

  3. Use least squares to obtain estimates of \(c_k\).

  4. Use least squares to find the phase term \(e^{\jmath 2 \pi f_k T_d}\).

Example Code

MATLAB Live Demo
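
Alongside the MATLAB demo, the following Python sketch illustrates the unfolding step for the real-valued two-threshold channels. The brute-force search over fold counts is our simplification; the reference method solves the congruences via the robust Chinese remainder theorem (RCRT).

```python
import numpy as np

def unfold_two_moduli(u0, u1, beta0, beta1, gmax):
    """Recover g[n] from two centered-modulo channels (illustrative sketch).

    u0, u1 : samples folded with thresholds beta0, beta1
    gmax   : known amplitude bound with |g[n]| < gmax
    """
    g = np.empty(len(u0))
    amax = int(np.ceil((gmax + beta0) / (2 * beta0)))
    cands = np.arange(-amax, amax + 1)             # candidate fold counts a_0[n]
    for n in range(len(u0)):
        trial = u0[n] + 2 * beta0 * cands          # all g consistent with channel 0
        # Keep the candidate that is also consistent with the beta1 channel
        resid = np.mod(trial + beta1, 2 * beta1) - beta1 - u1[n]
        g[n] = trial[np.argmin(np.abs(resid))]
    return g
```

Applying the same routine to \(u_2[n], u_3[n]\) gives \(g_{T_d}[n]\); Prony's method and least squares then recover \(\{ f_k, c_k \}\) as in steps 2 to 4 above.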

ITER-SiS

Iter-SiS (Iterative Signal Sieving) is a robust recovery method that operates in the canonical domain [5]. Its core idea is to separate the distinct signal components within the folded samples using an alternating minimization strategy.

Problem Statement

Let \(g \in \mathcal{B}_\Omega\) be a bandlimited signal. In practice, the noisy, folded samples are given by

\[y_{w}\left[n\right] = y[n] + w\left[n\right],\quad y[n] = \mathscr{M}_{\lambda}\left(g\left(nT\right)\right), \quad n = 0,\cdots,N-1\]

where \(T\) denotes the sampling period. \(w\) is the measurement noise characterized by \(\frac{1}{N}\sum\nolimits_{n=0}^{N-1} \left| w\left[n\right] \right|^{2} = \sigma^{2}\). Here, we specify the noise power rather than assuming a specific stochastic noise distribution. Our goal is to reconstruct \(\{g[n]\}_{n=0}^{N-1}\) from \(\{y_{w}[n]\}_{n=0}^{N-1}\).

Recovery Method

Let \(\mathbb{I}_{N}=\{0,\cdots, N-1\}, N\in \mathbb{Z}^{+}\). From the modular decomposition property, we know that

(15)\[y[n] = g[n] - \varepsilon_{g}[n] \equiv g[n] - \sum\nolimits_{m=0}^{M-1} c_m u[n - n_{m}]\]

where \(u[\cdot]\) denotes the discrete unit step function. \(c_m\in 2\lambda\mathbb{Z}\) and \(n_{m} \in \mathbb{I}_{N}\) represent the folding amplitude and the folding instant, respectively. Denote by \(\underline{y}[n]=y[n+1]-y[n]\) the finite difference of \(y[n]\). Then, taking the finite difference of (15) leads to

(16)\[\underbrace{\underline{y}[n]}_{\mathsf{Modulo \; Samples}} = \underbrace{\underline{g}[n]\in\mathcal{B}_{\Omega}}_{\mathsf{Bandlimited \; Sieve}} - \quad\underbrace{\underline{\varepsilon}_{g}[n]\in2\lambda \mathbb{Z}}_{\mathsf{Sparse \; Sieve}},\]

where \(\underline{\varepsilon}_{g}[n] = \sum_{m=0}^{M-1} c_m \delta[n - n_{m}]\) and \(\delta[\cdot]\) is the discrete delta function.

Hence, the differenced measurements \(\underline{y}[n]\) can be decomposed into two constituent components: (1) the bandlimited signal \(\underline{g}[n]\) and (2) the sparse signal \(\underline{\varepsilon}_{g}[n]\). The distinction in signal structures facilitates the separation of the residue \(\underline{\varepsilon}_{g}[n]\), leading to the concept of a signal sieving strategy.

Energy Concentration. Within the finite observation window, \(\underline{g}[n]\) is characterized by enforcing energy concentration in the low-pass region, which is implemented as

(17)\[\underline{g}[n] \approxeq \underline{g}_{\Omega}[n] = (\underline{g}*\varphi_{\Omega}) [n] \quad \quad \mathsf{\small (Bandlimited \;\; Projection)}.\]

Sparse Characterization on the Continuum. To achieve accurate spike estimation, we utilize a continuous-time, parametric representation of the residue,

(18)\[\underline{\varepsilon}_{g}[n] = \sum\nolimits_{m=0}^{M-1} \frac{\breve{c}_{m}}{1 - z_m^{-1} \xi_{N-1}^{n}} \equiv \sum\nolimits_{m=0}^{M-1} \frac{\mathsf{P}(\xi_{N-1}^{n})}{\mathsf{Q}(\xi_{N-1}^{n})}\]

where

- \(\breve{c}_{m} = \left(1 - e^{-\jmath \frac{2\pi n_{m}}{N - 1}}\right) c_m\) and \(z_m = e^{\jmath \frac{2\pi n_{m}}{N - 1}}\).

- \(\mathsf{P}(\xi_{N-1}^{n})\) and \(\mathsf{Q}(\xi_{N-1}^{n})\) are trigonometric polynomials of degree \(M-1\) and \(M\), respectively, with \(\xi_{N-1}^{n} = e^{\jmath \frac{2\pi n}{N - 1}}\).

Hence, in the presence of noise, the signal recovery problem can be formulated as,

(19)\[\begin{split}{\min}\limits_{\{\mathsf{P},\; \mathsf{Q},\; \underline{g}_{\Omega} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \underline{g}_{\Omega}[n] - \underline{y}_{w}\left[n\right] - \frac{\mathsf{P}(\xi_{N-1}^{n})}{\mathsf{Q}(\xi_{N-1}^{n})} }\right|}^2}},\quad \mbox{s.t.} \quad \begin{array}{lll} \underline{g}_{\Omega} = \underline{g}_{\Omega} * \varphi_{\Omega}\\ c_m\in 2\lambda\mathbb{Z} \end{array}.\end{split}\]

Alternating Minimization Strategy

We decouple (19) into two tractable sub-problems:

Sub-Problem P1. Assuming \(\underline{g}_{\Omega}\) is known, (19) reduces to

(20)\[{\min}\limits_{\{\mathsf{P},\; \mathsf{Q} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \widetilde{\underline{\varepsilon}}_{g}[n] - \frac{\mathsf{P}(\xi_{N-1}^{n})}{\mathsf{Q}(\xi_{N-1}^{n})} }\right|}^2}}, \quad \mbox{where}\quad \widetilde{\underline{\varepsilon}}_{g}[n] = 2\lambda \left\lfloor ({\underline{g}_{\Omega}[n] - \underline{y}_{w}\left[n\right] + \lambda})/({2\lambda}) \right\rfloor.\]

where \(\left\lfloor \cdot \right\rfloor\) is the floor function. To solve (20), we adopt an iterative strategy: we construct a collection of estimates for \(\mathsf{Q}\) and select the one that minimizes the mean-squared error (MSE) in (20). These estimates are found iteratively by solving the minimization below (assuming \(\mathsf{Q}^{[j]} \approx \mathsf{Q}\))

(21)\[\{\mathsf{P}^{[j+1]}, \mathsf{Q}^{[j+1]}\} = {\min}\limits_{\{\mathsf{P},\; \mathsf{Q} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \frac{\widetilde{\underline{\varepsilon}}_{g}[n] \mathsf{Q}(\xi_{N-1}^{n}) - \mathsf{P}(\xi_{N-1}^{n})}{ \mathsf{Q}^{[j]} (\xi_{N-1}^{n})} }\right|}^2}}, \quad j\in \mathbb{I}_{j_{\max}}.\]

Once \(\{\mathsf{P}, \mathsf{Q}\}\) is known, the residue \({\underline{\varepsilon}}_{g}[n]\) can be recovered via (18).

Sub-Problem P2. With \({\underline{\varepsilon}}_{g}[n]\) known, (19) boils down to

\[{\min}\limits_{\{\underline{g}_{\Omega} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \widetilde{\underline{g}}_{\Omega}[n] - \underline{g}_{\Omega}[n] }\right|}^2}}, \quad \mbox{where} \quad \widetilde{\underline{g}}_{\Omega}[n] = \underline{y}_{w}\left[n\right] + {\underline{\varepsilon}}_{g}[n], \quad \underline{g}_{\Omega} = \underline{g}_{\Omega} * \varphi_{\Omega}\]

which is a convex problem with closed-form solution given by

(22)\[\underline{g}_{\Omega} = \widetilde{\underline{g}}_{\Omega}[n] * \varphi_{\Omega}.\]

Stopping Criterion. We update the estimates \(\{{\underline{\varepsilon}}_{g}^{[i]}[n], \underline{g}_{\Omega}^{[i]}\}, i\in\mathbb{I}_{i_{\max}}\) through iterating between (21) and (22), until the following criterion is satisfied

(23)\[\frac{1}{N}\sum\nolimits_{n = 0}^{N - 1} {{{\left| { {{g}}_{\Omega}^{[i]}[n] - \varepsilon_{g}^{[i]}[n] - y_{w}[n] }\right|}^2}} \leqslant \sigma^{2}, \;\; \mbox{where} \;\; \{ {{g}}_{\Omega}^{[i]}, \; \varepsilon_{g}^{[i]} \} = \Delta^{-1} \{ \underline{g}_{\Omega}^{[i]},\; \underline{\varepsilon}_{g}^{[i]} \}.\]

This ensures that the signal estimates match the original folded measurements within the given tolerance level \(\sigma\). The figure below illustrates the output of the signal sieving process.

_images/Algorithm_v1.png

Fig. 25 Flowchart of the iterative signal sieving.

The procedure is summarized below:

Iterative Signal Sieving Algorithm

Input. \(\{y_{w}[n]\}_{n=0}^{N-1}, \lambda\) and \(\Omega\). Initialization. \(\varepsilon_{g}^{[0]} \leftarrow y_{w}\).

  1. Compute \(\{ \underline{y}_{w},\underline{\varepsilon}_{g}\}\).

  2. For \(i=1,\cdots,i_{\max}\)

  3. Update \({\underline{\varepsilon}}_{g}^{[i]}[n]\) via (21).

  4. Update \(\underline{g}_{\Omega}^{[i]}\) via (22).

  5. If (23) is satisfied, terminate all loops; else, return to Step 3.

Output. Recovered signal \(\{g[n]\}_{n=0}^{N-1}\).

Example Code

MATLAB Live Demo

Download the MATLAB Code for IterSiS
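
As a lightweight companion to the MATLAB code, the Python loop below sketches the sieving iteration under a deliberate simplification: the sparse sieve keeps only the rounding step of (20) (the rational-polynomial refinement (21) is omitted), and the bandlimited projection (17)/(22) is implemented as an FFT mask.

```python
import numpy as np

def lowpass(x, keep):
    """Bandlimited projection: zero all DFT bins above index `keep` (sketch)."""
    X = np.fft.rfft(x)
    X[keep + 1:] = 0
    return np.fft.irfft(X, n=len(x))

def iter_sis_simplified(yw, lam, keep, sigma2, i_max=50):
    """Simplified signal sieving: alternate residue rounding and projection."""
    dy = np.diff(yw)                              # differenced noisy samples
    g = np.zeros_like(dy)                         # current bandlimited sieve
    eps_cum = np.zeros_like(yw)
    for _ in range(i_max):
        # Sparse sieve: residue estimate in 2*lam*Z, rounding step of (20)
        eps = 2 * lam * np.floor((g - dy + lam) / (2 * lam))
        # Bandlimited sieve: project the corrected differences, cf. (22)
        g = lowpass(dy + eps, keep)
        # Anti-difference both sieves and test the stopping criterion (23)
        eps_cum = np.concatenate(([0.0], np.cumsum(eps)))
        g_cum = np.concatenate(([yw[0]], yw[0] + np.cumsum(g)))
        if np.mean((g_cum - eps_cum - yw) ** 2) <= sigma2:
            break
    return yw + eps_cum                           # g[n] = y_w[n] + eps_g[n]
```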