.. _algorithms:

Algorithms
==========

.. image:: hardware/images/USPCB.jpg
   :class: round-image

Modulo samples encode the original input signal, and extracting this information requires an algorithmic decoding process. A variety of recovery algorithms have been developed, tailored to the specific sampling architecture and to the desired output, whether full signal reconstruction (unfolding) or extraction of certain parameters. Most of the existing algorithms leverage the modulo decomposition property.

.. admonition:: Modular Decomposition Property

   Let :math:`g \in \mathcal{B}_\Omega` and let :math:`\lambda` be a fixed, positive constant. Then the bandlimited function :math:`g(t)` admits the decomposition

   .. math:: g(t) = \mathscr{M}_\lambda (g(t)) + \varepsilon_g(t),

   where :math:`\varepsilon_g` is a simple function, referred to as the **residual**.

Notice that if the modulo part of a signal is known, then knowing its residual :math:`\varepsilon_g` is equivalent to knowing :math:`g`. Moreover, the piecewise-constant structure of the residual can be used as a prior for its recovery.

.. figure:: _static/img/ModDecomp.png
   :align: center
   :alt: Modulo Decomposition Property

   The Modulo Decomposition Property
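The decomposition can be verified numerically. Below is a minimal Python sketch, assuming the centered modulo map :math:`\mathscr{M}_\lambda(g) = ((g + \lambda) \bmod 2\lambda) - \lambda` and an illustrative two-tone input; it checks that the residual only takes values in :math:`2\lambda\mathbb{Z}`.

.. code-block:: python

   import numpy as np

   def centered_mod(g, lam):
       """Centered modulo map M_lambda, folding amplitudes into [-lam, lam)."""
       return np.mod(g + lam, 2 * lam) - lam

   # Illustrative two-tone test signal and threshold.
   t = np.linspace(0, 1, 1000)
   g = 3.0 * np.sin(2 * np.pi * 5 * t) + 1.5 * np.cos(2 * np.pi * 2 * t)
   lam = 1.0

   y = centered_mod(g, lam)   # modulo (folded) samples
   residual = g - y           # epsilon_g, piecewise constant by construction

   # The residual only takes values on the 2*lambda grid.
   assert np.allclose(residual, 2 * lam * np.round(residual / (2 * lam)))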
.. _us-alg:

US-ALG
__________________________________

This was the first algorithm proposed for decoding modulo samples produced by the ideal :math:`\mathscr{M}-\mathsf{ADC}` architecture (so that :math:`\varepsilon_g(t) \in 2\lambda{\mathbb{Z}}, \forall t`). It was presented in :cite:p:`Bhandari:2017:C`, :cite:p:`Bhandari:2021:J` and is guaranteed to recover a bandlimited signal from its modulo samples if the sampling rate satisfies the Unlimited Sensing Theorem. Notably, the sampling rate does not depend on the modulo threshold.

The algorithmic procedure leverages the properties of the two signals that decompose the modulo signal:

#. The smoothness of the bandlimited signal is used to separate it from the residual in the higher-order-differences space.

#. The restriction of the residual amplitudes to the grid :math:`2\lambda{\mathbb Z}` is used to find the constants of integration and to form a stable inversion process.

The key algorithmic steps are described below. For the full details, see :cite:p:`Bhandari:2017:C`, :cite:p:`Bhandari:2021:J`.

.. admonition:: US-Alg

   #. Compute :math:`\Delta^{N}y`, the :math:`N^{\mathrm{th}}`-order finite differences of the modulo sequence :math:`y[k]`.

   #. | Apply :math:`\mathscr{M}_{\lambda}(\Delta^{N}y)`.
      | Since :math:`\mathscr{M}_{\lambda}(\Delta^{N}\varepsilon_g)=0`, we get :math:`\mathscr{M}_{\lambda}(\Delta^{N}y) = \mathscr{M}_{\lambda}(\Delta^{N}g)`. For sampling rates that satisfy the Unlimited Sensing Theorem, that is, :math:`T \leq 1 / (\Omega e)`, the maximal amplitude of the sequence :math:`g[k]` shrinks with the finite-difference order such that :math:`\|\Delta^{N}g\|_{\infty} \leq (T \Omega e)^{N}\|g\|_{\infty}`, where :math:`\|\cdot\|_{\infty}` denotes the max-norm. If

      .. math:: N = \left\lceil \frac{\log \lambda - \log \beta_g}{\log (T \Omega e)} \right\rceil , \quad \beta_g \in 2\lambda \mathbb{Z} \text{ and } \beta_g \geq \| g \|_\infty,

      then :math:`\| \Delta^{N}g \|_{\infty} \leq \lambda`. This implies :math:`\mathscr{M}_{\lambda}(\Delta^{N}y) = \Delta^{N}g`.

   #. Compute :math:`\Delta^{N}r[k] = \mathscr{M}_{\lambda}(\Delta^{N}y) - \Delta^{N}y`, the :math:`N^{\mathrm{th}}`-order finite differences of the residual sequence :math:`r[k] = \varepsilon_g(kT)`.

   #. | Set :math:`s_{(0)}[k] = \left( \Delta^{N}r \right)[k]`.
      | For :math:`n = 0 : N - 2`:

      #. :math:`s_{(n+1)}[k] = \mathsf{S}(s_{(n)})[k]`, where :math:`\mathsf{S} : \{s[k]\}_{k \in \mathbb{Z}^{+}} \rightarrow \sum_{m=1}^k s[m]` denotes the summation operator.
      #. :math:`s_{(n+1)}[k] = 2\lambda \left\lfloor \frac{s_{(n+1)}[k] / \lambda}{2} \right\rfloor` (rounding to :math:`2\lambda \mathbb{Z}`).
      #. Compute :math:`\kappa_{(n)} = \left\lfloor \frac{\mathsf{S}^2(\Delta^{n}r)[1] - \mathsf{S}^2(\Delta^{n}r)\left[ \frac{6\beta_{g}}{\lambda} + 1 \right]}{12\beta_{g}} + \frac{1}{2} \right\rfloor`.
      #. | :math:`s_{(n+1)}[k] = s_{(n+1)}[k] + 2\lambda\kappa_{(n)}`.
         | End

   #. :math:`\widetilde{\gamma}[k] = \mathsf{S}(s_{(N-1)})[k] + y[k] + 2m\lambda`, :math:`m \in \mathbb{Z}`.

.. figure:: _static/img/US-Alg1.png
   :align: center
   :alt: US-Alg
   :width: 550

   US-Alg

Code
++++

`Online Python Playground US-ALG `_
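As a companion to the playground above, here is a minimal NumPy sketch of the core US-Alg steps (finite differences, modulo projection, rounding to :math:`2\lambda\mathbb{Z}`, and anti-differencing). The :math:`\kappa_{(n)}` correction of step 4 is omitted for brevity, so this sketch assumes a short observation window; all names are illustrative.

.. code-block:: python

   import numpy as np

   def centered_mod(x, lam):
       """Centered modulo map M_lambda, folding amplitudes into [-lam, lam)."""
       return np.mod(x + lam, 2 * lam) - lam

   def us_alg(y, lam, N):
       """Sketch of US-Alg: recover gamma[k] up to a global 2*m*lam offset.

       The kappa_(n) correction is omitted (valid for short windows only).
       """
       dNy = np.diff(y, N)                          # step 1: Delta^N y
       s = centered_mod(dNy, lam) - dNy             # steps 2-3: Delta^N r
       for _ in range(N - 1):                       # step 4: stable anti-differencing
           s = np.cumsum(s)                         # summation operator S
           s = 2 * lam * np.floor(s / (2 * lam))    # rounding to 2*lam*Z
       r = np.concatenate(([0.0], np.cumsum(s)))    # final summation S
       return y[: len(r)] + r                       # step 5: gamma = y + r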
Hardware Validation
+++++++++++++++++++

The input :math:`g` consists of a sinusoidal mixture: a weak component of :math:`0.3` V at :math:`1` kHz and a strong component of :math:`2` V at :math:`8` kHz. The modulo ADC (MADC) based on :cite:p:`Bhandari:2021:J`, :cite:p:`Florescu:2022:Ja`, :cite:p:`Guo:2024:J`, :cite:p:`Bhandari:2022:J` is deployed with a :math:`5.40`-fold dynamic-range extension, a sampling rate of :math:`100` kHz and a quantization resolution of :math:`6.7` bits. In this scenario, the MADC measurements (:math:`500` samples) entail intensive modulo folding (degrees of freedom of :math:`\varepsilon` :math:`=330`). Despite these challenging settings, we use US-ALG for reconstruction with **fourth-order** differences, :math:`N=4`, achieving signal recovery with a mean-squared error :math:`\propto 10^{-3}`.

.. figure:: _static/img/8K_USAlg.png
   :align: center
   :width: 90%

   Hardware Validation: Signal Reconstruction using US-ALG with fourth-order differences.

`MATLAB Live Demo for 8 kHz Experiment <_static/html/ReproduceResult.html>`_

:download:`Download the MATLAB Code for 8 kHz Hardware Experiment Reproduction <_static/code/8K_code_data.zip>`

Generalized Modulo Model
__________________________________

The key idea in reconstructing the original samples from measurements encoded via :math:`\mathscr{M}_\mathsf{H}` is to convolve the modulo samples :math:`y[n]` with a filter :math:`\psi_N[k] \triangleq \Delta^N[k + 1]` and to identify the residual by thresholding.

.. math:: \left ( \psi_N \ast y \right )[n] = \left ( \psi_N \ast \gamma \right ) [n] - \left ( \psi_N \ast \varepsilon_\gamma \right ) [n]

.. admonition:: Theorem - Estimation of Folding Times

   Assume that :math:`\| \psi_N \ast \gamma \|_\infty < \lambda_h / (2N)` and let :math:`k_m, k_M \in \mathbb{Z}` be defined as in :math:numref:`p1_fold`. Furthermore, let :math:`\tilde{n}_1` and :math:`\tilde{s}_1` be the estimates of the discrete folding time and sign, defined as

   .. math:: \tilde{n}_1 = k_M - 1,
      :label: n_tilde_1

   .. math:: \tilde{s}_1 = - \mathsf{sign} \left ( \left ( \psi_N \ast y \right ) [k_m] \right ).
      :label: s_tilde_1

   a) If :math:`k_M - k_m = N`, consider the estimated folding time

   .. math:: \tilde{\tau}_1 \triangleq \tilde{n}_1 T - \alpha \frac{ \left ( \psi_N \ast y \right )[\tilde{n}_1]}{2 \lambda_h \tilde{s}_1 N}.
      :label: fold_est_1

   Then the following hold

   .. math:: \tilde{n}_1 = n_1,\;\tilde{s}_1 = s_1,\;\left | \tilde{\tau}_1 - \tau_1 \right | < \frac{\alpha}{4 N^2}.
      :label: fold_cond_1

The folding parameters :math:`\tilde{\tau}_p` and :math:`\tilde{s}_p` can be computed sequentially with the previous theorem, provided that the supports of any two filtered folds are disjoint. This requires a separation of the folding times :math:`n_{p+1} - n_p \geq N + 1`, which is guaranteed if

.. math:: \frac{h^*}{T\Omega \| g \|_\infty} \geq N + 1.

Denote by :math:`p` the folding index and let :math:`\mathcal{K}_N = \bigcup_{p \in [1, P]} \left\{ k_m^p, k_M^p \right\}`, where the pairs :math:`\left \{ k_m^p, k_M^p \right \}` are computed sequentially as follows. For :math:`p = 1`,

.. math:: k_m^1 &= \mathsf{min} \left \{ k\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2N} \right \}, \\
   k_M^1 &= \mathsf{max} \left \{ k \leq k_m^1 + N\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2N} \right \}.
   :label: p1_fold

For :math:`p = 2, \ldots, P`,

.. math:: k_m^p &= \mathsf{min} \left \{ k > k_M^{p-1}\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2 N} \right \},\\
   k_M^p &= \mathsf{max} \left \{ k \leq k_m^p + N\;|\;\left | \left ( \psi_N \ast y \right ) [k] \right | \geq \frac{\lambda_h}{2N} \right \}.
   :label: p_mult_fold

.. admonition:: Reconstruction via Thresholding

   **Input.** :math:`y[n], \psi_N[n], \lambda, h, \alpha, \Omega > 0`. **Output.** :math:`\tilde{\gamma}[n]`.

   #. Compute :math:`\left ( \psi_N \ast y \right ) [n]`.
   #. Compute :math:`P, \left \{ k_m^p, k_M^p \right \}_{p = 1}^P` with :math:numref:`p1_fold` and :math:numref:`p_mult_fold`.
   #. For each fold :math:`p=1, \ldots, P`

      #. If :math:`k_M^p - k_m^p = N`, compute :math:`\tilde{n}_p, \tilde{s}_p` with :math:numref:`n_tilde_1`, :math:numref:`s_tilde_1` and :math:`\tilde{\tau}_p` with :math:numref:`fold_est_1`.
      #. If :math:`k_M^p - k_m^p = N - 1`, compute :math:`\tilde{n}_p, \tilde{s}_p` with :math:numref:`n_tilde_1`, :math:numref:`s_tilde_1` and :math:`\tilde{\tau}_p = \tilde{n}_p T`.

   #. Compute :math:`\tilde{\varepsilon}_\gamma[n] = \sum_{p=1}^P \tilde{s}_p \varepsilon_0 \left ( nT - \tilde{\tau}_p \right )`, where :math:`\varepsilon_0(t) = 2 \lambda_h \left [\frac{1}{\alpha}t \cdot {\mathbb I}_{[0, \alpha[} (t) + {\mathbb I}_{[\alpha, \infty[} (t) \right ]`.
   #. Compute the reconstruction :math:`\tilde{\gamma}[n] = y[n] + \tilde{\varepsilon}_\gamma[n]`.
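The sequential search for the pairs :math:`\{k_m^p, k_M^p\}` used in step 2 is straightforward to prototype. The Python sketch below uses the plain :math:`N`-th order difference as a stand-in for :math:`\psi_N`, which is an illustrative simplification.

.. code-block:: python

   import numpy as np

   def detect_folds(y, N, lam_h):
       """Sketch: locate folding events by thresholding the filtered samples.

       Returns (psi_N * y) and the pairs (k_m^p, k_M^p) of the sequential search.
       """
       fy = np.diff(y, N)                       # stand-in for (psi_N * y)[n]
       hot = np.flatnonzero(np.abs(fy) >= lam_h / (2 * N))
       pairs, i = [], 0
       while i < len(hot):
           k_m = hot[i]                         # first index above threshold
           in_win = hot[(hot >= k_m) & (hot <= k_m + N)]
           k_M = in_win[-1]                     # last above-threshold index <= k_m + N
           pairs.append((int(k_m), int(k_M)))
           # the next k_m^p is the first above-threshold index with k > k_M^{p-1}
           i = int(np.searchsorted(hot, k_M, side="right"))
       return fy, pairs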
Unlimited Sampling with Local Averages
________________________________________

To estimate the residual :math:`\varepsilon[k]` from :math:`y[k]`, we assume a smoothness property on :math:`g(t_k)`, i.e. :math:`\|\Delta^N g(t_k) \|_\infty < \frac{\lambda_h}{2 N}`. Far away from the jumps, this condition also holds for :math:`y[k]`.

.. admonition:: Estimation of Folding Times

   Assume :math:`\|\Delta^N g(t_k)\|_\infty < \lambda_h / (2N)` and let :math:`k_m = \mathsf{min}\, {\mathbb M}_N`. Furthermore, let :math:`\tilde{\tau}_1 = \tilde{n}_1 T - 2 \nu \tilde{\beta}_1 + \nu` and :math:`\tilde{s}_1 = -\mathsf{sign}(\Delta^N y[k_m])` be the estimates of the folding time and sign, with :math:`\tilde{n}_1 = k_m + N` and :math:`\tilde{\beta}_1` given by

   .. math:: \begin{cases} |\Delta^N y[k_m]| \leq 2 \lambda_h - \frac{\lambda_h}{2N}\quad&\text{then}\;\tilde{\beta}_1 \rightarrow \frac{\Delta^N y[k_m]}{2 \lambda_h \tilde{s}_1},\\ |\Delta^N y[k_m]| > 2 \lambda_h - \frac{\lambda_h}{2N}\quad&\text{then}\;\tilde{\beta}_1 \rightarrow 1. \end{cases}

   Then :math:`\tilde{s}_1 = s_1`, :math:`\tilde{n}_1 \in \{ n_1 , n_1 + 1 \}`, and the following holds

   .. math:: k_m = \begin{cases} n_1 - N & \tilde{n}_1 = n_1,\; | \tilde{\tau}_1 - \tau_1 | \leq \frac{\nu}{2N} \\ n_1 - N + 1\; & \tilde{n}_1 = n_1 + 1,\; | \tilde{\tau}_1 - \tau_1 | \leq T + \nu \left ( \frac{3}{4N} - \frac{1}{2} \right ) \end{cases}.

.. admonition:: Sufficient Recovery Conditions

   The folding times :math:`\tau_r` and signs :math:`s_r` can be recovered from the average sampling measurements :math:`y[k]` if :math:`\nu \leq T/2` and the following hold

   .. math:: \left ( T \Omega e \right ) ^ N \| g \|_\infty + \nu 2^N \|g\|_\infty < \frac{\lambda_h}{2N},

   .. math:: T\Omega \| g \|_\infty \left ( N + 1 \right ) \leq \mathsf{min} \{ h, 2 \lambda_h \}.

Recovery algorithms for Unlimited Sampling with local averages were considered in :cite:p:`Florescu:2022:C`. To estimate the folding locations and signs :math:`\left \{ \tau_r, s_r \right \}`, we define a set :math:`\{ k_m^r \}_{r=1}^R` via

.. math:: k_m^1 = \mathsf{min} \left \{ k\;|\; |y^{[N]}[k]| \geq \frac{\lambda_h}{2N} \right \},
   :label: k_m_1

.. math:: k_m^r = \mathsf{min} \left \{ k > k_m^{r-1} + N\;|\; |y^{[N]}[k]| \geq \frac{\lambda_h}{2N} \right \}.
   :label: k_m_r

**Input Reconstruction.** The input signal is recovered from :math:`\mathcal{L} \tilde{g}[k] = 2\nu \left ( y[k] + \tilde{\varepsilon}_\nu[kT] \right )`, where :math:`\mathcal{L} \tilde{g}[k] \triangleq \int_{kT - \nu}^{kT + \nu} \tilde{g}(s)\, ds` are the local averages and

.. math:: \tilde{\varepsilon}_\nu (kT) = \sum\limits_{r \in {\mathbb Z}} s_r \left ( kT + \nu - \tilde{\tau}_r \right ) \frac{\lambda_h}{\nu} {\mathbb I}_{[-\nu, \nu]} \left ( kT - \tilde{\tau}_r \right ).
   :label: res_rec_la

.. admonition:: Average Sampling Recovery Algorithm

   **Input.** :math:`y[k], N, \lambda, h, \nu, \Omega > 0`. **Output.** :math:`\tilde{g}(t)`.

   #. Compute :math:`\Delta^N y[k]`.
   #. Compute :math:`R, \{ k_m^r \}_{r=1}^R` with :math:numref:`k_m_1` and :math:numref:`k_m_r`.
   #. For :math:`r = 1, \ldots, R`

      #. Compute :math:`\tilde{n}_r = k_m^r + N`.
      #. If :math:`|\Delta^N y[k_m^r]| > 2\lambda_h - \frac{\lambda_h}{2N}`, set :math:`\tilde{\beta}_r = 1`.
      #. If :math:`|\Delta^N y[k_m^r]| \leq 2 \lambda_h - \frac{\lambda_h}{2 N}`, set :math:`\tilde{\beta}_r = \frac{|\Delta^N y[k_m^r]|}{2\lambda_h}`.
      #. Compute :math:`\tilde{\tau}_r = \tilde{n}_r T - 2 \nu \tilde{\beta}_r + \nu`.
      #. Compute :math:`s_r = -\mathsf{sign}(\Delta^N y[k_m^r])`.

   #. Compute :math:`\tilde{\varepsilon}_\nu (kT)` using :math:numref:`res_rec_la`.
   #. Compute :math:`\tilde{\mathbf{c}}` using :math:`\tilde{\mathbf{c}} = \mathbf{M}^+ \mathbf{g}`.
   #. Compute :math:`\tilde{g}(t) = \sum\nolimits_{k \in {\mathbb Z}} [\tilde{\mathbf{c}}]_k \mathsf{sinc}_\Omega (t - kT)`.

One of the insights that allows for lower sampling rates is that :math:`\Delta^N y[k] = \Delta^N g(t_k) - \sum\nolimits_{r=1}^R e_{r,N}[k]`, where

.. math:: e_{r,N}[k] \triangleq b_0 \left ( \tilde{\beta}_r \Delta^{N-1} [k - \tilde{n}_r + 1] + \left ( 1 - \tilde{\beta}_r \right ) \Delta^{N-1} [k - \tilde{n}_r] \right ).
   :label: e_r_N

.. admonition:: Low Sampling Rate Recovery Algorithm

   **Input.** :math:`y[k], N, \lambda, h, \nu, \Omega > 0`. **Output.** :math:`\tilde{g}(t)`.

   #. Compute :math:`y_{1,N}[k] = \Delta^N y[k]`.
   #. For :math:`r = 1, \ldots, R`

      #. Compute :math:`k_m^r = \mathsf{min} \left \{ k\;|\; |y_{r,N}[k]| \geq \frac{\lambda_h}{2N} \right \}`.
      #. Compute :math:`\tilde{n}_r = k_m^r + N`.
      #. If :math:`|y_{r,N}[k_m^r]| > 2 \lambda_h - \frac{\lambda_h}{2N}`, set :math:`\tilde{\beta}_r = 1`.
      #. If :math:`|y_{r,N}[k_m^r]| \leq 2\lambda_h - \frac{\lambda_h}{2N}`, set :math:`\tilde{\beta}_r = \frac{\left | y_{r,N}[k_m^r] \right |}{2\lambda_h}`.
      #. Compute :math:`\tilde{\tau}_r = \tilde{n}_r T - 2 \nu \tilde{\beta}_r + \nu`.
      #. Compute :math:`s_r = -\mathsf{sign}\left ( \Delta^N y[k_m^r]\right )`.
      #. Compute :math:`e_{r,N}[k]` using :math:numref:`e_r_N`.
      #. If :math:`r < R`, compute :math:`y_{r+1, N}[k] = y_{r,N}[k] + e_{r,N}[k]`.

   #. Compute :math:`\tilde{\varepsilon}_\nu(kT)` using :math:numref:`res_rec_la`.
   #. Compute :math:`\tilde{\mathbf{c}}` using :math:`\tilde{\mathbf{c}} = \mathbf{M}^+\mathbf{g}`.
   #. Compute :math:`\tilde{g}(t) = \sum\nolimits_{k \in {\mathbb Z}} [\tilde{\mathbf{c}}]_k \mathsf{sinc}_\Omega (t - kT)`.

Code
++++
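A minimal Python sketch of the folding-time and sign estimation shared by the two algorithms above is given below; parameter names are illustrative and the final sinc-basis least-squares step (:math:`\tilde{\mathbf{c}} = \mathbf{M}^+ \mathbf{g}`) is omitted.

.. code-block:: python

   import numpy as np

   def estimate_folds_local_avg(y, N, lam_h, T, nu):
       """Sketch: folding-time/sign estimation from local-average modulo samples.

       Scans Delta^N y for above-threshold indices k_m^r, then estimates
       (n_r, beta_r, tau_r, s_r) as in the Average Sampling Recovery Algorithm.
       """
       dNy = np.diff(y, N)
       thr = lam_h / (2 * N)
       folds, k = [], 0
       while k < len(dNy):
           if abs(dNy[k]) < thr:
               k += 1
               continue
           n_r = k + N                              # discrete folding index
           s_r = -np.sign(dNy[k])                   # folding sign
           if abs(dNy[k]) > 2 * lam_h - thr:
               beta_r = 1.0
           else:
               beta_r = abs(dNy[k]) / (2 * lam_h)
           tau_r = n_r * T - 2 * nu * beta_r + nu   # continuous folding time
           folds.append((tau_r, s_r))
           k += N + 1                               # next fold: k > k_m^r + N
       return folds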
Modulo :math:`\Sigma\Delta` Quantizer
_______________________________________

.. admonition:: Recovery from One-Bit Modulo Samples

   **Input.** :math:`q[n], \Omega`. **Output.** :math:`\tilde{g}(t)`.

   #. Compute :math:`\left ( \Delta q \ast \psi_h^N \right )[n]` with the smoothing kernel :math:`\psi^N(t) := B^N \left ( \frac{N}{2} t \right ) / \mathsf{max} \left ( B^N (t) \right )`, where :math:`B^N` is a B-spline of order :math:`N`.
   #. Compute :math:`\mathscr{M}_\lambda\left ( ( \Delta q \ast \psi_h^N ) [n] \right ) - \left ( \Delta q \ast \psi_h^N \right )[n]` and retain the non-zero points to get :math:`\Delta\tilde{\varepsilon}_g[n]`.
   #. Add a :math:`0` at the beginning of :math:`\Delta\tilde{\varepsilon}_g[n]` and apply the summation operator, :math:`\tilde{\varepsilon}_g[n] = \sum_{k = 0}^{n} \Delta \tilde{\varepsilon}_g[k]`.
   #. Obtain the multi-bit measurements :math:`\tilde{q}_{\mathsf{MB}}[n] = q[n] + \tilde{\varepsilon}_g[n]`.
   #. Low-pass filter :math:`\tilde{q}_{\mathsf{MB}}[n]` to obtain :math:`\tilde{g}(t)`.

Code
++++

:download:`MATLAB Live Demo <_static/code/MSDQ.m>`
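The first four steps above can be sketched in Python as follows; the B-spline smoother :math:`\psi_h^N` is replaced here by a generic unit-gain low-pass FIR kernel, which is an assumption made for brevity.

.. code-block:: python

   import numpy as np

   def centered_mod(x, lam):
       """Centered modulo map M_lambda."""
       return np.mod(x + lam, 2 * lam) - lam

   def one_bit_to_multi_bit(q, lam, kernel):
       """Sketch of 'Recovery from One-Bit Modulo Samples', steps 1-4.

       `kernel` stands in for the B-spline smoother psi_h^N; any unit-gain
       low-pass FIR works for illustration.
       """
       d = np.convolve(np.diff(q), kernel, mode="same")  # step 1: (Delta q * psi)
       d_eps = centered_mod(d, lam) - d                  # step 2: residual differences
       eps = np.cumsum(np.concatenate(([0.0], d_eps)))   # step 3: prepend 0 and sum
       q_mb = q + eps                                    # step 4: multi-bit samples
       return q_mb                                       # step 5 (low-pass) omitted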
.. _us_fp:

Fourier-Prony
__________________________________

A Fourier-based recovery algorithm for bandlimited, periodic signals was proposed in :cite:p:`Bhandari:2022:Ja`.

Acquisition
+++++++++++

**Signal Model.** Consider :math:`g \in \mathcal{B}_\Omega` such that :math:`g(t) = g(t + \tau), \forall t \in {\mathbb R}`:

.. math:: g(t) = \sum\nolimits_{\left | p \right | \leq P} \hat{g}[p] e^{\jmath p \omega_0 t},\quad\omega_0 = \frac{2 \pi}{\tau},\quad P = \frac{\Omega}{\omega_0}

where :math:`\hat{g}[p]` denotes the Fourier series coefficients such that :math:`\hat{g}[-p] = \hat{g}^*[p]` with :math:`\sum\nolimits_{p}{\left | \hat{g}[p] \right |} ^2 < \infty`. The signal :math:`g(t)` is then input to the modulo ADC and sampled with period :math:`T`, giving :math:`y[n] = \mathscr{M}_{\lambda}(g(nT))`.

**Problem Formulation.** Given the modulo samples :math:`y[n]`, the number of folds :math:`K` and the bandwidth :math:`\Omega`, reconstruct :math:`g[n]`.

Reconstruction
++++++++++++++

First, apply first-order differences to :math:`y[n]` to get

.. math:: \underline{y}[n] = \Delta y[n] = y[n] - y[n - 1]

which can be rewritten using the *modular decomposition property* :cite:`Bhandari:2021:J` as

.. math:: \underline{y}[n] = \underline{\gamma}[n] - \underline{r}[n] = \underline{\gamma}[n] - \sum\limits_{m \in \mathcal{M}} c[m] \delta (nT - t_m)

where :math:`\gamma[n] = g(nT)`. The recovery strategy leverages the out-of-band information of the residue signal :math:`\underline{r}[n]`.

.. admonition:: Fourier Domain Partition

   The Discrete Fourier Transform of :math:`\underline{y}[n]` can be written as

   .. math:: \hat{\underline{y}}[p] = \begin{cases} \hat{\underline{\gamma}}[p] - \hat{\underline{r}}[p] & p \in ({\mathbb E}_{P,K-1}) + (K - 1){\mathbb Z},\\ - \hat{\underline{r}}[p] & p \in ({\mathbb I}_{K - 1} \setminus {\mathbb E}_{P,K - 1}) + (K - 1){\mathbb Z}, \end{cases}

   where

   .. math:: \hat{\underline{r}}[p] = \sum\limits_{m \in {\mathcal M}} c[m] \exp \left ({-\jmath \frac{\underline{\omega}_0 p t_m}{T}} \right ) = \sum\limits_{m \in {\mathcal M}} c[m] \exp \left ({-\jmath \frac{2 \pi p n_m}{N - 1}} \right ).

This partitioning can be visually depicted by the following figure.

.. image:: _static/img/fp-partitioning.png
   :alt: Fourier Domain Partitioning

The unknown residue parameters :math:`\{ c_m, n_m \}_{m = 0}^{M-1}` can be estimated using *Prony's method* from the out-of-band part of the spectrum. Let :math:`H(z) = \sum\nolimits_{n=0}^M h[n]z^{-n} = \prod_{m=0}^{M-1} \left ( 1 - u_m z^{-1} \right )` where :math:`u_m = \exp(-\jmath \frac{2 \pi n_m}{N - 1})`. Convolving the out-of-band part of the spectrum :math:`\hat{\underline{y}}[p]` with the filter coefficients :math:`\mathbf{h}` annihilates the sequence for :math:`l \in [K+P, N - P - 1]` such that

.. math:: \left ( h \ast \hat{\underline{r}} \right )[l] = \sum\limits_{m = 0}^{M-1} c[m] u_m^{l - P - 1} \sum\limits_{k=0}^{M} h[k] u_m^{-k} = 0.

After finding the roots :math:`u_m`, the unknown folding amplitudes can be found using the least-squares method. Given the residue parameters :math:`\{ c_m, n_m \}_{m=0}^{M-1}`, the residue reconstruction :math:`\underline{\tilde{r}}[n]` can be obtained from the definition

.. math:: \underline{\tilde{r}}[n] = \sum\limits_{m=0}^{M-1} c_m \delta[n - n_m].

Then append a :math:`0` to the beginning of the sequence and apply the anti-difference operator such that

.. math:: \tilde{r}[n] = \sum\limits_{m=0}^n \underline{\tilde{r}}[m], \quad n \in {\mathbb I}.

Reconstruct :math:`\tilde{g}[n]` via

.. math:: \tilde{g}[n] = y[n] + \tilde{r}[n].

**Output.** Signal reconstruction :math:`\tilde{g}[n]`.

Code
++++

:download:`MATLAB Live Demo <_static/code/demo_FourierProny.mlx>`

`Online Python Playground FP `_
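The annihilating-filter step can be prototyped in a few lines. The sketch below assumes noiseless data, a known model order :math:`M`, and that ``r_hat`` holds consecutive out-of-band DFT samples of the residue; it is illustrative rather than a robust implementation.

.. code-block:: python

   import numpy as np

   def prony_spike_locations(r_hat, M, N):
       """Estimate spike locations n_m from consecutive out-of-band DFT
       samples r_hat[p] via an annihilating filter (noiseless sketch)."""
       r_hat = np.asarray(r_hat, dtype=complex)
       L = len(r_hat)
       # Rows of the Toeplitz system: [r_hat[l], r_hat[l-1], ..., r_hat[l-M]].
       A = np.array([r_hat[l - np.arange(M + 1)] for l in range(M, L)])
       # Solve sum_k h[k] r_hat[l-k] = 0 with the normalization h[0] = 1.
       h_tail = np.linalg.lstsq(A[:, 1:], -A[:, 0], rcond=None)[0]
       h = np.concatenate(([1.0 + 0j], h_tail))
       u = np.roots(h)                   # roots u_m = exp(-j*2*pi*n_m/(N-1))
       n_m = np.mod(np.round(-np.angle(u) * (N - 1) / (2 * np.pi)), N - 1)
       return np.sort(n_m.astype(int))

Once the locations :math:`n_m` are known, the amplitudes :math:`c_m` follow from a linear least-squares fit, as described above.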
Time Encoding via Unlimited Sensing
___________________________________

Description of Time Encoding

Sub-Nyquist Frequency Estimation
________________________________

Sub-Nyquist frequency estimation via USF and the effect of quantization was studied in :cite:p:`Pavlicek:2024:C`. Two multi-channel architectures with modulo acquisition were proposed, enabling the capture of signals exceeding the ADC's range :math:`72`-fold while achieving a :math:`30000`-fold reduction of the Nyquist rate.

Spectral Estimation based on Gaussian Integers
_______________________________________________

.. figure:: _static/img/complex_val_threshold_final.png
   :align: center
   :alt: MC-USF with complex-valued threshold.
   :name: mc-usf-cplx-val
   :width: 550

   USF acquisition architecture with Gaussian-integer thresholds that allows sub-Nyquist frequency estimation.

**Signal Model.** Let the input signal :math:`g(t)` be a finite sum of :math:`K` complex exponentials,

.. math:: g(t) = \sum_{k=0}^{K - 1} c_k e^{\jmath \omega_k t}, \quad \omega_k = 2 \pi f_k

where the signal parameters :math:`\{ f_k, c_k \}_{k=0}^{K-1}` are unknown. :math:`g(t)` is acquired using the multi-channel architecture depicted in :numref:`Fig. %s <mc-usf-cplx-val>`, which implements thresholds based on Gaussian integers.

**Measurements.** The low dynamic range measurements captured by the designed architecture can be described as

.. math:: \left [ \begin{array}{cc}v_0[n] & v_1[n] \\ v_2[n] & v_3[n]\end{array} \right ] = {\mathscr M} \left ( \left [ \begin{array}{c} g[n] \\ g_{{T_d}}[n] \end{array} \right ] \left [ \begin{array}{cc} \cos \theta & \sin \theta \end{array} \right ] \right ).
   :label: ldr-GI-meas

The measurements :math:`\{ v_l[n] \}_{l = 0}^{3}` are related to :math:`g[n]` and :math:`g_{T_d}[n]` via the systems of congruence equations

.. math:: \begin{cases} g[n] &= \epsilon ( \theta_0[n] + \jmath \theta_1[n] ) (p + \jmath q) + r_0[n] \\ g[n] &= \epsilon ( \psi_0[n] + \jmath \psi_1[n] ) (p - \jmath q) + r_1[n] \end{cases}

and

.. math:: \begin{cases} g_{T_d}[n] &= \epsilon ( \theta_2[n] + \jmath \theta_3[n] ) (p + \jmath q) + r_2[n] \\ g_{T_d}[n] &= \epsilon ( \psi_2[n] + \jmath \psi_3[n] ) (p - \jmath q) + r_3[n] \end{cases}

where :math:`\{ \theta_l[n], \psi_l[n] \}_{l=0}^{3} \in \mathbb{Z}^2`, :math:`r_0[n] = r_1^*[n] = (v_0[n] + \jmath v_1[n]) e^{-\jmath \theta}` and :math:`r_2[n] = r_3^*[n] = (v_2[n] + \jmath v_3[n]) e^{-\jmath \theta}`.

Problem Statement
+++++++++++++++++

Given measurements :math:`\{ v_l[n] \}_{l=0}^{3}` captured by the architecture in :numref:`Fig. %s <mc-usf-cplx-val>` with a threshold :math:`2 \rho e^{-\jmath\theta} = \epsilon (p + \jmath q)`, the signal :math:`g(t)` can be exactly recovered with :math:`N_{l} \geqslant \left ( 2 - \lfloor \frac{l}{2} \rfloor \right ) K, \; l\in \mathbb{I}_{4}` samples if :math:`\mathsf{GCD}(p, q) = 1`, :math:`T_d \leqslant \frac{\pi}{\mathsf{max}_k|\omega_k|}` and :math:`\left \| g(t) \right \|_{\infty} < \epsilon \frac{p^2 + q^2}{2}`.

.. admonition:: Recovery Algorithm

   **Input.** :math:`\{ v_l[n] \}_{l=0}^{3}, p, q, T_d`. **Output.** Signal parameters :math:`\{ f_k, c_k \}_{k=0}^{K - 1}`.

   #. Unfold :math:`g[n]` by solving the congruence equations :math:`\epsilon \begin{bmatrix} p & -q & 0 & 0\\ q & p & 0 & 0\\0 & 0 & p & q \\ 0 & 0 & -q & p \end{bmatrix} \begin{bmatrix} \theta_0\\ \theta_1\\ \psi_0\\ \psi_1 \end{bmatrix} + \begin{bmatrix}\mathsf{Re}(r_0[n])\\\mathsf{Im}(r_0[n])\\\mathsf{Re}(r_1[n])\\\mathsf{Im}(r_1[n])\end{bmatrix} = \begin{bmatrix}\mathsf{Re}(g[n])\\\mathsf{Im}(g[n])\\\mathsf{Re}(g[n])\\\mathsf{Im}(g[n])\end{bmatrix}`, and unfold :math:`g_{T_d}[n]` by solving the analogous system :math:`\epsilon \begin{bmatrix} p & -q & 0 & 0\\ q & p & 0 & 0\\0 & 0 & p & q \\ 0 & 0 & -q & p \end{bmatrix} \begin{bmatrix} \theta_2\\ \theta_3\\ \psi_2\\ \psi_3 \end{bmatrix} + \begin{bmatrix}\mathsf{Re}(r_2[n])\\\mathsf{Im}(r_2[n])\\\mathsf{Re}(r_3[n])\\\mathsf{Im}(r_3[n])\end{bmatrix} = \begin{bmatrix}\mathsf{Re}(g_{T_d}[n])\\\mathsf{Im}(g_{T_d}[n])\\\mathsf{Re}(g_{T_d}[n])\\\mathsf{Im}(g_{T_d}[n])\end{bmatrix}`.
   #. Use Prony's method to estimate the folded frequencies :math:`f_k^\prime`.
   #. Use least squares to obtain estimates of :math:`c_k`.
   #. Use least squares to find the phase terms :math:`e^{\jmath 2 \pi f_k T_d}`.
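To make the congruence step concrete, the brute-force Python sketch below unfolds a single complex sample from its two conjugate Gaussian-integer residues. The exhaustive search is an illustrative stand-in for the integer solver of step 1.

.. code-block:: python

   import numpy as np

   def unfold_gaussian_sample(r0, r1, p, q, eps, search=15):
       """Unfold one complex sample g from its residues modulo the conjugate
       Gaussian-integer thresholds eps*(p + 1j*q) and eps*(p - 1j*q).

       Brute-force over Gaussian-integer fold counts; the solution is unique
       when gcd(p, q) = 1 and |g| < eps*(p**2 + q**2)/2.
       """
       m0, m1 = eps * (p + 1j * q), eps * (p - 1j * q)
       best = None
       for a in range(-search, search + 1):
           for b in range(-search, search + 1):
               g = r0 + (a + 1j * b) * m0   # consistent with the first congruence
               z = (g - r1) / m1            # must be a Gaussian integer
               if abs(z - (np.round(z.real) + 1j * np.round(z.imag))) < 1e-9:
                   if best is None or abs(g) < abs(best):
                       best = g             # keep the smallest-magnitude candidate
       return best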
Sub-Nyquist USF Spectral Estimation via RCRT
______________________________________________

.. figure:: _static/img/real_valued_threshold_final.png
   :align: center
   :alt: MC-USF with real-valued threshold.
   :name: mc-usf-real-val
   :width: 550

   USF acquisition architecture with real-valued thresholds (:math:`\beta_0 = \epsilon c_0 / 2` and :math:`\beta_1 = \epsilon c_1 / 2`) that allows sub-Nyquist frequency estimation.

**Measurements.** The low dynamic range measurements captured by the designed architecture can be described as

.. math:: \begin{cases} u_0[n] &= \mathscr{M}_{\beta_0}(g[n]), \quad u_1[n] = \mathscr{M}_{\beta_1}(g[n]), \\ u_2[n] &= \mathscr{M}_{\beta_0}(g_{T_d}[n]), \quad u_3[n] = \mathscr{M}_{\beta_1}(g_{T_d}[n]). \end{cases}

Given measurements :math:`\{ u_l[n] \}_{l=0}^{3}` captured by the architecture in :numref:`Fig. %s <mc-usf-real-val>` with thresholds :math:`\beta_0` and :math:`\beta_1`, the signal :math:`g(t)` can be exactly recovered with :math:`N_{i} \geqslant \left ( 2 - \lfloor \frac{i}{2} \rfloor \right ) K, \; i\in \mathbb{I}_{4}` samples provided that :math:`T_d \leqslant \frac{\pi}{\mathsf{max}_k |\omega_k|}` and :math:`\left \| g(t) \right \|_{\infty} < \epsilon \frac{\mathrm{lcm}(c_0, c_1)}{2}`.

.. admonition:: Recovery Algorithm

   **Input.** :math:`\{ u_l[n] \}_{l=0}^{3}, \beta_0, \beta_1, T_d`. **Output.** Signal parameters :math:`\{ f_k, c_k \}_{k=0}^{K - 1}`.

   #. Unfold :math:`g[n]` and :math:`g_{T_d}[n]` by solving the congruence equations :math:`g[n] = 2 \beta_0 a_0[n] + u_0[n]`, :math:`g[n] = 2 \beta_1 a_1[n] + u_1[n]` and :math:`g_{T_d}[n] = 2 \beta_0 a_2[n] + u_2[n]`, :math:`g_{T_d}[n] = 2 \beta_1 a_3[n] + u_3[n]`.
   #. Use Prony's method to estimate the folded frequencies :math:`f_k^\prime`.
   #. Use least squares to obtain estimates of :math:`c_k`.
   #. Use least squares to find the phase terms :math:`e^{\jmath 2 \pi f_k T_d}`.

Example Code
++++++++++++

:download:`MATLAB Live Demo <_static/code/demo_FourierProny.mlx>`
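Analogously to the Gaussian-integer case, the real-valued congruences can be unfolded sample by sample. The sketch below brute-forces the fold counts :math:`a_0, a_1` and is an illustrative stand-in for the robust CRT (RCRT) solver.

.. code-block:: python

   import numpy as np

   def unfold_rcrt_sample(u0, u1, beta0, beta1, search=50):
       """Unfold one sample from two modulo measurements with distinct real
       thresholds: g = 2*beta0*a0 + u0 = 2*beta1*a1 + u1, a0, a1 integers.

       Brute-force over fold counts (illustrative only).
       """
       best, best_err = None, np.inf
       for a0 in range(-search, search + 1):
           g0 = 2 * beta0 * a0 + u0
           a1 = np.round((g0 - u1) / (2 * beta1))  # nearest integer fold count
           err = abs(g0 - (2 * beta1 * a1 + u1))   # congruence mismatch
           if err < best_err:
               best, best_err = g0, err
       return best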
.. _Iter_SiS:

ITER-SiS
__________________________________

**Iter-SiS** (Iterative Signal Sieving) is a robust recovery method that operates in the canonical domain :cite:p:`Guo:2023:C`. Its core idea is to separate the distinct signal components within the folded samples using an alternating minimization strategy.

Problem Statement
+++++++++++++++++

Let :math:`g \in \mathcal{B}_\Omega` be a bandlimited signal. In practice, the noisy, folded samples are given by

.. math:: y_{w}\left[n\right] = y[n] + w\left[n\right],\quad y[n] = \mathscr{M}_{\lambda}\left(g\left(nT\right)\right), \quad n = 0,\cdots,N-1

where :math:`T` denotes the sampling period and :math:`w` is the measurement noise, characterized by :math:`\frac{1}{N}\sum\nolimits_{n=0}^{N-1} \left| w\left[n\right] \right|^{2} = \sigma^{2}`. Here, we specify the noise power rather than assuming a specific stochastic noise distribution. Our goal is to reconstruct :math:`\{g[n]\}_{n=0}^{N-1}` from :math:`\{y_{w}[n]\}_{n=0}^{N-1}`.

Recovery Method
+++++++++++++++

Let :math:`\mathbb{I}_{N}=\{0,\cdots, N-1\}, N\in \mathbb{Z}^{+}`. From the modular decomposition property, we know that

.. math:: y[n] = g[n] - \varepsilon_{g}[n] \equiv g[n] - \sum\nolimits_{m=0}^{M-1} c_m u[n - n_{m}]
   :label: decom

where :math:`u[\cdot]` denotes the discrete unit step function, and :math:`c_m\in 2\lambda\mathbb{Z}` and :math:`n_{m} \in \mathbb{I}_{N}` represent the folding amplitude and folding instant, respectively. Denote by :math:`\underline{y}[n]=y[n+1]-y[n]` the finite difference of :math:`y[n]`. Then, taking the finite difference of :math:numref:`decom` leads to

.. math:: \underbrace{\underline{y}[n]}_{\mathsf{Modulo \; Samples}} = \underbrace{\underline{g}[n]\in\mathcal{B}_{\Omega}}_{\mathsf{Bandlimited \; Sieve}} - \quad\underbrace{\underline{\varepsilon}_{g}[n]\in2\lambda \mathbb{Z}}_{\mathsf{Sparse \; Sieve}},
   :label: signal_model

where :math:`\underline{\varepsilon}_{g}[n] = \sum_{m=0}^{M-1} c_m \delta[n - n_{m}]` and :math:`\delta[\cdot]` is the discrete delta function. Hence, the measurements :math:`y[n]` can be decomposed into two constituent components: (1) the bandlimited signal :math:`\underline{g}[n]` and (2) the sparse signal :math:`\underline{\varepsilon}_{g}[n]`. This distinction in signal structure facilitates the separation of the residue :math:`\underline{\varepsilon}_{g}[n]`, leading to the concept of a *signal sieving strategy*.

**Energy Concentration.** Within the finite observation window, :math:`\underline{g}[n]` is characterized by enforcing energy concentration in the low-pass region, which is implemented as

.. math:: \underline{g}[n] \approxeq \underline{g}_{\Omega}[n] = (\underline{g}*\varphi_{\Omega}) [n] \quad \quad \mathsf{\small (Bandlimited \;\; Projection)}.
   :label: BP

**Sparse Characterization on the Continuum.** To achieve accurate spike estimation, we utilize a continuous-time, parametric representation of the residue,

.. math:: \underline{\varepsilon}_{g}[n] = \sum\nolimits_{m=0}^{M-1} \frac{\breve{c}_{m}}{1 - z_m^{-1} \xi_{N-1}^{n}} \equiv \frac{\mathsf{P}(\xi_{N-1}^{n})}{\mathsf{Q}(\xi_{N-1}^{n})}
   :label: res

where

- :math:`\breve{c}_{m} = (1 - e^{-\jmath \frac{2\pi n_{m}}{N - 1}}) c_m/(N - 1)` and :math:`z_m = e^{\jmath \frac{2\pi n_{m}}{N - 1}}`,
- :math:`\mathsf{P}(\xi_{N-1}^{n})` and :math:`\mathsf{Q}(\xi_{N-1}^{n})` are trigonometric polynomials of degree :math:`M-1` and :math:`M`, respectively, with :math:`\xi_{N-1}^{n} = e^{\jmath \frac{2\pi n}{N - 1}}`.

Hence, in the presence of noise, the signal recovery problem can be formulated as

.. math:: {\min}\limits_{\{\mathsf{P},\; \mathsf{Q},\; \underline{g}_{\Omega} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \underline{g}_{\Omega}[n] - \underline{y}_{w}\left[n\right] - \frac{\mathsf{P}(\xi_{N-1}^{n})}{\mathsf{Q}(\xi_{N-1}^{n})} }\right|}^2}},\quad \mbox{s.t.} \quad \begin{array}{l} \underline{g}_{\Omega} = \underline{g}_{\Omega} * \varphi_{\Omega}\\ c_m\in 2\lambda\mathbb{Z} \end{array}.
   :label: optimization

Alternating Minimization Strategy
+++++++++++++++++++++++++++++++++

We decouple :math:numref:`optimization` into two tractable sub-problems.

**Sub-Problem P1.** Assuming :math:`\underline{g}_{\Omega}` is known, :math:numref:`optimization` reduces to

.. math:: {\min}\limits_{\{\mathsf{P},\; \mathsf{Q} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \widetilde{\underline{\varepsilon}}_{g}[n] - \frac{\mathsf{P}(\xi_{N-1}^{n})}{\mathsf{Q}(\xi_{N-1}^{n})} }\right|}^2}}, \quad \mbox{where}\quad \widetilde{\underline{\varepsilon}}_{g}[n] = 2\lambda \left\lfloor ({\underline{g}_{\Omega}[n] - \underline{y}_{w}\left[n\right] + \lambda})/({2\lambda}) \right\rfloor
   :label: spest

and :math:`\left\lfloor \cdot \right\rfloor` is the floor function. To solve :math:numref:`spest`, we adopt an iterative strategy: we construct a collection of estimates of :math:`\mathsf{Q}` and select the one that minimizes the mean-squared error (MSE) in :math:numref:`spest`. These estimates are found iteratively by solving the minimization below (assuming :math:`\mathsf{Q}^{[j]} \approx \mathsf{Q}`),

.. math:: \{\mathsf{P}^{[j+1]}, \mathsf{Q}^{[j+1]}\} = {\arg\min}\limits_{\{\mathsf{P},\; \mathsf{Q} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \frac{\widetilde{\underline{\varepsilon}}_{g}[n] \mathsf{Q}(\xi_{N-1}^{n}) - \mathsf{P}(\xi_{N-1}^{n})}{ \mathsf{Q}^{[j]} (\xi_{N-1}^{n})} }\right|}^2}}, \quad j\in \mathbb{I}_{j_{\max}}.
   :label: spike

Once :math:`\{\mathsf{P}, \mathsf{Q}\}` is known, the residue :math:`{\underline{\varepsilon}}_{g}[n]` can be recovered via :math:numref:`res`.

**Sub-Problem P2.** With :math:`{\underline{\varepsilon}}_{g}[n]` known, :math:numref:`optimization` boils down to

.. math:: {\min}\limits_{\{\underline{g}_{\Omega} \}} \quad \sum\nolimits_{n = 0}^{N - 2} {{{\left| { \widetilde{\underline{g}}_{\Omega}[n] - \underline{g}_{\Omega}[n] }\right|}^2}}, \quad \mbox{where} \quad \widetilde{\underline{g}}_{\Omega}[n] = \underline{y}_{w}\left[n\right] + {\underline{\varepsilon}}_{g}[n], \quad \underline{g}_{\Omega} = \underline{g}_{\Omega} * \varphi_{\Omega},

which is a convex problem with a closed-form solution given by

.. math:: \underline{g}_{\Omega}[n] = (\widetilde{\underline{g}}_{\Omega} * \varphi_{\Omega})[n].
   :label: signal
**Stopping Criterion.** We update the estimates :math:`\{{\underline{\varepsilon}}_{g}^{[i]}[n], \underline{g}_{\Omega}^{[i]}\}, i\in\mathbb{I}_{i_{\max}}`, by iterating between :math:numref:`spike` and :math:numref:`signal` until the following criterion is satisfied

.. math:: \frac{1}{N}\sum\nolimits_{n = 0}^{N - 1} {{{\left| { {{g}}_{\Omega}^{[i]}[n] - \varepsilon_{g}^{[i]}[n] - y_{w}[n] }\right|}^2}} \leqslant \sigma^{2}, \;\; \mbox{where} \;\; \{ {{g}}_{\Omega}^{[i]}, \; \varepsilon_{g}^{[i]} \} = \Delta^{-1} \{ \underline{g}_{\Omega}^{[i]},\; \underline{\varepsilon}_{g}^{[i]} \}.
   :label: sc

This ensures that the signal estimates match the original folded measurements within the given tolerance level :math:`\sigma`. The figure below depicts the visual output of the signal sieving process.

.. figure:: _static/img/Algorithm_v1.png
   :width: 60%
   :align: center

   Flowchart of the iterative signal sieving.

The procedure is summarized below:

.. admonition:: Iterative Signal Sieving Algorithm

   **Input.** :math:`\{y_{w}[n]\}_{n=0}^{N-1}, \lambda` and :math:`\Omega`. **Initialization.** :math:`\varepsilon_{g}^{[0]} \leftarrow y_{w}`.

   #. Compute :math:`\{ \underline{y}_{w},\underline{\varepsilon}_{g}\}`.
   #. For :math:`i=1,\cdots,i_{\max}`

      #. Update :math:`{\underline{\varepsilon}}_{g}^{[i]}[n]` via :math:numref:`spike`.
      #. Update :math:`\underline{g}_{\Omega}^{[i]}` via :math:numref:`signal`.
      #. If :math:numref:`sc` is satisfied, terminate all loops; else, return to Step 3.

   **Output.** Recovered signal :math:`\{g[n]\}_{n=0}^{N-1}`.

Example Code
++++++++++++

`MATLAB Live Demo <_static/html/demo.html>`_

:download:`Download the MATLAB Code for IterSiS <_static/code/IterSiS.zip>`
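For readers who prefer Python, the sketch below mimics the alternating loop. Two simplifications are assumed: the P1 step is reduced to hard rounding onto the :math:`2\lambda\mathbb{Z}` grid (instead of the rational-function fit), and the bandlimited projection :math:`\varphi_\Omega` is realized as an FFT mask.

.. code-block:: python

   import numpy as np

   def lowpass_project(x, omega, T):
       """Bandlimited projection via FFT masking (stand-in for phi_Omega)."""
       X = np.fft.fft(x)
       w = 2 * np.pi * np.fft.fftfreq(len(x), d=T)
       X[np.abs(w) > omega] = 0
       return np.real(np.fft.ifft(X))

   def iter_sis(y_w, lam, omega, T, sigma2, i_max=50):
       """Simplified Iter-SiS: alternate sparse rounding (P1) and
       bandlimited projection (P2) until the stopping criterion holds."""
       dy = np.diff(y_w)
       g_lp = np.zeros_like(dy)
       for _ in range(i_max):
           # P1 (simplified): residue estimate on the 2*lambda*Z grid
           d_eps = 2 * lam * np.floor((g_lp - dy + lam) / (2 * lam))
           # P2: closed-form bandlimited projection
           g_lp = lowpass_project(dy + d_eps, omega, T)
           # stopping criterion on the anti-differenced estimates
           g_hat = np.cumsum(np.concatenate(([y_w[0]], g_lp)))
           eps_hat = np.cumsum(np.concatenate(([0.0], d_eps)))
           if np.mean(np.abs(g_hat - eps_hat - y_w) ** 2) <= sigma2:
               break
       return g_hat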