# Constant-Expansion Suffices for Compressed Sensing with Generative Priors

@article{Daskalakis2020ConstantExpansionSF,
  title   = {Constant-Expansion Suffices for Compressed Sensing with Generative Priors},
  author  = {Constantinos Daskalakis and Dhruv Rohatgi and Manolis Zampetakis},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2006.04237}
}

Generative neural networks have been empirically found very promising in providing effective structural priors for compressed sensing, since they can be trained to span low-dimensional data manifolds in high-dimensional signal spaces. Despite the non-convexity of the resulting optimization problem, it has also been shown theoretically that, for neural networks with random Gaussian weights, a signal in the range of the network can be efficiently, approximately recovered from a few noisy…
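To make the recovery problem in the abstract concrete, here is a minimal toy sketch (not the paper's algorithm): a random two-layer ReLU network with Gaussian weights stands in for a generative prior $G$, and a latent code is fit by gradient descent on the measurement loss $\|AG(z)-y\|^2$. All dimensions, the step size, and the finite-difference gradient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions): latent k, signal n, measurements m.
k, n, m = 4, 100, 40

# A random two-layer ReLU generator with Gaussian weights, standing in
# for a trained generative prior G: R^k -> R^n.
W1 = rng.normal(size=(64, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, 64)) / np.sqrt(64)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

# Gaussian measurements of a signal that lies in the range of G.
A = rng.normal(size=(m, n)) / np.sqrt(m)
z_true = rng.normal(size=k)
y = A @ G(z_true)

def loss(z):
    r = A @ G(z) - y
    return 0.5 * float(r @ r)

# Gradient descent with a finite-difference gradient (keeps the sketch
# dependency-free); the step size is chosen conservatively for stability.
z = 0.1 * rng.normal(size=k)
loss0 = loss(z)
eps, lr = 1e-6, 1e-3
I = np.eye(k)
for _ in range(3000):
    grad = np.array([(loss(z + eps * I[i]) - loss(z - eps * I[i])) / (2 * eps)
                     for i in range(k)])
    z -= lr * grad
```

With $m \ll n$ the measurements underdetermine the signal in $\mathbb{R}^n$, but the search happens in the $k$-dimensional latent space; descent decreases the measurement loss here, and whether such descent provably reaches the global optimum is exactly what the theory above analyzes.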


#### 5 Citations

Robust compressed sensing of generative models

- Computer Science
- ArXiv
- 2020

This paper proposes an algorithm inspired by the Median-of-Means (MOM) that guarantees recovery for heavy-tailed data, even in the presence of outliers, and experiments demonstrate the predicted robustness.
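The Median-of-Means idea referenced here is simple to state: partition the samples into blocks, average each block, and return the median of the block means, which tames heavy tails and outliers. A minimal generic sketch of the estimator (not the paper's full recovery algorithm):

```python
import numpy as np

def median_of_means(samples, num_blocks):
    """Estimate a mean robustly: split into blocks, average each, take the median.

    A few corrupted samples can ruin at most a few block means, and the
    median ignores those, unlike the plain empirical mean.
    """
    blocks = np.array_split(np.asarray(samples, dtype=float), num_blocks)
    return float(np.median([b.mean() for b in blocks]))

# One gross outlier shifts the plain mean badly but leaves MOM unchanged:
# block means of [1,2,3,4,5,1000] with 3 blocks are [1.5, 3.5, 502.0],
# whose median is 3.5.
median_of_means([1, 2, 3, 4, 5, 1000], 3)  # → 3.5
```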

Compressive Phase Retrieval: Optimal Sample Complexity with Deep Generative Priors

- Computer Science, Mathematics
- ArXiv
- 2020

This work resolves the open problem in compressive phase retrieval and demonstrates that generative priors can lead to a fundamental advance by permitting optimal sample complexity by a tractable algorithm in this challenging nonlinear inverse problem.

No Statistical-Computational Gap in Spiked Matrix Models with Generative Network Priors †

- Medicine, Computer Science
- Entropy
- 2021

A gradient descent method for minimizing a nonlinear least squares objective over the range of an expansive Gaussian neural network is presented, and it is shown to recover, in polynomial time, an estimate of the underlying spike with rate-optimal sample complexity and dependence on the noise level.

Optimal Sample Complexity of Gradient Descent for Amplitude Flow via Non-Lipschitz Matrix Concentration

- Computer Science, Mathematics
- ArXiv
- 2020

It is proved that the discontinuous matrix-valued operator satisfies a uniform matrix concentration inequality with high probability when the measurement vectors are Gaussian, as soon as $m = \Omega(n)$, which guarantees local convergence for Gaussian measurements at optimal sample complexity.

Globally Injective ReLU Networks

- Computer Science, Mathematics
- ArXiv
- 2020

It is proved that, under mild technical conditions, any Lipschitz map can be approximated by an injective neural network, establishing a theoretical basis for the study of nonlinear inverse and inference problems using neural networks.

#### References

Showing 1–10 of 23 references

A Provably Convergent Scheme for Compressive Sensing Under Random Generative Priors

- Mathematics
- 2018

Deep generative modeling has led to new and state-of-the-art approaches for enforcing structural priors in a variety of inverse problems. In contrast to priors given by sparsity, deep models can…

Compressed Sensing using Generative Models

- Computer Science, Mathematics
- ICML
- 2017

This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly $O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$ recovery guarantee.

Robust One-Bit Recovery via ReLU Generative Networks: Improved Statistical Rates and Global Landscape Analysis

- Mathematics, Computer Science
- ArXiv
- 2019

A new framework is considered for the one-bit sensing problem, where sparsity is implicitly enforced by mapping a low-dimensional representation $x_0$ through a known $n$-layer ReLU generative network $G$; it proposes to recover the target $G(x_0)$ via an unconstrained empirical risk minimization (ERM) problem under a much weaker sub-exponential measurement assumption.

Nonasymptotic Guarantees for Low-Rank Matrix Recovery with Generative Priors

- Computer Science, Mathematics
- ArXiv
- 2020

This work provides a non-asymptotic analysis with optimal sample complexity, up to logarithmic factors, for low-rank matrix recovery under an expansive Gaussian network prior, and establishes that generative priors have no computational-to-statistical gap for structured low-rank matrix recovery in the finite-data, non-asymptotic regime.

Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk

- Computer Science, Mathematics
- IEEE Transactions on Information Theory
- 2020

These results constitute the first theoretical guarantees which establish the favorable global geometry of these non-convex optimization problems, and they bridge the gap between the empirical success of enforcing deep generative priors and a rigorous understanding of non-linear inverse problems.

Compressed Sensing with Deep Image Prior and Learned Regularization

- Computer Science, Mathematics
- ArXiv
- 2018

It is proved that single-layer DIP networks with constant-fraction over-parameterization will perfectly fit any signal through gradient descent, despite the problem being non-convex, which provides justification for early stopping.

Invertibility of Convolutional Generative Networks from Partial Measurements

- Computer Science
- NeurIPS
- 2018

It is rigorously proved that, under some mild technical assumptions, the input of a two-layer convolutional generative network can be deduced from the network output efficiently using simple gradient descent, implying that the mapping from the low-dimensional latent space to the high-dimensional image space is bijective.

Deep Denoising: Rate-Optimal Recovery of Structured Signals with a Deep Prior

- Computer Science
- ArXiv
- 2018

This paper provides an iterative algorithm minimizing a non-convex loss that provably removes noise energy by a fraction $\sigma^2 k/n$ and demonstrates in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.

Rate-optimal denoising with deep neural networks

- Computer Science, Engineering
- 2018

This paper considers the problem of denoising an image corrupted by additive Gaussian noise using two generator-based approaches; it states and analyzes a simple gradient algorithm that minimizes a non-convex loss function and provably reduces the noise energy by a factor of $O(k/n)$.

Inverting Deep Generative Models, One Layer at a Time

- Computer Science, Mathematics
- NeurIPS
- 2019

This paper shows that, in the realizable case, single-layer inversion can be performed exactly in polynomial time by solving a linear program, and it provides provable error bounds under different norms for reconstructing noisy observations.