There is a long history of algorithmic development for solving inverse problems arising in sensing and imaging systems and beyond. Examples include medical and computational imaging, compressive sensing, and community detection in networks. Until recently, most algorithms for solving inverse problems in the imaging and network sciences were based on static signal models derived from physics or intuition, such as wavelets or sparse representations.
Today, the best-performing approaches to the aforementioned image reconstruction and sensing problems are based on deep learning. These approaches learn various elements of the method, including i) signal representations, ii) stepsizes and parameters of iterative algorithms, iii) regularizers, and iv) entire inverse maps. For example, it has recently been shown that transforming an iterative, physics-based algorithm into a deep network whose parameters are learned from training data offers faster convergence and/or better-quality solutions for a variety of inverse problems. Moreover, even with very little or no learning, deep neural networks enable superior performance on classical linear inverse problems such as denoising and compressive sensing. Motivated by these success stories, researchers are redesigning traditional imaging and sensing systems.
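To make the unrolling idea concrete, here is a minimal sketch of ISTA (iterative soft-thresholding for sparse recovery) written as a fixed-depth network. The function name `unrolled_ista` and the specific parameterization are illustrative assumptions, not a particular published method; in learned unrolling (LISTA-style approaches), the per-layer weights and thresholds below would be trained from data rather than fixed by the forward model `A`.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, A, num_layers=10, theta=0.1):
    """Sketch: ISTA for min_x 0.5*||A x - y||^2 + theta*||x||_1,
    unrolled into a network with `num_layers` identical layers.

    In a learned-unrolling method, W1, W2, and the threshold would be
    trainable per-layer parameters instead of being fixed by A.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W1 = np.eye(A.shape[1]) - A.T @ A / L    # would be a learned weight matrix
    W2 = A.T / L                             # would also be learned
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):              # fixed depth = number of layers
        x = soft_threshold(W1 @ x + W2 @ y, theta / L)
    return x
```

Each loop iteration is one "layer": an affine map followed by a fixed nonlinearity (soft-thresholding), which is exactly the structure that makes iterative algorithms amenable to end-to-end training with a small, fixed depth.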
However, the field remains largely wide open, with a range of theoretical and practical questions unanswered. In particular, deep-neural-network-based approaches often lack the guarantees of traditional physics-based methods and, while typically superior, can make drastic reconstruction errors, such as hallucinating a tumor in an MRI reconstruction.
This workshop aims to bring together theoreticians and practitioners to chart recent advances and discuss new directions in deep-neural-network-based approaches for solving inverse problems in the imaging and network sciences.
|Reinhard Heckel, Paul Hand, Alex Dimakis, Joan Bruna, Deanna Needell, Richard Baraniuk|
|The spiked matrix model with generative priors (Talk)|
|Robust One-Bit Recovery via ReLU Generative Networks: Improved Statistical Rate and Global Landscape Analysis (Talk)|
|Shuang Qiu, Xiaohan Wei, Zhuoran Yang|
|Coffee Break (Break)|
|Computational microscopy in scattering media (Talk)|
|Denoising via Early Stopping (Talk)|
|Neural Reparameterization Improves Structural Optimization (Talk)|
|Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus|
|Lunch Break (Break)|
|Learning-Based Low-Rank Approximations (Talk)|
|Blind Denoising, Self-Supervision, and Implicit Inverse Problems (Talk)|
|Learning Regularizers from Data (Talk)|
|Jonathan Scarlett, Piotr Indyk, Ali Vakilian, Adrian Weller, Partha Mitra, Benjamin Aubin, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová, Kristina Monakhova, Joshua Yurtsever, Laura Waller, Hendrik Sommerhoff, Michael Moeller, Rushil Anirudh, Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jayaraman J. Thiagarajan, Salman Asif, Michael Gillhofer, Johannes Brandstetter, Sepp Hochreiter, Felix Petersen, Dhruv Patel, Assad Oberai, Akshay Kamath, Sushrut Karmalkar, Eric Price, Ali Ahmed, Zahra Kadkhodaie, Sreyas Mohan, Eero Simoncelli, Carlos Fernandez-Granda, Oscar Leong, Wesam Sakla, Rebecca Willett, Stephan Hoyer, Jascha Sohl-Dickstein, Sam Greydanus, Gauri Jagatap, Chinmay Hegde, Michael Kellman, Jon Tamir, Numan Laanait, Ousmane Dia, Mirco Ravanelli, Jonathan Binas, Negar Rostamzadeh, Shirin Jalali, Tiantian Fang, Alex Schwing, Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, Simon Lacoste-Julien, Stella Yu, Arya Mazumdar, Ankit Singh Rawat, Yue Zhao, Jianshu Chen, Rebecca Li, Hubert Ramsauer, Gabrio Rizzuti, Nikolaos Mitsakos, Dingzhou Cao, Thomas Strohmer, Yang Li, Pei Peng, Greg Ongie|