Research

Current research

My recent work centers on randomized numerical methods. The primary goal is to design practical algorithms that can advance machine learning and scientific computing. We have applied these techniques to model problems in a wide range of fields, including image processing, quantum information science, dynamical systems, and scientific simulation. In support of this program, I have also developed foundational new tools in probability and convex geometry.

For a comprehensive account of this research, see the publications page. Videos of research talks and short courses are available on YouTube.

[Inset: Sketchy SVD finds the El Niño phenomenon.]

Research overview

I started my research career working on sparse approximation, compressed sensing, and frame theory. My early papers on orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), ℓ1 minimization methods, sub-Nyquist sampling, and frame design have been influential.

At Caltech, I initiated a new research program on randomized algorithms for linear algebra and optimization. Our randomized SVD algorithms are extensively used for machine learning and scientific computing applications. [Inset: Result from a large-scale SVD computation.] More recently, we introduced a new class of fast randomized algorithms that can solve linear systems and eigenvalue problems faster than classical algorithms. We have also designed randomized algorithms that can handle large-scale semidefinite optimization problems that were previously intractable.

To address technical challenges in the design and analysis of randomized algorithms, I started a line of research on high-dimensional probability. My work resulted in a comprehensive theory of matrix concentration that has become textbook material in statistics, machine learning, numerical analysis, and theoretical computer science. We also identified new phase transition phenomena that occur in convex optimization problems with random data.

I am continuing to work on randomized numerical methods and random matrix theory. We are also investigating new applications in quantum information science and quantum chemistry that depend on scalable algorithms and matrix concentration tools.

[Inset: EHT 2025 simulated reconstruction.]

Randomized matrix and tensor computations

Numerical linear algebra remains an essential primitive for machine learning, scientific computing, and other fields. There are excellent classical algorithms for solving small and medium problem instances to high precision, but large-scale problems remain challenging. Over the last decade, new randomized algorithms have expanded the range of linear algebra problems that we can solve quickly and reliably. These advances have remarkable implications for science and engineering, including proposed methods for black hole imaging [inset, courtesy of Aviad Levis].

I have made fundamental algorithmic contributions to the field of randomized numerical linear algebra. I have also written several influential surveys that have brought these techniques to a wide audience.

In particular, Gunnar Martinsson and I played a leading role in developing and analyzing randomized SVD algorithms. These methods are widely used in practice. They also serve as subroutines for solving other problems, including preconditioning linear systems, forecasting dynamical systems, and scalable semidefinite programming.
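
As a rough illustration of the core idea, here is a minimal randomized SVD in NumPy. This is a didactic sketch, not our production algorithm; the oversampling parameter and the single-pass structure below are my own simplifications.

    import numpy as np

    def randomized_svd(A, rank, oversample=10, rng=None):
        """Sketch of a basic randomized SVD: sample the range of A with a
        random test matrix, then take an exact SVD of the small projection."""
        rng = np.random.default_rng(rng)
        m, n = A.shape
        k = min(rank + oversample, min(m, n))
        Omega = rng.standard_normal((n, k))   # Gaussian test matrix
        Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sample
        B = Q.T @ A                           # small k-by-n projected matrix
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank, :]

A few matrix products with a random test matrix capture the dominant subspace, which is what makes these methods fast and reliable on large inputs with decaying spectra.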

Recently, Yuji Nakatsukasa and I have designed a new class of fast randomized algorithms for solving nonsymmetric linear systems and eigenvalue problems. Our methods are potentially more efficient than classical algorithms, and we are actively investigating the scope of application.

Collaborators

Katie Bouman, Volkan Cevher, Yifan Chen, Ethan Epperly, Olivier Fercoq, Zach Frangella, Charles Gammie, Dimitrios Giannakis, Yang Guo, Nathan Halko, Amelia Henriksen, Daeyoung Lee, Aviad Levis, Charlene Luo, Gunnar Martinsson, Beverley McKeon, Rashad Moarref, Yuji Nakatsukasa, Deanna Needell, Clément Probel, Ati Sharma, Yiming Sun, Madeleine Udell, Rachel Ward, Rob Webber, and Alp Yurtsever

Surveys

Papers

Applications

Presentations

Scalable semidefinite programming

Semidefinite programs (SDPs) require optimization over a matrix variable that is constrained to be positive semidefinite. Examples of SDPs include relaxations of combinatorial problems, such as MaxCut, and signal processing problems, such as phase retrieval. These optimization problems have remarkable potential for data science applications, but practitioners often object that SDPs are hard to solve at the scale demanded by real-world instances.
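
As a concrete example, the MaxCut relaxation can be written down in a few lines. The snippet below uses CVXPY, which is my choice for illustration rather than part of this work, and it only runs at toy scale; the point is that the decision variable is an entire n-by-n PSD matrix.

    import cvxpy as cp
    import numpy as np

    # MaxCut SDP relaxation for a graph with weight matrix W:
    #   maximize (1/4) * sum_ij W_ij * (1 - X_ij)
    #   subject to diag(X) = 1, X positive semidefinite.
    rng = np.random.default_rng(0)
    n = 20
    W = rng.integers(0, 2, (n, n))
    W = np.triu(W, 1) + np.triu(W, 1).T       # symmetric 0/1 edge weights

    X = cp.Variable((n, n), PSD=True)         # n^2 entries must be stored
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
                         [cp.diag(X) == 1])
    problem.solve()
    print("SDP upper bound on the maximum cut:", problem.value)

The quadratic storage of the matrix variable is exactly what blocks classical methods at large scale.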

Volkan Cevher, Madeleine Udell, Alp Yurtsever, and I developed the first practical algorithms for provably solving large SDPs. [Inset: Fourier ptychographic imaging via scalable SDP.] The key idea is to rigorously control storage costs. Our approach marries primal-dual optimization algorithms with tools from randomized matrix computations. On a laptop-equivalent computer (in 2020), our prototype implementation could already solve SDP instances with over 10¹⁴ real variables.
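
The storage-control principle can be illustrated with a simple Nyström-type sketch of a PSD matrix: maintain only a thin random sketch, and reconstruct a low-rank approximation from it when needed. The function below is a bare-bones sketch under my own parameter choices, omitting the numerical stabilization used in practice.

    import numpy as np

    def nystrom_sketch(A, k, rng=None):
        """Rank-k Nystrom approximation of a PSD matrix A from one sketch.
        Returns factors (Y, core) with A ~= Y @ core @ Y.T, so the storage
        is one n-by-k matrix plus a k-by-k core, never a second n-by-n array."""
        rng = np.random.default_rng(rng)
        n = A.shape[0]
        Omega = rng.standard_normal((n, k))   # random test matrix
        Y = A @ Omega                         # the sketch: the only access to A
        core = np.linalg.pinv(Omega.T @ Y)    # small k-by-k core
        return Y, core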

Collaborators

Volkan Cevher, Lijun Ding, Olivier Fercoq, Madeleine Udell, and Alp Yurtsever

Papers

Presentations

Matrix concentration

Structured random matrices arise in a wide variety of fields, including machine learning, statistics, numerical analysis, and theoretical computer science. To understand the behavior of these random matrices, it is critical to have theoretical tools that are easy to use, flexible, and accurate. One such class of results, called matrix concentration inequalities, allows us to control the spectral-norm distance of a random matrix from its expectation using simple summary parameters. These methods have had a major impact in applications. Among other things, matrix concentration tools are used to justify fast randomized algorithms for the solution of Laplacian systems.
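
One representative example, stated here in simplified form, is the matrix Bernstein inequality for a sum of independent, centered random matrices $X_k$ of dimension $d_1 \times d_2$ with $\|X_k\| \le L$:

\[
\mathbb{P}\left\{ \Bigl\| \sum\nolimits_k X_k \Bigr\| \ge t \right\}
\;\le\; (d_1 + d_2) \, \exp\!\left( \frac{-t^2/2}{v + Lt/3} \right),
\quad \text{where } v = \max\Bigl\{ \Bigl\| \sum\nolimits_k \mathbb{E}[X_k X_k^*] \Bigr\|,\; \Bigl\| \sum\nolimits_k \mathbb{E}[X_k^* X_k] \Bigr\| \Bigr\}.
\]

The bound depends only on two summary parameters, the uniform norm bound $L$ and the matrix variance statistic $v$, together with a dimensional factor that distinguishes the matrix result from its scalar antecedent.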

Motivated by my own work on randomized algorithms, I wrote a sequence of papers that develops a comprehensive theory of matrix concentration that parallels the classical theory of scalar concentration. These results can handle several powerful random matrix models, including sums of independent random matrices, matrix martingales, and matrix-valued functions with independent arguments. These tools have become textbook material.

Classical matrix concentration results are sharp for many examples, but they are suboptimal for some important models. In a breakthrough paper, I obtained the first general-purpose matrix concentration inequalities that can yield optimal results for classical random matrix ensembles. The tools from this paper serve as the foundation for recent advances.

Recently, I have been pursuing applications of matrix concentration in quantum information science. This approach yields near-optimal bounds for quantum state tomography and for randomized approximation of Hamiltonians.

Collaborators

Chi-Fang (Anthony) Chen, Yuhua (Richard) Chen, Brendan Farrell, Alex Gittens, Madalin Guţă, De Huang, Hsin-Yuan (Robert) Huang, Michael Jordan, Jonas Kahn, Richard Kueng, Lester Mackey, Jon Niles-Weed, Daniel Paulin, and Rachel Ward

Surveys

Papers

Applications

Presentations

Phase transitions

A phase transition is a sharp change in the character of a computational problem as its parameters vary. In the 2000s, a massive body of empirical work suggested that many statistical optimization problems exhibit a phase transition when the amount of data increases. Examples include ℓ1 minimization methods for sparse signal recovery and Schatten 1-norm minimization methods for low-rank matrix recovery. When there is an inadequate amount of data, the statistical optimization method fails to complete the inference task. As soon as the amount of data is adequate, the method succeeds. As the amount of data increases further, the method becomes increasingly robust to noise. [Inset: Phase transition for decoding by linear programming.]
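
This phenomenon is easy to reproduce numerically. The toy experiment below is my own illustration with arbitrary problem sizes: it solves the ℓ1 minimization problem as a linear program for an increasing number of Gaussian measurements, and the empirical recovery rate jumps sharply from 0 to 1.

    import numpy as np
    from scipy.optimize import linprog

    def l1_recover(A, b):
        """Solve min ||x||_1 subject to Ax = b as a linear program,
        using the standard split x = u - v with u, v >= 0."""
        m, n = A.shape
        c = np.ones(2 * n)
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
        return res.x[:n] - res.x[n:]

    rng = np.random.default_rng(1)
    n, s, trials = 100, 10, 20
    x0 = np.zeros(n)
    x0[:s] = rng.standard_normal(s)              # s-sparse target signal

    for m in range(10, n + 1, 10):
        hits = 0
        for _ in range(trials):
            A = rng.standard_normal((m, n))      # random Gaussian measurements
            hits += np.linalg.norm(l1_recover(A, A @ x0) - x0) < 1e-5
        print(f"m = {m:3d}: empirical recovery rate {hits / trials:.2f}")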

In 2012, my research group developed the first proofs that phase transitions are ubiquitous in statistical optimization problems. Our primary argument is based on tools from conic integral geometry and our discovery that conic intrinsic volumes concentrate. We also identified a versatile argument that combines duality and comparison theorems for Gaussian processes; this approach was developed intensively by Babak Hassibi's group. Later, we proved that these phase transitions are universal over a class of data distributions.
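
Stated informally, and suppressing the width of the transition window (which is on the order of $\sqrt{n}$), the result locates the phase transition at the statistical dimension of a descent cone:

\[
m \;\gtrsim\; \delta\bigl(\mathcal{D}(f, x_0)\bigr) \;\Longrightarrow\; \text{the convex program succeeds with high probability},
\qquad
m \;\lesssim\; \delta\bigl(\mathcal{D}(f, x_0)\bigr) \;\Longrightarrow\; \text{it fails with high probability},
\]

where $m$ is the number of random measurements, $x_0 \in \mathbb{R}^n$ is the signal to be recovered, $\mathcal{D}(f, x_0)$ is the descent cone of the regularizer $f$ at $x_0$, and $\delta$ denotes the statistical dimension of a convex cone.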

Collaborators

Dennis Amelunxen, Stephen Becker, John Bruer, Volkan Cevher, Martin Lotz, Michael McCoy, Ivan Nourdin, Samet Oymak, and Giovanni Peccati

Papers

Applications

Presentations

Sparse approximation and compressed sensing

Sparse approximation refers to the computational problem of finding a sparse approximate solution to a system of linear equations. Compressed sensing is a sparse approximation problem that arises when we try to recover a sparse signal from random measurements. A wider class of structured signal processing problems can be interpreted as sparse approximation with respect to a continuous dictionary; examples include superresolution and low-rank matrix approximation.
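
In one standard formulation, the goal is to find the sparsest coefficient vector consistent with the data, and the combinatorial problem is relaxed to a convex program:

\[
\min_{x} \; \|x\|_0 \quad \text{subject to} \quad \|Ax - b\|_2 \le \varepsilon
\qquad \leadsto \qquad
\min_{x} \; \|x\|_1 \quad \text{subject to} \quad \|Ax - b\|_2 \le \varepsilon,
\]

where $\|x\|_0$ counts the nonzero entries of $x$ and the columns of $A$ are the dictionary elements.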

My dissertation research focused on algorithms for sparse approximation. I obtained the first proof that OMP can be competitive with ℓ1 minimization methods, as well as the first proof that ℓ1 minimization is robust to noise. Later, I showed that random instances of sparse approximation problems are easy to solve.

I also worked on compressed sensing models, algorithms, and applications. I gave the first proof that OMP can solve compressed sensing problems. Deanna Needell and I designed the CoSaMP algorithm, which offers near-optimal guarantees for compressed sensing. With coauthors from Rice DSP, we introduced random filters, and we used them to develop compressive analog-to-digital converters.
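
For readers unfamiliar with the algorithm, here is a bare-bones NumPy version of OMP, a didactic sketch rather than the implementation from our papers: at each step it selects the column most correlated with the residual, then re-solves a least-squares problem on the current support.

    import numpy as np

    def omp(A, b, sparsity):
        """Orthogonal matching pursuit: greedy sparse solution of Ax ~= b."""
        support = []
        residual = b.copy()
        coef = np.zeros(0)
        for _ in range(sparsity):
            # Greedy step: column most correlated with the current residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            # Projection step: least squares on the chosen support.
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            residual = b - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x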

Collaborators

Rich Baraniuk, Dror Baron, Jean-Loup Bourguignon, Marco Duarte, Anna Gilbert, Mihailo Jovanović, Jason Laska, Beverley McKeon, Rashad Moarref, Deanna Needell, Götz Pfander, Holger Rauhut, Justin Romberg, Ati Sharma, Martin Strauss, Roman Vershynin, Mike Wakin, and Steve Wright

Surveys

Selected papers

Applications