# My Thesis Defense

*Update*: The defense is over now, and it was a success! Thanks to everyone who showed up, and an especially big thanks to my dissertation committee!

~

I will be defending my thesis on April 16th, 2024 at 9:30am in Room 1201 of 370 Jay Street. The defense is open to the public (*email me for access to the building to attend in person*). The defense will also be broadcast over Zoom, at this link.

**Add my defense to your calendar with this link.**

You can find slides for my talk here.

The defense will cover my work on fast randomized algorithms for numerical linear algebra, namely four of my papers covering the following topics: trace estimation, Kronecker trace estimation, low-rank approximation, and low-rank approximation of matrix functions.

Everyone is welcome to attend, though keep in mind that the presentation will be technical in nature and will assume the audience is at least somewhat comfortable with both linear algebra and randomized algorithms.

The formal title and abstract of the defense are below.

## Towards Optimal Matrix-Vector Complexity of Numerical Linear Algebra

The world is full of important problems with linear structure. Many machine learning, data science, and scientific computing applications require subroutines that solve numerical linear algebra problems. By solving numerical linear algebra problems more quickly, we can scale up the models and datasets used in these applications, enabling bigger and better scientific computations.

In practice, most fast linear algebra algorithms interact with their input matrix only by computing matrix-vector products, so it is crucial to understand how many of these products are necessary and sufficient to solve numerical linear algebra problems. Motivated by this, we will explore the Matrix-Vector Complexity of such algorithms.

This thesis will cover recent progress on four fundamental problems in numerical linear algebra: trace estimation, Kronecker trace estimation, low-rank approximation, and low-rank approximation of matrix functions. We will examine fast and pragmatic randomized algorithms that solve these problems with low matrix-vector complexity. We will then complement most of these algorithms with information-theoretic lower bounds on the number of matrix-vector products needed.
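To give a flavor of the matrix-vector model, here is a minimal sketch of Hutchinson's classic trace estimator, a standard baseline for the trace estimation problem (shown for illustration only; it is not necessarily one of the algorithms from the thesis). The key point is that the algorithm touches the matrix $A$ only through a `matvec` oracle, so the number of oracle calls is exactly its matrix-vector complexity.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_queries, seed=None):
    """Estimate tr(A) using only matrix-vector products with A.

    `matvec(v)` returns A @ v; the matrix is accessed solely through this
    oracle, so `num_queries` is the algorithm's matrix-vector complexity.
    Uses the fact that E[g^T A g] = tr(A) for Rademacher vectors g.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += g @ matvec(g)               # one matrix-vector product
    return total / num_queries

# Example: estimate the trace of a random symmetric PSD matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200))
A = B @ B.T
est = hutchinson_trace(lambda v: A @ v, n=200, num_queries=500, seed=1)
```

Because the estimator never reads entries of `A` directly, the same code works for any implicit matrix (e.g. one available only through a fast multiply routine), which is precisely the setting the matrix-vector complexity model captures.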