# Raphael Arkady Meyer

I am a final-year Ph.D. student at the NYU Tandon School of Engineering, advised by Christopher Musco and part of the Algorithms and Foundations Group.

I research problems in mathematical computing from the perspective of theoretical computer science.

In the summer of 2022, I visited Michael Kapralov's group at EPFL and Haim Avron's group at TAU.

Links: Google Scholar, dblp, GitHub, Zoom Room

My recent publications have looked at:

- Fast Numerical Linear Algebra (*preprint*, *preprint*, *SODA 2024*)
- Active Learning on Linear Function Families (*SODA 2023*, *NeurIPS 2020*)

Of course, I am interested in problems beyond these areas, and if you want to work with me on a problem, send me an email: ram900@nyu.edu

# News

**I'm defending my thesis soon, on April 16th!** It's open to the public, in-person and online. See the details here: link.

January 2024

I presented my work on Krylov methods at SODA 2024.

November 2023

New preprint on arXiv:

*Algorithm-Agnostic Low-Rank Approximation of Operator Monotone Matrix Functions*.

I gave a talk on Krylov methods at the Conference on Fast Direct Solvers at Purdue University.

I gave a talk at UChicago on Trace Estimation and Kronecker-Trace Estimation on November 1st.

October 2023

Paper accepted at SODA 2024:

*On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation*!

I organized a minisymposium on *The Matrix-Vector Complexity of Linear Algebra* at the first-ever SIAM-NNP conference! Shyam Narayanan, Diana Halikias, William Swartworth, Tyler Chen, and I presented at 8:30am on Sunday. What a stacked lineup! **See the details here: link.**

September 2023

New preprint on arXiv:

*Hutchinson’s Estimator is Bad at Kronecker-Trace-Estimation*.

May 2023

New preprint on arXiv:

*On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation*.

March 2023

I gave two talks at the NYU / UMass Quantum Linear Algebra reading group.

I gave a talk at the BIRS workshop on Perspectives on Matrix Computations about my *new work on Krylov methods*.

January 2023

I presented *Near-Linear Sample Complexity for $L_p$ Polynomial Regression* at SODA 2023.

November 2022

I gave a talk at the TCS Seminar at Purdue in early November to present my new research on the role of block size in Krylov Methods.

October 2022

New paper accepted at SODA 2023:

*Near-Linear Sample Complexity for $L_p$ Polynomial Regression*! I gave a talk on it last Friday at the Grad Student Seminar at NYU CDS.

September 2022

I gave a talk at GAMM ANLA on the role of block size in Krylov Methods for low-rank approximation. A preprint will be available very soon, but until then you can check out my slides for a preview! Slides

July 2022

I gave a talk at the *SIAM Annual Meeting Minisymposium on Matrix Functions, Operator Functions, and Related Approximation Methods*. Thanks to Heather, Andrew, and Ke for organizing!

June 2022

I'll be presenting Hutch++ this summer at HALG 2022, with both a short talk and a poster.

I'm traveling this summer! I'm first in London for HALG2022. Then I'm spending June visiting Haim Avron at TAU, and July visiting Michael Kapralov at EPFL. If you're in the same place at the same time, drop me a line!

May 2022

I recently organized a mini-conference for NYU CS Theory researchers to present their "Pandemic Papers" in-person. Thanks to everyone who showed up and made it a success!

*More details here.*

I'm honored to be awarded the **Deborah Rosenthal, MD Award for Best Quals Examination** in 2022, for my presentation *Towards Optimal Spectral Sum Estimation in the Matrix-Vector Oracle Model*.

April 2022

I'm honored to be an ICLR 2022 Highlighted Reviewer.

# Publications

in submission: **Algorithm-Agnostic Low-Rank Approximation of Operator Monotone Matrix Functions**, *with David Persson and Christopher Musco*

in submission: **Hutchinson's Estimator is Bad at Kronecker-Trace-Estimation**^{[1]}, *with Haim Avron*

at SODA 2024: **On the Unreasonable Effectiveness of Single Vector Krylov Methods for Low-Rank Approximation**^{[2]}, *with Cameron Musco and Christopher Musco*

at SODA 2023: **Near-Linear Sample Complexity for $L_p$ Polynomial Regression**^{[3]}, *with Cameron Musco, Christopher Musco, David P. Woodruff, and Samson Zhou*

at ICLR 2022: **Fast Regression for Structured Inputs**^{[4]}, *with Cameron Musco, Christopher Musco, David P. Woodruff, and Samson Zhou*

at SOSA 2021: **Hutch++: Optimal Stochastic Trace Estimation**^{[5]}, *with Cameron Musco, Christopher Musco, and David P. Woodruff*

at NeurIPS 2020: **The Statistical Cost of Robust Kernel Hyperparameter Tuning**^{[6]}, *with Christopher Musco*

**Optimality Implies Kernel Sum Classifiers are Statistically Efficient**^{[7]}, *with Jean Honorio*

**Characterizing Optimal Security and Round-Complexity for Secure OR Evaluation**, *with Amisha Jhanji and Hemanta K. Maji*

[1] | Slides |

[2] | Code available on github $\cdot$ Slides using TCS language $\cdot$ Slides using Applied Math language |

[3] | Slides |

[4] | Poster |

[5] | Code available on github $\cdot$ Landscape Poster $\cdot$ Portrait Poster $\cdot$ 4min Slides $\cdot$ 12min Slides $\cdot$ 25min Slides $\cdot$ 35min Slides $\cdot$ 1hr Slides |

[6] | Slides |

[7] | Poster $\cdot$ Slides |

# Talks & Presentations

To date, I have presented every paper I published at the associated conference. This is a list of other talks or presentations I have given.

**Optimal Trace Estimation and Sub-Optimal Kronecker-Trace Estimation**, at the *U Chicago Theory Lunch*.

**On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation**, at the *BIRS workshop on Perspectives on Matrix Computations*.

**On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation**, at the *Purdue University TCS Seminar*.

**Near-Linear Sample Complexity for $L_p$ Polynomial Regression**, at the *NYU CDS Student Seminar*.

**Hutch++ and More: Towards Optimal Spectral Sum Estimation**, at *Matrix Functions, Operator Functions, and Related Approximation Methods*, a minisymposium at the SIAM Annual Meeting (AN22).

**Hutch++: Optimal Stochastic Trace Estimation**, at the *Johns Hopkins University Theory Seminar*.

**Lessons from Trace Estimation Lower Bounds: Testing, Communication, and Anti-Concentration**^{[8]}, at *Computational Lower Bounds in Numerical Linear Algebra*, a minisymposium at the SIAM Annual Meeting (AN21).

[8] | Slides available here. Video starts at 1:04:55 here. |

**On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation**^{[9]}, short talk at the *Conference on Fast Direct Solvers*.

**Hutchinson's Estimator is Bad at Kronecker-Trace-Estimation**^{[9]}, short talk at *SIAM-NNP 2023*.

**On the Unreasonable Effectiveness of Single Vector Krylov for Low-Rank Approximation**^{[9]}, short talk at *GAMM ANLA 2022*.

**Hutch++: Optimal Stochastic Trace Estimation**^{[9]}, poster and short talk at *HALG 2022*.

**Chebyshev Sampling is Optimal for $L_p$ Polynomial Regression**^{[9]}, talk at *NYU "Pandemic Presentations" 2022*.

**Hutch++: Optimal Stochastic Trace Estimation**^{[9]}, poster at *Wald(O) 2021*.

**Optimality Implies Kernel Sum Classifiers are Statistically Efficient**^{[9]}, poster at *Midwest Theory Day 2019*.

[9] | Assets available in the Publications section. |

**Fairwashing SHAP (aka Interventional and Observational Shapley Values)**^{[10]}, a 45-minute talk at the NYU RAI Reading Group.

**The Equivalence of Matrix-Vector Complexity in Quantum Computing, Part 2**, a 1-hour talk at the NYU/UMass Quantum Linear Algebra Reading Group.

**The Equivalence of Matrix-Vector Complexity in Quantum Computing, Part 1**, a 1-hour talk at the NYU/UMass Quantum Linear Algebra Reading Group.

**Hutch++: Optimal Stochastic Trace Estimation**, a 1-hour talk at the NYU VIDA Reading Group.

**Introduction to Leverage Scores**, a 1.5-hour talk at the NYU Tandon Theory Reading Group.

**Strategies for Episodic Tabular & Linear MDPs**, two 1.5-hour talks at the NYU Tandon Reinforcement Learning Reading Group.

**Lagrangian Duality**, three 1.5-hour talks at the NYU Tandon Theory Reading Group.

**Introduction to Differential Entropy**, a 1-hour talk at the NYU CDS Reading Group on Information Theory.

**Lower Bounds on the Complexity of Stochastic Convex Optimization**^{[11]}, a 1-hour presentation of the paper *Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization* by Agarwal et al.

[10] | Link to relevant paper here. My slides available here. |

[11] | Link to the original paper here. My slides available here. |

# Teaching

I really enjoy teaching and have been a TA for several courses:

Responsible Data Science, New York University, Spring 2024

Algorithmic Machine Learning and Data Science, New York University, Fall 2023

Responsible Data Science, New York University, Spring 2023

Algorithmic Machine Learning and Data Science, New York University, Fall 2020

Introduction to Machine Learning, New York University, Spring 2020

Introduction to the Analysis of Algorithms, Purdue University, Fall 2018

# Service

Service outside of reviewing:

Organizer for the Minisymposium "The Matrix-Vector Complexity of Linear Algebra" at SIAM-NNP 2023

Organizer for NYU TCS "Pandemic Presentations" Day

Organizer for NYU Tandon Theory Reading Group

Service as a reviewer:

ICALP 2024 External Reviewer

ICML 2024 Reviewer

IJCAI 2024 Reviewer

ICLR 2024 Reviewer

NeurIPS 2023 Reviewer

TMLR 2023 Reviewer

ICLR 2023 Reviewer

SODA 2023 External Reviewer

NeurIPS 2022 Reviewer

ICML 2022 Reviewer

STOC 2022 External Reviewer

ICLR 2022 Reviewer*

NeurIPS 2021 Reviewer*

ISIT 2017 External Reviewer

\* Denotes Highlighted / Outstanding Reviewer