# Research Statements and Other Writeups

## Research Statements

My current research statement is publicly available (4 pages).

My graduate school research statement is also publicly available (5 pages).

## Other Writeups

A summary of my research in fair machine learning is given in An Overview of My Work in Welfare-Centric Fair Machine Learning (6 pages). This piece broadly describes the context of my work and summarizes various projects and applications.

An older overview of my research is given in my thesis summary (4 pages). The piece is a non-mathematical, but still somewhat technical, overview of my dissertation.

A mathematical overview of my research for general audiences is given in this piece (5 pages). Here the focus is less on applications and implications, and more on intuition for the deeper mathematical connections between my various areas of study. Results are selected for elegance and simplicity, and the piece should be accessible to any audience with a basic grounding in probability and statistics.

## Research Overview

In my research, I strive to strike a delicate balance between theory and practice.
On the theory side, my work primarily lies in sample complexity analysis for machine learning algorithms, as well as time complexity analysis and probabilistic guarantees for efficient sampling-based routines in randomized algorithms and statistical data science [1] [2] [3] [4].
In addition to statistical analysis, much of my work deals with delicate computational questions, such as how to optimally characterize and estimate the sample complexity of various estimation tasks (with applications to oblivious algorithms, which achieve near-optimal performance while requiring limited *a priori* knowledge), as well as the development of fair-PAC learning, with accompanying computational and statistical reductions between classes of learnable models.

On the practical side, much of my early work was motivated by the observation that modern methods in statistical learning theory (Rademacher averages and localized Rademacher averages) often yield vacuous or unsatisfying guarantees, so I strove to understand why, and to show sharper bounds, with particular emphasis on constant factors and performance in the *small sample setting*. From there, I have worked to apply the statistical methods developed for these approaches to myriad practical settings, including statistical data science tasks, the analysis of machine learning algorithms, and, more recently, fairness-sensitive machine learning.
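As a rough illustration of the central quantity here (a generic sketch, not code from any of my papers; the function name and setup are purely illustrative), the empirical Rademacher average of a finite hypothesis class can be estimated by Monte Carlo over random sign vectors:

```python
import numpy as np

def empirical_rademacher(predictions, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher average of a
    finite hypothesis class.

    `predictions` is an (h, n) array: row i holds hypothesis i's
    predictions on the n sample points.  Each trial draws uniform
    random signs and takes the supremum (here, a max over rows) of
    the sign-weighted average prediction.
    """
    rng = np.random.default_rng(seed)
    h, n = predictions.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)   # Rademacher signs
        total += np.max(predictions @ sigma) / n  # sup over hypotheses
    return total / n_trials

# Two opposite constant hypotheses on a 100-point sample: the estimate
# concentrates near sqrt(2 / (pi * n)), which is small but nonzero.
preds = np.vstack([np.ones(100), -np.ones(100)])
print(empirical_rademacher(preds))
```

For bounded predictions the estimate is itself bounded, but even for trivial classes it is strictly positive, which hints at why naive applications of such bounds can be loose in the small sample setting.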

By blurring the line between theory and practice, I have been able to adapt rigorous theoretical guarantees to novel settings. For example, my work on adversarial learning from weak supervision stemmed from a desire to apply statistical learning theory techniques in the absence of sufficient labeled data. Conversely, I have also been able to treat theoretical problems that previously seemed unmotivated or irrelevant; my work in fair machine learning led to the fair-PAC learning formalism, where power-means over per-group losses (rather than averages) are minimized. The motivation to optimize power-means derives purely from the economic theory of cardinal welfare, but the value of this learning concept only becomes apparent when one observes that many of the desirable (computational and statistical) properties of risk minimization directly translate to power-mean minimization.
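To make the power-mean objective concrete, here is a minimal sketch (with hypothetical per-group loss values; the function name is illustrative, not an API from my work). Taking p = 1 recovers the ordinary average loss, while larger p places increasing weight on the worst-off groups, approaching the maximum group loss as p grows:

```python
import numpy as np

def power_mean(losses, p):
    """The p-power-mean of a vector of per-group losses.

    p = 1 gives the ordinary (utilitarian) average; as p -> infinity
    the value approaches the maximum (worst-case) group loss; p = 0 is
    defined as the geometric-mean limit.
    """
    losses = np.asarray(losses, dtype=float)
    if p == 0:
        return float(np.exp(np.mean(np.log(losses))))
    return float(np.mean(losses ** p) ** (1.0 / p))

group_losses = [0.10, 0.25, 0.40]      # hypothetical per-group losses
print(power_mean(group_losses, 1))     # plain average of group losses
print(power_mean(group_losses, 4))     # larger: worse-off groups weigh more
```

The power-mean family is monotone in p, so minimizing it for p > 1 interpolates between average-case and worst-case objectives, which is exactly the knob cardinal welfare theory provides.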