Date of Award
Spring 2024
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Computer Science
First Advisor
Aspnes, James
Abstract
Modern machine learning systems and other data science settings are increasingly subject to various reliability constraints: these systems may be decentralized and involve multiple agents acting without coordination, may be limited in memory and other computational resources, or may be subject to societal constraints like differential privacy. This dissertation presents algorithmic advances for several learning and data analysis tasks under such constraints.

In the first chapter, we present new families of algorithms for the multi-agent cooperative bandit problem in the randomized gossip communication model. Our algorithms are fully decentralized and require only constant local memory per agent, and we highlight a connection between their global evolution and a new class of (centralized) "zero-sum" multiplicative weights update methods. Using this perspective, we develop a general analysis framework to derive sublinear regret bounds for our algorithms in both stochastic and adversarial reward settings.

In the second chapter, we consider a multi-agent games setting. We initiate the study of equilibrium computation in the population protocol model, where every agent maintains a local strategy that it updates following each random, pairwise interaction. We introduce simple local strategy-update algorithms for a class of repeated prisoner's dilemma games, and we prove their convergence to approximate equilibria in this setting. This convergence is established by relating the evolution of our local algorithms to a new class of high-dimensional Ehrenfest random walks, for which we derive exact characterizations of their stationary distributions and bounds on their mixing times.

In the final chapter, we present new differentially private variants of the Maximal Information Coefficient (MIC), a statistic used to detect and measure correlations in large datasets. As computing MIC exactly is difficult, several efficient approximations were recently introduced, but these variants rely on dataset-dependent calculations that, as we show, yield poor accuracy under the natural application of the Laplace mechanism. We thus introduce the MICr statistic, a new approximation of MIC that is more compatible with privacy. We prove that MICr is a consistent estimator of MIC, and we introduce two differentially private versions of MICr that significantly outperform the natural, private adaptations of existing approximations.
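The "zero-sum" multiplicative weights methods of the first chapter build on the classical multiplicative weights update (MWU) rule. As background, here is a minimal sketch of the standard centralized update; the function name and step size are illustrative, and the dissertation's decentralized, constant-memory variants differ from this textbook form.

```python
import math

def mwu_step(weights, losses, eta):
    """One standard multiplicative weights update: scale each expert's
    weight by exp(-eta * loss) and renormalize to a distribution."""
    scaled = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(scaled)
    return [w / total for w in scaled]

# Example: three arms, arm 0 repeatedly suffers loss, so its weight decays.
w = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    w = mwu_step(w, [1.0, 0.0, 0.0], eta=0.1)
```

Repeating this update yields the familiar sublinear-regret guarantees in the adversarial setting; the first chapter's analysis framework relates the global evolution of the gossip-based agents to updates of this general shape.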
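For the final chapter, the Laplace mechanism mentioned above is the standard baseline for differential privacy: release a statistic plus Laplace noise with scale equal to the statistic's sensitivity divided by the privacy parameter epsilon. A minimal sketch (the function name and signature are illustrative; the dissertation's private MICr estimators are more involved than this direct noising):

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng=random):
    """Release `value` with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise, sampled via inverse CDF."""
    b = sensitivity / epsilon
    u = rng.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

# Example: privatize a count query with sensitivity 1 at epsilon = 0.5.
private_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5)
```

The chapter's observation is that applying this mechanism naively to existing MIC approximations (whose internal calculations are dataset-dependent) degrades accuracy, motivating the privacy-friendlier MICr statistic.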
Recommended Citation
Lazarsfeld, John, "Advances in Multi-Agent Learning and Private Statistical Estimation" (2024). Yale Graduate School of Arts and Sciences Dissertations. 1490.
https://elischolar.library.yale.edu/gsas_dissertations/1490