Date of Award

Spring 2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Statistics and Data Science

First Advisor

Zhou, Harrison

Abstract

This dissertation brings the idea of overparameterization, now central to neural network training, into a statistical setting, establishing the theoretical properties of the overparameterized Expectation-Maximization (EM) algorithm and its connections to deep learning and robust statistics. We establish a global convergence result for overparameterized EM in Gaussian mixture models (GMMs), demonstrating that overparameterization reshapes the optimization landscape and facilitates convergence. Inspired by its success in deep learning, we show that overparameterized EM mitigates initialization sensitivity while maintaining statistical consistency. Beyond EM, we introduce the Penalized Tangent Depth (PTD) estimator, a novel framework linking GAN training and depth-based robust estimation. PTD provides a unified approach to robust inference, offering computational efficiency and statistical robustness under contamination. Our findings highlight fundamental connections between adversarial learning and classical robustness theory. Several open problems remain, including a rigorous proof of unconditional EM convergence, statistical efficiency in high-dimensional settings, and extensions beyond the robustness setting. Future research may further explore the interplay between overparameterized EM, gradient-based learning, and adversarial robustness. Our results contribute to bridging classical statistical methods with modern machine learning theory, providing new insights into optimization, robustness, and inference.
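
To make the overparameterized-EM idea concrete, the following is a minimal sketch, not the dissertation's implementation: data are drawn from a two-component Gaussian mixture, and plain EM is run with many more components than the truth (here K = 10). The sample size, component count, and random-initialization scheme are illustrative assumptions; the sketch only shows that the redundant components tend to either duplicate the true means or receive negligible weight, which is the sense in which overparameterization eases sensitivity to the starting point.

# Minimal illustrative sketch of overparameterized EM on a 1-D Gaussian mixture.
# All settings (n, K, initialization) are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# True model: two-component mixture with means -2 and +2, unit variances.
n = 2000
z = rng.random(n) < 0.5
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

def em_gmm(x, K, n_iter=200):
    """Plain EM for a one-dimensional Gaussian mixture with K components."""
    n = x.shape[0]
    # Random initialization: uniform weights, means drawn from the data, unit variances.
    w = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)
    var = np.ones(K)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        logp = (
            np.log(w)[None, :]
            - 0.5 * np.log(2 * np.pi * var)[None, :]
            - 0.5 * (x[:, None] - mu[None, :]) ** 2 / var[None, :]
        )
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of weights, means, variances.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)  # guard against collapsing components
    return w, mu, var

# Overparameterized fit: K = 10 components for two-component data.
w, mu, var = em_gmm(x, K=10)
# Typically the fitted means cluster near -2 and +2, with the surplus
# components either duplicating them or carrying small mixing weight.
print(np.round(np.sort(mu), 2))
print(np.round(w[np.argsort(mu)], 3))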
