Date of Award

Spring 1-1-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Statistics and Data Science

First Advisor

Zhou, Harrison

Abstract

Generative models—statistical and machine learning frameworks capable of producing new data samples—have emerged as powerful tools for modern artificial intelligence. This dissertation explores the theoretical underpinnings of generative modeling and examines its real-world impact across various domains. The work begins by delving into the mathematical foundations of probability distributions and latent variable methods, emphasizing concepts such as maximum likelihood estimation, variational inference, and adversarial training. Building on these core principles, it presents a unified perspective on popular architectures, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and score-based diffusion models. Empirical studies highlight how these models can be leveraged for practical applications in density estimation, data augmentation, and image synthesis. Furthermore, the research extends generative modeling to tackle challenging inverse problems, demonstrating how well-crafted architectures and optimization strategies can achieve state-of-the-art performance in tasks such as reconstruction, denoising, and signal recovery. By combining theoretical insights with real-world case studies, this work provides a comprehensive understanding of generative models, offering concrete guidance for researchers, practitioners, and policymakers seeking to harness their transformative potential.
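The abstract names maximum likelihood estimation as one of the foundational concepts. As a minimal illustration of that idea (not taken from the dissertation itself), the sketch below fits a one-dimensional Gaussian to data via its closed-form maximum likelihood estimates and evaluates the resulting log-likelihood; the function names and data are hypothetical, chosen only for this example.

```python
import math

def gaussian_mle(samples):
    """Closed-form maximum likelihood estimates for a 1-D Gaussian.

    The MLE for the mean is the sample mean; the MLE for the
    variance divides by n (not n - 1), so it is the biased estimator.
    """
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def log_likelihood(samples, mu, var):
    """Log-likelihood of the samples under N(mu, var)."""
    return sum(
        -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
        for x in samples
    )

# Hypothetical data, for illustration only.
data = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]
mu_hat, var_hat = gaussian_mle(data)
```

By construction, `log_likelihood(data, mu_hat, var_hat)` is at least as large as the log-likelihood under any other choice of mean and variance; the same maximization principle, applied to far richer model families, underlies the VAE and diffusion-model training objectives discussed in the dissertation.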
