Date of Award
Spring 1-1-2025
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Statistics and Data Science
First Advisor
Zhou, Harrison
Abstract
Generative models—statistical and machine learning frameworks capable of producing new data samples—have emerged as powerful tools for modern artificial intelligence. This dissertation develops the theoretical underpinnings of generative modeling and examines its real-world impact across various domains. The work begins with the mathematical foundations of probability distributions and latent variable methods, emphasizing concepts such as maximum likelihood estimation, variational inference, and adversarial training. Building on these core principles, it presents a unified perspective on popular architectures, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and score-based diffusion models. Empirical studies demonstrate how these models can be leveraged for practical applications in density estimation, data augmentation, and image synthesis. Furthermore, the research extends generative modeling to challenging inverse problems, showing how well-crafted architectures and optimization strategies can achieve state-of-the-art performance in tasks such as reconstruction, denoising, and signal recovery. By combining theoretical insights with real-world case studies, this work provides a comprehensive understanding of generative models, offering concrete guidance for researchers, practitioners, and policymakers seeking to harness their transformative potential.
Recommended Citation
Dou, Zehao, "Understanding Generative Models, From Theory to Applications" (2025). Yale Graduate School of Arts and Sciences Dissertations. 1745.
https://elischolar.library.yale.edu/gsas_dissertations/1745