Date of Award
Spring 2023
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Statistics and Data Science
First Advisor
Celis, L. Elisa
Abstract
The development of trustworthy systems for applications of machine learning and artificial intelligence faces a variety of challenges. These challenges range from the investigation of methods to effectively detect algorithmic biases to methodological and practical hurdles encountered when incorporating notions of representation, equality, and domain expertise in automated decisions. Such questions make the task of building reliable automated decision-making frameworks quite complex; nevertheless, addressing them in a comprehensive manner is an important step toward building automated tools whose impact is equitable. This dissertation focuses on tackling such practical issues faced during the implementation of automated decision-making frameworks. It contributes to the growing literature on algorithmic fairness and human-computer interaction by suggesting methods to develop frameworks that account for algorithmic biases and that encourage stakeholder participation in a principled manner.

I start with the problem of auditing representation bias, i.e., determining how well a given data collection represents the underlying population demographics. For data collected from real-world sources, individual-level demographics are often unavailable, noisy, or restricted from automated use. Employing user-specified representative examples, this dissertation proposes a cost-effective algorithm to approximate the representation disparity of any unlabeled data collection using the given examples. By eliciting examples from users, this method incorporates the users' notions of diversity and informs them of the extent to which the given data collection under- or over-represents socially salient groups. User-defined representative examples are further used to improve the diversity of automatically generated summaries for text and image data collections, ensuring that the generated summaries appropriately represent all relevant groups.

The latter part of the dissertation studies the paradigm of human-in-the-loop deferral learning. In this setting, the decision-making framework is trained either to make an accurate prediction or to defer to a domain expert in cases where the algorithm has low confidence in its inference. This dissertation proposes methods for training a deferral framework when multiple domain experts are available to assist with decision-making. Using appropriate statistical fairness mechanisms, the framework ensures that the final decisions maintain performance parity across demographic groups. By focusing on stakeholder participation, in the form of user feedback or domain-expert participation, this dissertation advances methods for building trustworthy decision-making systems that can be readily deployed in practice.
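The abstract does not spell out the auditing algorithm itself, but the core idea, estimating a collection's group composition from a handful of user-supplied representative examples, can be illustrated with a minimal sketch. The embedding inputs, the nearest-example assignment rule, and the `target_shares` argument below are assumptions made for illustration, not the dissertation's actual method.

```python
import numpy as np

def representation_disparity(item_embeddings, example_embeddings,
                             example_groups, target_shares):
    """Estimate how far a collection's group shares deviate from targets.

    item_embeddings:    (n, d) embeddings of the unlabeled collection
    example_embeddings: (m, d) embeddings of user-supplied examples
    example_groups:     length-m list of group labels, one per example
    target_shares:      dict mapping group label -> desired share
                        (hypothetical user-specified targets)
    """
    # Assign each unlabeled item the group of its most similar example
    # (cosine similarity as a stand-in proxy for demographic labels).
    a = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
    b = example_embeddings / np.linalg.norm(example_embeddings, axis=1, keepdims=True)
    nearest = (a @ b.T).argmax(axis=1)
    labels = np.array(example_groups)[nearest]

    # Empirical share of each group in the collection.
    shares = {g: float(np.mean(labels == g)) for g in target_shares}

    # Disparity: largest absolute gap between observed and target shares.
    disparity = max(abs(shares[g] - target_shares[g]) for g in target_shares)
    return shares, disparity
```

Under this sketch, a disparity near zero would indicate that the collection's composition matches the user's notion of adequate representation, while a large value flags under- or over-representation of some group.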
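The prediction-or-defer mechanism described in the final part can likewise be sketched concisely. The confidence threshold and the per-expert accuracy estimates below are hypothetical stand-ins; the dissertation's actual training objective and statistical fairness mechanisms are not specified in this abstract.

```python
import numpy as np

def route_decision(model_probs, expert_accuracy_est, threshold=0.8):
    """Predict automatically or defer to one of several domain experts.

    model_probs:         length-k array of class probabilities from the model
    expert_accuracy_est: length-E array of each expert's estimated accuracy
                         on inputs like this one (a hypothetical estimator)
    threshold:           minimum model confidence required to predict
    """
    confidence = float(np.max(model_probs))
    if confidence >= threshold:
        # High confidence: the framework makes the prediction itself.
        return ("predict", int(np.argmax(model_probs)))
    # Low confidence: defer to the expert judged most reliable here.
    return ("defer", int(np.argmax(expert_accuracy_est)))
```

A fairness-aware variant, in the spirit of the performance parity the abstract describes, would additionally monitor error and deferral rates per demographic group and adjust the routing so that no group is systematically worse served.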
Recommended Citation
Keswani, Vijay, "Algorithmic Decision-making with Stakeholder Participation" (2023). Yale Graduate School of Arts and Sciences Dissertations. 964.
https://elischolar.library.yale.edu/gsas_dissertations/964