HCII PhD Thesis Proposal: Venkat Sivaraman
Human-Centered AI for Expert Decision-Making
Venkat Sivaraman
PhD Thesis Proposal
CMU Human-Computer Interaction Institute
Date & Time: Monday, March 17th at 9:30am ET
Location: Gates & Hillman Centers (GHC) 6115
Zoom: https://cmu.zoom.us/j/
Meeting ID: 988 4654 5071
Passcode: 892774
Committee:
Adam Perer (chair), Carnegie Mellon University
Mayank Goel, Carnegie Mellon University
Haiyi Zhu, Carnegie Mellon University
Suchi Saria, Johns Hopkins University & Bayesian Health
Abstract:
AI-based decision support (ADS) systems hold the promise of helping trained experts make decisions in healthcare, social services, and other important disciplines. Although these tools often seem to achieve high accuracy by learning patterns in large datasets of historical decisions and outcomes, many ADS systems ultimately fail to support decision-makers. It is as yet unclear why these systems fall short of their potential when integrated into real-world workflows, and how we can design ADS that better fosters complementarity between the system and its expert users.
This thesis aims to reconcile the perspectives of domain experts with the challenges that data science teams face when building ADS. First, I investigate expert decision-makers' perspectives on real AI systems in three high-stakes domains: child maltreatment screening, regional opioid overdose risk, and sepsis treatment in critical care. These studies show that existing ADS systems often neglect experts' broader goals, values, and contextual knowledge by attempting to directly recommend decisions, leading to fundamental misalignments and barriers to adoption. To mitigate these misalignments, I then develop two systems that support data scientists in the challenging work of building ADS models, while enabling domain experts to critique how the models are built and how they might behave in practice. Finally, in my proposed work I plan to evaluate a range of ADS designs that move beyond conventional recommendations. Working in the sepsis domain that I have previously studied, I will conduct think-aloud sessions and a larger-scale survey study to understand how different forms of ADS influence clinicians' reasoning as well as the quality of their decisions.
My work contributes to the fields of human-AI interaction, data visualization, and AI in healthcare by highlighting the pitfalls and opportunities of designing ADS for experts. In these domains, the AI's predictive task is often just one part of a constellation of related goals; rather than calibrating their reliance on the AI, expert users are likely to adopt its insights in whatever way best meets those goals. Experts are also uniquely equipped to critique and help improve ADS designs as part of model building teams, as long as modeling choices and behaviors are made transparent to them. Finally, my proposed work broadens current evaluation strategies for ADS beyond metrics like trust and reliance, focusing instead on how AI can support or impede reasoning as one small part of a complex workflow. Together, these contributions provide a conceptual and empirical foundation for future ADS systems that live up to the promise of complementing experts.