
HCII PhD Thesis Proposal: Katelyn Morrison




Expanding the Design Space for AI Explanations in Human-AI Collaboration

Katelyn Morrison

HCII PhD Thesis Proposal

Date & Time: Tuesday, April 22, 2025 @ 2:15 p.m. ET

Location: Newell-Simon Hall (NSH) 3305

Remote: Zoom Link (Meeting ID: 967 8779 8801; Passcode: 792418)
 

Committee:

Adam Perer (chair), Carnegie Mellon University

Motahhare Eslami, Carnegie Mellon University

John Zimmerman, Carnegie Mellon University

Xiang ‘Anthony’ Chen, University of California Los Angeles

Abstract:

Explainable AI (XAI) methods aim to provide people with an understanding of why and how AI systems behave. Recognizing the potential benefit this can bring to AI-assisted decision-making, researchers, media, and funding agencies have turned to XAI as a solution to help people navigate when to consult and rely on AI systems in high-stakes decision-making contexts. Interestingly, while some controlled studies show that XAI advances have led to improved human-AI decision-making, others have shown that they result in negative outcomes. Human-centered AI researchers argue that these inconsistent findings stem from taking a “one-size-fits-all” approach that ignores consumers’ diverse expertise and XAI needs. In contrast, I argue that they stem from the lack of user agency in steering explanation generation and from overlooking the possibility of imperfect XAI in human-AI collaboration. Furthermore, with a continued focus on supporting the understanding of AI and calibrating reliance, we remain unaware of the true value that explanations can bring to people when using AI in decision-making contexts.

To expose the diverse values that explanations can bring to people while collaborating with AI, this thesis expands the XAI design space by introducing a novel paradigm called Steerable Explanations. By combining appropriation-aware design with interactive interfaces, Steerable Explanations support users in steering both explanation generation and the purpose of the explanation toward their evolving and unique explanation goals. To demonstrate the need to expand the design space of AI explanations, Part I of this dissertation details how a human-centered approach to XAI still leads to inconclusive findings in human-AI decision-making, and Part II reveals how people use XAI in unexpected ways to compensate for their unmet explanation needs. Part III then expands the XAI design space, first by co-designing explanations that support user agency in the explanation generation process. The second half of Part III, my proposed work, extends this by formalizing the design space of Steerable Explanations, supporting user agency both during explanation generation and in shaping the purpose of the explanation. Through an exploratory study using a text-to-image model as an explanation generation method, I uncover the emerging uses of AI explanations in human-AI collaboration.

Overall, this dissertation expands upon human-centered explainable AI by advancing how we design for and think about AI explanations during human-AI collaboration. By reframing AI explanations as interactive, imperfect AI tools instead of infallible, static outputs, this thesis moves toward uncovering new design spaces and interactions with AI explanations.

Draft Proposal Document

Best,

Katelyn