
HCII Thesis Proposal: Kimi Wenzel

When
November 19, 3-5PM ET

Where
GHC 6115

Description

Date & Time: November 19, 3-5PM ET

Location: GHC 6115

Remote: Zoom link (Meeting ID: 912 4041 6318, Passcode: 946072)

Committee:

Geoff Kaufman (CMU)

Laura Dabbish (CMU)

Jeff Bigham (CMU/Apple)

Renee Shelby (Google)

Michael Mueller (IBM)

Abstract
Through quantitative and qualitative studies, I document the risks of conversational AI breakdowns in both voice and text modalities.

In my first set of studies, I consider AI-enabled voice assistants, notably finding that identical AI breakdowns have disparate impacts across diverse users. In a comparison of Black and white users, I find that conversational breakdowns lead to deflated group self-esteem and an inflated sense of self-consciousness for Black users, but not for white users. A follow-up study of multicultural users revealed a taxonomy of six distinct categories of harm that emerge from AI voice assistant breakdowns. Together, these studies show how identity may moderate users' reactions to AI breakdowns.


In my proposed work, I consider how LLM refusals and breakdowns in a help-seeking context may lead to undue harm. I will conduct this work across five countries to better understand how cultural differences shape the reception of LLM refusals. This work has the potential to generalize to other morally sensitive contexts and has far-reaching implications given recent trends in LLM use and help-seeking.

Thesis Proposal PDF