
HCII PhD Thesis Proposal: Sara Kingsley



Designing Evaluations of AI-Driven Systems for the Future of Work: A Risk Measurement Framework

Sara C. Kingsley

HCII PhD Thesis Proposal 

 

Date & Time: Tuesday, April 15th @ 12 pm ET

Location: NSH 3305 

Remote: https://cmu.zoom.us/j/94414844103?pwd=ZADTgfm2xbYn5DhpTrIME3A9PSLNtg.1 

(Meeting ID: 944 1484 4103; Passcode: 922775)

 

Committee:

Jeffrey P. Bigham (co-chair), Carnegie Mellon University

Beibei Li (co-chair), Carnegie Mellon University

Jason I. Hong, Carnegie Mellon University

Saiph Savage, Northeastern University 
 

Abstract:

This dissertation draws on the sociology of risk to explore how AI systems shape labor markets, especially through opaque and contested risks. It builds on sociological theories that frame risk as socially constructed, often hidden, and unevenly distributed, and advances this framework by proposing methods to computationally detect and measure these risks. Specifically, this dissertation:

  • Examines how workers perceive, audit, and contest AI-driven decisions, showing how notions of risk are negotiated among users, developers, and institutions and making visible the contested expertise involved in defining harm.

  • Identifies hidden information risks in job advertising and develops a taxonomy of problematic events, with metrics showing how these risks disproportionately affect low-income workers.

  • Proposes computational critiques of techniques such as Reinforcement Learning from Human Feedback (RLHF), arguing that without attention to diverse user risk perceptions, such methods can amplify rather than mitigate harm.

From these chapters, this Ph.D. thesis offers a framework for quantifying the risks of excluding user perceptions and disaggregated information choices or preferences from AI design. Together, the dissertation makes a unique contribution: it integrates sociological theories of risk with computational tools and empirical studies to surface, categorize, and begin to measure AI-related risks in labor systems, making the invisible visible.

These research threads are illustrated through case studies on user experiences with AI-enabled systems in the context of job advertising and job search. These systems, which include search engines and social media platforms, are vital for participating in labor markets. Individuals depend on these systems to earn income for essential living expenses, while employers invest billions in digital advertising to recruit qualified candidates. Inefficiencies arising from the design of these embedded AI systems can substantially increase the costs of hiring and job searching, affecting the economic stability of both workers and businesses.

Given these stakes, it is crucial to develop robust scientific approaches to identify and measure the risks of AI-enabled information systems. Such approaches are necessary to design effective tools that mitigate unintended consequences. In addition, to prevent these risks from being perpetuated or amplified in workplace environments, it is essential to assess the effectiveness of the risk mitigation strategies used with these systems. However, the literature lacks a comprehensive framework and methods for computationally assessing the risks of embedded AI information systems in workplaces. This thesis proposal aims to fill this gap by presenting a framework and measures for evaluating the risks associated with AI-enabled information systems used in workplaces and labor markets.

Through this work, the dissertation aims to contribute practical guidance and tools for designing and applying risk assessment and risk management for information systems in workplaces and labor markets.
 

Proposal (Abstract) Document Link: https://sarakingsley.github.io/dissertation.html