
CMU Technologists Look to a More Inclusive Future


The HCII's William Agnew and Jordan Taylor are just two of the CMU researchers working to ensure that the outputs of large language models represent the communities they reference.

As artificial intelligence becomes more sophisticated and capable of closely depicting reality, researchers at Carnegie Mellon University's Human-Computer Interaction Institute (HCII) are working to ensure that the outputs of large language models represent the communities they reference.

William Agnew, a Carnegie Bosch postdoctoral fellow in the HCII and one of the leading organizers of Queer in AI, focuses much of his research in this space.

"Researchers, corporations and governments have long and painful histories of excluding marginalized groups from technology development, deployment and oversight," Agnew and the other organizers of Queer in AI wrote in a paper on AI risk management. "As a result, these technologies are less useful and even harmful to minoritized groups."

Since he started work with Queer in AI nearly eight years ago, Agnew has used his expertise to analyze the integrity of training datasets for large language models. Through his work, he helps AI developers identify and overcome biases across mediums — generated text, images, voice and music — with the end goal of ensuring the equitable application of these technologies.

In related work, HCII Ph.D. student Jordan Taylor studies how marginalized communities leverage technology, as well as how technology designers and researchers think about marginalized people. He is advised by Assistant Professor Sarah Fox, who leads the Tech Solidarity Lab, and Associate Professor Haiyi Zhu, who leads the Social AI Group.

Taylor's recent research includes an examination of online communities on social media platforms like Reddit and how they respond to hermeneutical injustice, the inability of a marginalized group to make sense of its own experiences because society has historically denied it the shared concepts and language to do so. When looking at these spaces, he found that the digital environment created a unique opportunity for users to interact and see themselves reflected.

"I was looking at the subreddit r/bisexual and trying to understand what people are doing in this community," Taylor said. "We found that people are constructing a particular way of understanding themselves in the world. This includes things like developing ingroup language and ingroup stereotypes. It's constructing these ways to classify themselves and understand how bisexuality is situated in the broader world."

But the ability for communities to gather and build identity in a digital context is often complicated — and, in many cases, hindered — by the pre-existing motivations and frameworks of technology companies. "That kind of ingroup difference is often flattened and erased when we talk about the design of technology," Taylor said.

More recently, Taylor has turned to examining the changing relationship between AI-generated content and the marginalized communities interfacing with it online, with a particular focus on how LGBTQ+ artists engage with generative models like DALL-E 3, Midjourney and Stable Diffusion.

"Marginalized communities often use — and have a long history of using — technologies that were not necessarily designed with them in mind, or where there isn't a particular user in mind at all. You end up modulating to the norm and oftentimes that is a white, straight, wealthy Western norm," Taylor said. "That is the gaze through which we're understanding these groups."

For more on the work CMU is doing to understand representation and AI, read the full story on CMU's News website.

For More Information

Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu

Author
Alexander Johnson

Related People
William Agnew, Jordan Taylor

Research Areas
Artificial Intelligence (AI), Fairness, Accountability, Transparency, and Ethics (FATE)