
HCII Alum Reimagines Online Spaces for All

A man sits on a patio, with diners in the background and the Cathedral of Learning in the distance.
Pranav Khadpe, who earned his Ph.D. from CMU's Human-Computer Interaction Institute, works to design online spaces that help people show up authentically and create consensus.

How we spend time online and participate in virtual spaces can impact how we show up offline, at everything from town halls to book clubs. SCS alum Pranav Khadpe, a senior applied scientist at Microsoft, works to design online spaces that help people show up authentically and create consensus. He earned his Ph.D. in the School of Computer Science's Human-Computer Interaction Institute (HCII), where his research spanned topics from understanding the extent to which LLMs flatter their users (known as social sycophancy) to promoting constructive communication for virtual teams.

Khadpe and a team of researchers from Stanford University and the University of Oxford recently published a paper detailing a system to measure social sycophancy in LLMs. Spoiler alert: the models are definitely flatterers.

Do you think how people show up online corresponds to how they show up in their own communities?
I think the rules people impose on a situation are the same, whether they're offline or online. It's the design of the space that determines how those rules manifest as behavior. Shift the design, and the behavior shifts accordingly.

Anonymity, for instance, removes individuality, causing people to behave according to the prototypical traits of whatever group identity they want to signal. A casual gamer in an anonymous forum might adopt the aggressive trash-talking style they associate with "serious gamers." But add reputation systems or persistent identities like being a long-standing Reddit member, and suddenly behavior realigns with offline norms, since people now have a reputation to protect.

One consequence of people applying the same rules online and offline is that beliefs and behaviors reinforced online affect how we navigate interactions offline. For example, beauty standards promoted online affect how people perceive healthy bodies offline.

It's like the two worlds start to blend, like how Reddit-speak starts to filter into everyday conversation.
Yeah, that's how you signal you're part of the community. There isn't a clean line between what's online and offline.

The early internet was an escape from "real" life. This was before all your friends were online. But now everyone's online. Communication through a group chat is as normal as talking in person.

If we go a bit deeper and ask how behavior in one space influences the other, I think it is true that the design of spaces brings out different kinds of behaviors. On Twitter, almost 97% of political tweets come from just 10% of active users, meaning most people are passively lurking. But what this small population presents as normative can end up shaping offline behavior.

And not being in line with others can be intimidating. Do you think technology can make connecting authentically online less intimidating?
There are many things that prevent people from participating or feeling comfortable speaking up online. One big deterrent is reputational consequences, especially if you're trying to say something that runs counter to a norm, like "our beauty standards are unreasonable." It can feel intimidating because you're worried others might not support you. This is what I focused on in my Ph.D.: situations where people are reluctant to speak up because there's a social cost or perceived risk.

There are specific levers to address that problem. One is anonymity: changing the design of a space can embolden people to speak up, but anonymity removes accountability. My dissertation focused on a middle ground called action escrows, which let people condition their actions on others backing them up. For example, you'd feel more comfortable saying "our beauty standards are unreasonable" if five others felt the same and spoke up with you. People have assurance that their comment will only post publicly if others back it.
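To make the escrow mechanic concrete, here is a minimal Python sketch. It is a hypothetical illustration, not Khadpe's implementation: the ActionEscrow class, the threshold of five and the pledge method are all invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ActionEscrow:
    """Hold a statement privately until enough people commit to co-signing it."""
    statement: str
    threshold: int = 5                            # supporters needed before release
    supporters: set = field(default_factory=set)  # private pledges so far
    released: bool = False

    def pledge(self, user_id: str) -> bool:
        """Record a private pledge; publish once the threshold is reached."""
        if not self.released:
            self.supporters.add(user_id)
            if len(self.supporters) >= self.threshold:
                self.released = True
                # A real system would post to the forum with all co-signers
                # attached at once; printing stands in for that here.
                print(f'Posted with {len(self.supporters)} co-signers: "{self.statement}"')
        return self.released

escrow = ActionEscrow("Our beauty standards are unreasonable.")
for user in ["u1", "u2", "u3", "u4", "u5"]:
    escrow.pledge(user)  # nothing is visible until the fifth pledge arrives
```

The point of the design is that no pledge is visible until the group crosses the threshold, so no individual bears the reputational risk alone.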

Another avenue is designing middle grounds of a different kind. Online, there are really only two modes of operation: large public spaces and DMs. There's no in-between, and designers are trying to create those in-between spaces.

In 2023, we worked on a project called Nooks, which allowed the creation of small spaces in a public forum. Let's say you're in a large Slack or subreddit, and you're unsure if your topic fits the whole community. With Nooks, you tell the system what you want to talk about, and it privately polls everyone to see if they're interested. Then it creates a small subspace for those people. Because everyone wants to be there, it's easier to express your thoughts authentically.
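As a rough sketch of that polling-and-matching step, here is a hypothetical create_nook helper, with the private poll modeled as a callback; none of these names come from the actual system.

```python
from typing import Callable, Optional

def create_nook(topic: str, members: list,
                is_interested: Callable[[str, str], bool]) -> Optional[list]:
    """Privately poll every member about a proposed topic and, if enough
    opt in, return the roster of a small subspace for just those people.
    `is_interested(member, topic)` stands in for a private poll reply,
    so nobody sees who declined."""
    interested = [m for m in members if is_interested(m, topic)]
    if len(interested) < 2:  # a nook needs at least one conversation partner
        return None
    return interested

roster = create_nook(
    "training for a first marathon",
    members=["alice", "bob", "carol", "dev"],
    is_interested=lambda member, topic: member != "bob",  # stand-in replies
)
print(roster)  # ['alice', 'carol', 'dev'] -- visible only to those who opted in
```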

How did you become interested in this area of research?
Growing up, I was interested in being the funny kid in school. Humor was my point of entry into studying social behavior. It got me curious about social norms and how people infer and construct them. When I started my Ph.D., there was an opportunity to investigate how the design of online spaces affects decision-making and how people construct norms together, even when the community houses people of different beliefs or ideologies.

What tools did you develop while at CMU?
There are three main goals I'm trying to achieve: helping people feel comfortable initiating a conversation, surfacing the distribution of perspectives in a space, and making people feel included and comfortable speaking up authentically. These feed into each other: if you don't feel included, you don't speak up, and if you don't speak up, your perspective isn't represented.

Nooks created circumstances for people to feel comfortable speaking up.

I also worked on Empathosphere, a tool for moments when a new topic comes up and people are still forming opinions. At that stage, it's not a good idea to ask people to publicly state their opinions because it creates pressure to defend them. Empathosphere lets people read the room without explicitly stating positions. It asks everyone to rate how they're feeling on a scale of -5 to 5 and shows the group the average and the distribution of responses. It also asks people to guess how everyone else is doing. When people see the distribution of feelings, they invite others into the conversation.
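A minimal sketch of that read-the-room step, assuming hypothetical inputs (each member's own -5 to 5 rating plus a guess at the group's average); this is an illustration of the idea, not the tool's actual code.

```python
from collections import Counter
from statistics import mean

def read_the_room(ratings: dict, guesses: dict) -> dict:
    """Aggregate anonymous -5..5 mood ratings so the group sees its own
    distribution without anyone publicly staking out a position.
    `ratings` maps member -> own feeling; `guesses` maps member -> guess
    of the group's average feeling."""
    return {
        "distribution": dict(sorted(Counter(ratings.values()).items())),
        "average": round(mean(ratings.values()), 2),
        # a large gap means members are misreading the room
        "perception_gap": round(mean(guesses.values()) - mean(ratings.values()), 2),
    }

print(read_the_room(
    ratings={"a": -3, "b": 1, "c": -2, "d": 0},
    guesses={"a": 2, "b": 3, "c": 1, "d": 2},
))
# {'distribution': {-3: 1, -2: 1, 0: 1, 1: 1}, 'average': -1.0, 'perception_gap': 3.0}
```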

A third project, also in the spirit of balancing perspectives in an online community, studied open-source communities. In those communities, there are maintainers who improve the tool and users who share feedback. Being a maintainer is a thankless job because if things work well, nobody says anything. You get an endless stream of "this is not working" but no signal of what's working well. We worked on a project called Hug Reports, an inversion of bug reports. When you import a package in your development environment, it shows a raised-hands icon you can click to send appreciation to the maintainers. It scaffolds positive feedback, which is harder to give than negative feedback.
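A sketch of the editor-side scan, using Python's standard ast module; the send_hug function is hypothetical, standing in for however the real tool routes appreciation to maintainers.

```python
import ast

def imported_packages(source: str) -> set:
    """Find the top-level packages a file imports -- the candidates for a
    raised-hands button next to each import line."""
    packages = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            packages.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            packages.add(node.module.split(".")[0])
    return packages

def send_hug(package: str, message: str = "Thanks, this just works!") -> None:
    # Stand-in for delivering the appreciation to the package's maintainers.
    print(f"hug -> {package}: {message}")

for pkg in sorted(imported_packages("import numpy\nfrom requests import get")):
    send_hug(pkg)  # one click of the icon per imported package
```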

Why does creating consensus online matter, and how is it important to computer science?
In the field of human-computer interaction and at CMU, we care about projects that matter to society. I think of my work as using computation to address social problems. For too long, computer scientists shied away from the fact that there are values embedded in their work. Now, most recognize they have a responsibility to interrogate those values and to work with impacted communities to negotiate them.

As we've seen the adverse impacts of online interactions, computer scientists have had to ask how we are going to moderate these spaces. What values will we prioritize when doing so?

A good example is a large Reddit community introducing an AI-based moderation tool. You face the challenge of deciding what counts as offensive in that community. And once you agree on that, what do we agree is the right action? Should we downvote this content? Remove it completely? Issue a warning? Ban the user? You need to agree on policies. Even in a small community, you now have decisions to make around technology that require community deliberation.
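One way to see why deliberation comes first: even a toy moderation policy has to encode the community's answers before any code can run. The escalation ladder below is hypothetical, chosen only for illustration.

```python
from enum import Enum

class Action(Enum):
    NONE = "no action"
    WARN = "issue a warning"
    REMOVE = "remove the content"
    BAN = "ban the user"

# The hard part isn't this table -- it's the community deliberation
# that decides what belongs in it.
ESCALATION = [Action.WARN, Action.REMOVE, Action.BAN]

def moderate(is_offensive: bool, prior_offenses: int) -> Action:
    """Apply the agreed escalation ladder; anything the policy doesn't
    cover defaults to no action, so the tool never exceeds its mandate."""
    if not is_offensive:
        return Action.NONE
    return ESCALATION[min(prior_offenses, len(ESCALATION) - 1)]

print(moderate(True, 0).value)  # "issue a warning" on a first offense
```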

At one level, the goal of this work is to address social problems with computation rather than to advance computer science for its own sake. At another, as computer science becomes more conscious of how values are embedded in the decisions computer scientists make and the artifacts they design, it can create more responsible systems.

What do you see on the horizon as interesting places to continue this research?
I think behaviors reinforced online can shape behaviors offline, even if those spaces look different. One hopeful agenda I have is to think of online spaces as gyms for our civic muscles. Just as you go to a gym to work out, we can use online communities to exercise our ability to express perspectives, listen to those who disagree and productively come to consensus. By giving people ways to work the muscles of speaking up and listening, we could motivate them to show up at town halls, say what's bothering them, listen to others and participate in local discussions.

A second direction I'm currently looking into is driven by how online communities are changing. One meaningful change is that non-people, like AI systems, are increasingly participating in online spaces, and they can join a conversation and change its course. The challenge is that if we don't understand the values baked into these systems, or the ways they affect our own beliefs and behaviors, we're left unsure about the impact they will have on collective discourse. Could we instead more deliberately choose the values they promote or elevate in a conversation?

Because these agentic systems can often be sycophantic, telling you what you want to hear or affirming your preconceived beliefs, we're looking at how that can affect your beliefs about a social situation or issue and your interactions about it. Once we understand those mechanisms, could we then say, "We want these AI systems to make people more open-minded or to increase intellectual humility in this conversation"?

I'm also working on a more overarching, methodological effort to ground the design of our online spaces and digital technologies more firmly in social scientific understanding. For a long time, much of the way computer scientists have gone about designing systems has been somewhat accidental, and there's no reason the original manifestation of an idea is its best design. For example, at some point somebody designed an anonymous conversation forum and it stuck. But there's no reason it had to be anonymous; maybe anonymity wasn't the best design. It was only after social scientists stepped in and analyzed these spaces that we could start to systematize the ways they shape behavior.

If we bridge the gap between the social sciences and the designers and technologists building these technologies, maybe we can be more intentional about how our spaces influence behavior and try to anticipate some of the undesirable consequences while we are still in the design process.

Media Contact 
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu

Author
Marylee Williams
