Think Out Loud

University of Washington collaborates on study using AI to reduce political polarization on social media

By Sheraz Sadiq (OPB)
Jan. 12, 2026 2 p.m.

Broadcast: Monday, Jan. 12


By many measures, political polarization in the U.S. has grown in recent years. It’s reflected in recent surveys, which show record-high numbers of Americans who identify as conservative or liberal, or the stark differences between Republicans’ and Democrats’ current feelings toward the federal government.


Social media can exacerbate this polarization, especially when the algorithms social media companies use feed content that not only aligns with a user’s political views but also attacks the opposing party’s candidates or values. But what if you could bypass that algorithm to make posts expressing partisan animosity or antidemocratic views less prominent?

Martin Saveski is an assistant professor in the University of Washington’s School of Information who recently explored these questions with researchers at Stanford University and Northeastern University. The scientists developed a tool that used AI to quickly scan social media posts for anti-democratic views or political animus, such as support for jailing political opponents. Saveski and his team used this tool in a study with Republicans and Democrats that reordered participants’ feeds on the social media site X so that anti-democratic or politically hostile content appeared higher or lower for seven days during the 2024 U.S. presidential election.

Saveski joins us to share the study’s results and the implications of giving users greater control over their social media algorithms.

Note: The following transcript was transcribed digitally and validated for accuracy, readability and formatting by an OPB volunteer.

Dave Miller: This is Think Out Loud on OPB. I’m Dave Miller. Political polarization in the U.S. has grown in recent years. Social media can exacerbate this polarization, especially when algorithms push content that attacks an opposing party’s candidates or values. But what if you could bypass that algorithm to make these kinds of posts less prominent? Martin Saveski is part of a team that recently did that. He is an assistant professor in the University of Washington’s School of Information. He joins us to talk about what he found. It’s great to have you on the show.

Martin Saveski: Thank you for having me, Dave.

Miller: What exactly prompted this research?

Saveski: It’s a great question. I should say, the main subject of this research was these feed-ranking algorithms, which are largely out of view on social media. They decide, out of all the posts that we could see, which ones we actually see, and in what order. They have a very big impact on our attention and, in turn, our attitudes and emotions. But so far, only social media companies have been able to understand them and shape them. So our team built a Chrome extension, a browser extension, that enables external researchers to manipulate the order in which posts are shown in people’s feeds. That gave us the ability to ask, what if we change the feed in a certain way, and to check what effect that has on people’s views.

Miller: What previous research has there been in teasing out the role that the makeup of someone’s social media feed has on the way that they think about people from opposing political parties?

Saveski: There have been a lot of correlational studies which say, well, if you’re exposed to more of this kind of content, you tend to feel a certain way, maybe colder towards the other side. But there have actually been very few experiments that estimate, or try to estimate, the causal effect of the algorithm itself. The most notable ones were during the 2020 elections, when Meta partnered with external researchers to test some of the most prominent ideas for how to reduce polarization.

One of them was, for example, instead of having an algorithmically sorted feed, switching back to a chronological feed where users see the things most recently posted by their friends or others. Another was showing fewer politically like-minded sources, to expose participants to more diverse views. Interestingly, neither of these two interventions had any effect on political polarization. They changed how much people engaged with the platforms but had no effect on polarization.

Miller: Did you do this work with a kind of background belief that polarization is, in and of itself, bad?

Saveski: Oh, it’s a great question. The part that I want to emphasize is that there are two types of polarization that scientists think about. The first one is issue polarization. In other words, do we disagree on issues like gun control, abortion and other important questions in society? That’s issue polarization, and we have seen that Americans are a lot more divided in those terms.

But what’s fascinating is what we have started seeing with another index, which political scientists have been tracking since the 1970s by asking people how they feel about the other party. In the last couple of decades, people don’t only disagree with the other side, they feel very negatively towards the other side. The hypothesis here is that having some warmth towards the other side is necessary for democratic functioning.

Miller: What do you mean by that?

Saveski: For people to talk to each other, understand each other’s positions, maybe share the same set of facts, and find solutions to the challenges in society.

Miller: How did you decide what kinds of posts to deprioritize in this experiment?

Saveski: Right, I should say that there are many, many plausible theories of what kind of content might be polarizing. Our hypothesis was that if we just made this kind of hyper-polarizing content less prominent, that would actually help change how people feel about the other side. And as I said, there are many different theories.

We decided to build on prior work that has articulated eight factors that might be problematic for healthy democratic functioning. These eight factors range from things like partisan animosity and support for partisan violence, to things like opposition to bipartisanship, social distance and social distrust. So when we analyze a post, we see how many of those factors are present, and that’s how we decide whether to intervene on it or not, whether to change its order or not.
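
To make that concrete, here is a rough, hypothetical sketch in Python of the decision Saveski describes. The factor names are the ones he mentions on air (the paper’s full list has eight), and the labeling function and threshold are illustrative placeholders, not the study’s actual code.

FACTORS = [
    "partisan animosity",
    "support for partisan violence",
    "opposition to bipartisanship",
    "social distance",
    "social distrust",
    # ...the remaining factors from the paper's list of eight would go here
]

def count_factors(post_text, label_fn):
    """Count how many factors an AI labeler flags in a post.

    `label_fn(post_text, factor)` stands in for whatever classifier the
    study used; it should return True if the factor is present.
    """
    return sum(bool(label_fn(post_text, factor)) for factor in FACTORS)

def should_intervene(post_text, label_fn, threshold=1):
    """Flag a post for reordering if at least `threshold` factors are present.

    The threshold of one factor is an assumption for illustration only.
    """
    return count_factors(post_text, label_fn) >= threshold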

Miller: Can you give us examples of posts that would merit this intervention, posts that the re-ranking tool you’ve created would have moved down in someone’s feed?

Saveski: I won’t repeat specific examples because I’m not sure I would like to do that on the air, but I’m happy to talk about the general principles. So for example –

Miller: Let me interrupt you, because I’m actually fascinated, even just by your demurring there… is the idea that you don’t want to say on a statewide radio show the kinds of messages that you think are damaging to the body politic? You don’t want to give them another megaphone?

Saveski: I think, with some of this language, a lot of it is in how things are expressed, and some of the language is actually pretty inflammatory. What is interesting about these posts is that if you analyze each of them individually, you can see that they’re phrased in a way that’s meant to be enraging, to trigger a little bit more engagement. So that’s my primary reason, and if you read the paper closely, we do have some examples that, as I said, are not the prettiest to repeat. But you can see a little bit in general…

Miller: Okay, but broadly, just so we are…I appreciate your care and I won’t push you further to give us exact tweets, but, broadly, what are versions of the language or styles that you would most want to put lower down in someone’s feed?

Saveski: These are things like advocating for violence, for example, or opposing bipartisan cooperation or saying that the other party cannot be trusted. Oftentimes it’s a combination of these things also expressed in a way that’s very negative, that’s meant to trigger a negative reaction.

Miller: This study went on for just seven days. What kinds of differences did you find at the end of it in the attitudes that people had toward folks from their opposing political parties, depending on whether these posts were deprioritized, whether it was the status quo of whatever X was serving up, or whether these inflammatory posts were actually prioritized and more likely to show up at the top of a feed?

Saveski: We ran the study for ten days. The first three days, nothing was changed on their feeds. But then, in the remaining seven days, for some people we made this kind of polarizing content a little bit easier to reach, and for some people a little harder to reach. And then for a third group, we didn’t do anything, their feeds were just like they would have been.

We asked them about their feelings about the other party, and this is a standard way of measuring political polarization: we ask, “How do you feel about the other party?” So if it’s a Democrat, we ask how they feel about Republicans, and the other way around. We asked them before and after the study. What we found when we surveyed them at the end was that if polarizing content appeared lower in their feeds, they felt about two degrees warmer towards the other party, and if they saw this kind of content more prominently, a little bit easier to reach, we found the opposite effect, the same size but in the opposite direction.
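
As a rough illustration of the measurement he describes, the outcome is the change in each participant’s 100-point “feeling thermometer” rating of the other party, averaged within each experimental arm. The data below are made up; only the roughly two-degree effect size comes from the study.

from statistics import mean

def average_warmth_change(participants):
    """Average post-minus-pre thermometer change for one experimental arm.

    Each participant is a dict with 'pre' and 'post' ratings on the
    100-point feeling thermometer.
    """
    return mean(p["post"] - p["pre"] for p in participants)

# Illustrative data only: the downranked arm warms by about two degrees,
# the upranked arm cools by a similar amount.
downranked = [{"pre": 40, "post": 42}, {"pre": 55, "post": 57}]
upranked   = [{"pre": 40, "post": 38}, {"pre": 55, "post": 53}]
print(average_warmth_change(downranked))   #  2.0
print(average_warmth_change(upranked))     # -2.0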

Miller: Am I right, this is two degrees on a scale of 1 to 100?


Saveski: Correct, and that may seem small, and if you haven’t heard a figure like that before, it’s probably difficult to contextualize. But to give you a sense, we looked at the historical trend of political polarization in the U.S., and this is about three years of change in terms of that trend. So this seven-day intervention shifted people’s feelings by about as much as the historical trend would have shifted them over three years.

Miller: Was there any difference in those effects based on people’s political leanings to begin with, whether they were Republicans thinking about Democrats or Democrats thinking about Republicans?

Saveski: The very fascinating part about this is that the effects were actually very similar for different subpopulations, whether it was Democrats and Republicans, people of high or low socioeconomic status, young or old – we found no evidence that some groups were more or less affected by the intervention.

Miller: One of the things that’s striking about this study is that you did not rely on a social media company – X in this case – to change its algorithm. You didn’t go to Elon Musk and say, hey, let’s reduce partisanship, please do this thing. Instead, you used this add-on to change the way posts were ordered for the folks in your experiment. How much control can you exert on a feed, as a third party?

Saveski: The way we do this is that first, we see what Twitter would have shown to you. We know what that list looks like, and then we are able to rearrange it, sometimes making some things more prominent and sometimes less prominent. We do not have the same ability that, let’s say, Twitter has internally, where they have access to any possible post that was posted, but we do still have a lot of control over what people see.

Miller: But just to be clear, you’re not deleting posts, right? You couldn’t do that. You’re just reordering, so a progressive user’s feed was maybe less full of anti-MAGA content at the top, or vice versa?

Saveski: Right, this is actually an important distinction. We did not delete content. We could have. We could have completely removed it and just not displayed it to the participants. But instead, what we did was change its order, essentially. People usually don’t scroll very far, and so how far down a post sits makes it more or less likely to be seen.
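
A simplified sketch of that reordering step might look like the following. The real extension works inside the browser on the feed X has already assembled, and likely uses a more graduated scheme than the simple two-bucket sort shown here; the point is that nothing is removed, flagged posts are only moved.

def rerank_feed(posts, is_polarizing, direction="down"):
    """Reorder a feed without deleting anything.

    `posts` is the list of posts exactly as the platform would have shown
    them, and `is_polarizing` stands in for the AI labeler described above.
    direction="down" buries flagged posts; direction="up" surfaces them.
    Python's sort is stable, so the platform's original order is preserved
    within the flagged and unflagged groups.
    """
    if direction == "down":
        return sorted(posts, key=lambda p: is_polarizing(p))
    return sorted(posts, key=lambda p: not is_polarizing(p))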

Miller: And you used an AI model to do this. How much faith do you have in AI to identify posts correctly, and to do what you actually wanted?

Saveski: We actually evaluated how well the model does and to give you an intuitive sense of how good it is, if the two of us sit down and label posts for these eight factors that I previously mentioned, and we ask AI to do the same, we would disagree by about the same amount as we would disagree with the AI. That’s to say, these things are hard to label and sometimes two people may disagree, but AI is actually at a very similar level. That’s not to say it doesn’t make any mistakes, but it’s relatively accurate.
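
The comparison he is describing can be sketched as follows: measure how often two human labelers agree with each other, then how often each agrees with the model, on the same posts. Plain percent agreement is used here for simplicity; the paper may report a different statistic, and the labels below are made up.

def percent_agreement(labels_a, labels_b):
    """Fraction of posts on which two labelers gave the same yes/no judgment."""
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

# Made-up labels for eight posts on a single factor (1 = present, 0 = absent).
human_1 = [1, 0, 1, 1, 0, 0, 1, 0]
human_2 = [1, 0, 0, 1, 0, 1, 1, 0]
model   = [1, 1, 1, 1, 0, 1, 1, 0]

print(percent_agreement(human_1, human_2))  # 0.75, human-human baseline
print(percent_agreement(human_1, model))    # 0.75, human-model agreement is
print(percent_agreement(human_2, model))    # 0.75, similar to the baseline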

Miller: It reminds me a little bit of when people talk about how good autonomous vehicles are, and other people say, well, it’s not like humans are great at driving either.

Saveski: I think it’s important to have some baseline. We may disagree on some things, and just having that as a perspective is important.

Miller: I want to go back to where you started, talking about the kind of “black-boxness” of these social media companies’ algorithms. Were you able to understand more about how these algorithms work in the process of tweaking what they serve up?

Saveski: Unfortunately, I think not as much, partly because we had a very specific question that we were interested in. We really wanted to know whether making this polarizing content harder or easier to reach would have an effect. But that’s a fascinating question, and I have a lot of colleagues who work in this area called algorithmic auditing, trying to figure out what it is that the algorithms are actually doing.

And I will say that some of them are fairly complicated and very much like a black box, to the extent that even the people who build them don’t quite know how a certain piece of content will be ranked. That depends on all the other things that could be ranked, so it’s really a difficult problem. But that’s partly what’s exciting about the methodology we have built. We hope that other researchers will use it to shed more light on how the algorithms work and how we can make them better.

Miller: I think that there’s been a popular conception in recent years that social media companies have been very happy to fill up folks’ feeds with extreme partisan messaging, because those messages generate outrage, outrage can lead to higher engagement, and higher engagement means more ad revenue.

Did you find any difference in the time spent on the site, in these different groups, the folks who had a lower level, the same level or a higher level of this content?

Saveski: Yes, we did find some interesting differences. The folks who were exposed to less polarizing content spent about five minutes less on the platform per day. They also saw fewer posts, because they were on the platform less. But what’s fascinating is that among the things they did see, they were much more likely to engage positively, to like them or repost them. So it’s not clear-cut that decreasing polarization necessarily reduces engagement in all respects.

And what’s even more interesting is that when we increased the prominence of this kind of content, we didn’t see a big difference. Actually, none of these differences in time spent were that big.

Miller: But if you put yourself in a shareholder’s, or just a private owner’s, mind, how do you think Meta or whoever would view this experiment financially? If they don’t care about the future of society – and there are plenty of people who would argue that there’s little evidence these companies do – how do you think they would view your results from a purely financial perspective?

Saveski: Well, I actually wanted to add this to my previous answer, because I think it’s just another way to think about this. Rather than focusing on the short-term gains of giving people the most enraging content, it’s also important to think about the long-term effects of that. While people may want that in the moment, in the long term that may not necessarily be the best outcome, or they may not stay on the platform as long. So I think it’s not necessarily such a clear-cut decision.

I also wanna say that there has been a little bit of a failure of imagination about what these algorithms could do. Engagement is easy to observe, and naturally the platforms have thought about engagement primarily. But now that we have all these AI capabilities that help us analyze content in a very intentional way, I think that opens a lot of exciting opportunities.

Miller: Could your plug-in, this sort of end-user-level moderation, be used to sort feeds with other goals in mind?

Saveski: That’s right. Not in our current version, but it could be extended to do that, and we are actually very excited about that. In this particular study, we focused on political polarization. But there are many other important outcomes, whether it’s people’s well-being, mental health or news consumption, where the algorithm could be tweaked to make improvements.

And if you take this to the extreme, it could actually give users autonomy to specify what they want. Instead of us deciding, or the platforms deciding, what should go higher or lower, you might be able to specify the things that you would like to see more or less of.

Miller: Within the walled garden of these social media offerings, there’d be some element of personal control to tweak the feeds to our own liking or for our own purposes?

Saveski: Correct, that’s the idea. Again, you cannot invent content, but within the walled garden, as you described it, you could specify what you’d like more or less of.

Miller: Has working on this study affected the way you think about what’s being served up to you?

Saveski: Oh, interesting question. It certainly has. I think I’m consuming social media probably differently than most people. I’ve worked on this for over a decade, so it’s been interesting to see how the algorithms have evolved in different ways, and it has me wondering a lot more about why some content is served to me and not something else.

I think, actually, most of us do that all the time, and have some sort of folk theory of how the algorithm works and what we can do to signal to it to show us more of the stuff that we wanna see, and that’s partly why I think these extensions that give people more autonomy are exciting.

Miller: Martin Saveski, thanks very much.

Saveski: Thank you for having me.

Miller: Martin Saveski is an assistant professor at the Information School at the University of Washington.

“Think Out Loud®” broadcasts live at noon every day and rebroadcasts at 8 p.m.

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983.
