UW News

December 3, 2025

Social media research tool can reduce polarization — it could also lead to more user control over algorithms

A web-based method was shown to mitigate political polarization on X by nudging antidemocratic and extremely negative partisan posts lower in a user’s feed. The tool, which is independent of the platform, has the potential to give users more say over what they see on social media. iStock

A new tool shows it is possible to turn down the partisan rancor in an X feed — without removing political posts and without the direct cooperation of the platform. 

The study, from researchers at the University of Washington, Stanford University and Northeastern University, also indicates that it may one day be possible to let users take control of their social media algorithms. 

The researchers created a seamless, web-based tool that reorders a user’s feed, moving posts lower when they contain antidemocratic attitudes and partisan animosity, such as advocating violence against, or the jailing of, supporters of the opposing party.

Researchers published their findings Nov. 27 in Science.

“Social media algorithms direct our attention and influence our moods and attitudes, but until now, only platforms had the power to change their algorithms’ design and study their effects,” said co-lead author Martin Saveski, a UW assistant professor in the Information School. “Our tool gives that ability to external researchers.”

In an experiment, about 1,200 volunteer participants used the tool over 10 days during the 2024 election. Participants who had antidemocratic content downranked showed more positive views of the opposing party. The effect was also bipartisan, holding true for people who identified as liberals or conservatives.

“Previous studies intervened at the level of the users or platform features — demoting content from users with similar political views, or switching to a chronological feed, for example. But we built on recent advances in AI to develop a more nuanced intervention that reranks content that is likely to polarize,” Saveski said.

For this study, the team drew on previous sociology research identifying categories of antidemocratic attitudes and partisan animosity that can threaten democracy. In addition to advocating for extreme measures against the opposing party, these attitudes include rejection of any bipartisan cooperation, skepticism of facts that favor the other party’s views, and a willingness to forgo democratic principles to help the favored party.
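
To make that taxonomy concrete, the categories can be written down as criteria for a classifier. The sketch below is a hypothetical illustration rather than the study’s actual materials: the category wording paraphrases this article, and build_prompt is an invented helper showing how such criteria could be posed to a language model.

```python
# Categories of antidemocratic attitudes and partisan animosity,
# paraphrased from this article; the study's own definitions come
# from prior sociology research.
CATEGORIES = (
    "advocates extreme measures against the opposing party, such as "
    "violence or jailing its supporters",
    "rejects any bipartisan cooperation",
    "dismisses facts that favor the other party's views",
    "is willing to forgo democratic principles to help the favored party",
)

def build_prompt(post_text: str) -> str:
    """Hypothetical helper: format a yes/no classification question
    about a single post that a language model could answer."""
    criteria = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Does the following social media post express any of these "
        f"attitudes?\n{criteria}\n\nPost: {post_text}\nAnswer yes or no."
    )
```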

The researchers tackled the problem from a range of disciplines including information science, computer science, psychology and communication. 

The team created a web extension coupled with a large language model that scans posts for these types of antidemocratic and extreme negative partisan sentiments. The tool then reorders posts in the user’s X feed in a matter of seconds.
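
The reranking step itself can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the team’s released code: Post, score_animosity and rerank_feed are invented names, and a keyword check stands in for the LLM classifier the tool actually uses.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def score_animosity(post: Post) -> float:
    """Stand-in for the LLM classifier: return a score in [0, 1]
    estimating antidemocratic or extreme partisan sentiment. The real
    tool queries a language model; this toy version matches a few
    keywords so the example runs offline."""
    flagged_phrases = ("should be jailed", "deserve violence")
    text = post.text.lower()
    return 1.0 if any(p in text for p in flagged_phrases) else 0.0

def rerank_feed(posts: list[Post], threshold: float = 0.5,
                demote: bool = True) -> list[Post]:
    """Reorder the feed without removing anything. Python's sort is
    stable, so posts on the same side of the threshold keep their
    original relative order; flagged posts sink to the bottom
    (demote=True) or rise to the top (demote=False)."""
    def is_flagged(p: Post) -> bool:
        return score_animosity(p) >= threshold
    return sorted(posts, key=is_flagged, reverse=not demote)

feed = [
    Post("1", "Volunteers cleaned up the riverfront this weekend."),
    Post("2", "Supporters of the other party should be jailed."),
    Post("3", "A side-by-side look at the candidates' tax plans."),
]
for post in rerank_feed(feed):  # post 2 drops to the bottom
    print(post.post_id, post.text)
```

Because the sort only reshuffles positions, every post stays in the feed, mirroring the study’s design of demoting or promoting content rather than removing it.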

Then, in separate experiments, the researchers had participants view their feeds with this type of content either downranked or upranked over seven days and compared their reactions to those of a control group. No posts were removed, but the more incendiary political posts appeared lower or higher in their content streams.

The impact on polarization was clear. 

“When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” said co-lead author Tiziano Piccardi, an assistant professor at Johns Hopkins University. “When they were exposed to more, they felt colder.” 

Before and after the experiment, the researchers surveyed participants on their feelings toward the opposing party on a scale of 1 to 100. Attitudes among participants who had the negative content downranked improved by an average of two points, equivalent to the estimated change in attitudes that has occurred across the general U.S. population over a period of three years.

The researchers are now exploring other interventions that use a similar method, including ones that aim to improve mental health. The team has also made the tool’s code available so that other researchers and developers can use it to create their own ranking systems independent of a social media platform’s algorithm.

“In this work, we focused on affective polarization, but our framework can be applied to improve other outcomes, including well-being, mental health and civic engagement,” Saveski said. “We hope that other researchers will use our tool to explore the vast design space of potential feed algorithms and articulate alternative visions of how social media platforms could operate.”

Additional co-authors on this study include Chenyan Jia of Northeastern and Michael Bernstein, Jeanne Tsai and Jeff Hancock of Stanford.

This work was supported in part by the National Science Foundation, the Swiss National Science Foundation and a Hoffman-Yee grant from the Stanford Institute for Human-Centered Artificial Intelligence. 

For more information, contact Saveski at msaveski@uw.edu.

This story was adapted from a press release by Stanford University.
