UW News

April 21, 2021

Q&A: It’s not just social media — misinformation can spread in scientific communication too


Academia is not immune to spreading misinformation, write UW researchers Jevin West and Carl Bergstrom in a recent paper. University of Washington

When people think of misinformation, they often focus on popular and social media. But in a paper published April 12 in the Proceedings of the National Academy of Sciences, University of Washington faculty members Jevin West and Carl Bergstrom write that scientific communication — both scientific papers and news articles written about papers — also has the potential to spread misinformation.

The researchers note that this doesn’t mean that science is broken. “Far from it,” write West, an associate professor at the UW Information School and the Center for an Informed Public’s inaugural director, and Bergstrom, a UW biology professor and a CIP faculty member. “Science is the greatest of human inventions for understanding our world, and it functions remarkably well despite these challenges. Still, scientists compete for eyeballs just as journalists do.”

UW News asked West and Bergstrom to discuss misinformation in and about science. Their emailed responses are below:

UW News: Many of us are familiar with the idea of fake news or misinformation on social media. Can you explain how some of these same concepts — such as hype and hyperbole, bias, filter bubbles and echo chambers, and data distortion — also pop up in science and science communication? Why does this happen?


Jevin West

Science is run by humans, and humans respond to incentives. Scientists have strong incentives to be first to a result and to have their work noticed. Attention is a scarce resource. This creates an environment where scientists, universities, funders and journalists hype work more than the results warrant. One example is an eye-catching paper title or a headline from a science journalist: “Muons upend all of physics.”


Carl Bergstrom

Researchers used to visit libraries and browse printed journals to keep up on the latest scientific research, but this is largely a thing of the past. Today most researchers access the literature through search engines, recommender systems and, to some degree, social media platforms. That creates the same kind of filter bubble problems that we see in society more broadly. Platforms optimize for engagement, and the best way to engage a person is to deliver content that grabs their attention. Although the effects are less pronounced in science, the problem is not well understood and deserves more attention.


West and Bergstrom are co-authors of “Calling Bullshit: The Art of Skepticism in a Data-Driven World,” which came out in paperback this week.


How does a crisis like COVID-19 further fuel these issues?

The COVID-19 crisis, like any major crisis, involves high levels of uncertainty, especially at first. As we tried to understand what was happening with SARS-CoV-2 early in 2020, we were looking at a virus about which we had very little prior knowledge — it had never been in humans until just a few months before. In uncertain environments, people are especially eager for answers. This creates an uncertainty vacuum into which all sorts of nonsense flows.

While scientists take their time to understand the origin of the virus, conspiracy theorists provide ready-made answers. Those with specific agendas cherry-pick from the range of research results. Scientists strive to accelerate research by sharing work prior to peer review, but reporters and others do not always treat that work with due caution. Journals try to hasten the peer review process, but sometimes this results in low-quality work slipping through.

Despite all these challenges, science has come through remarkably well. Within 15 months, 10 vaccines have already been developed, with more on the way. Scientists sequenced the virus’s genome in a matter of days, worked out the structure of the virus and its proteins in exquisite detail, and are using sequence data from around the globe to track the spread and evolution of the virus and its many variants. Despite the challenges noted in our article, science remains among the greatest human inventions for understanding our world.

The term “significant” has a unique meaning to the scientific community. Can you describe that difference? How does the push for significance affect scientific results and papers?

In the science community, “significant” generally refers to statistical significance — the idea that a result as extreme as the one observed would be unlikely to arise by chance under some null hypothesis. This is a tricky concept, not only for the public, but also for scientists. Statistical significance does not necessarily mean that the effect is of a meaningfully important size. The cutoffs for declaring statistical significance differ based on the type of data and the discipline. And once a threshold level of statistical significance becomes entrenched, humans find ways to game the system to reach it — trying different methods until something works, for example. These are major topics of discussion in science today, and researchers are looking for better ways to report the degree of statistical support that their results carry. Again, as with the other topics discussed in this article, it doesn’t mean science is broken. It just means that science is in an ongoing process of refinement and improvement.
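To make this concrete, here is a minimal Python sketch (our illustration, not an analysis from the paper) of why trying different methods until one clears the threshold inflates false positives: every simulated dataset is pure noise, yet reporting the best of several looks at the same data crosses p < 0.05 more often than the nominal 5% of the time. The sample size, the three alternative analyses and the 0.05 cutoff are arbitrary assumptions chosen only for the example.

```python
# Hypothetical illustration: "try different methods until something works."
# Every dataset is pure noise (no real effect), but reporting the smallest
# p-value across several analyses inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, n, alpha = 2000, 30, 0.05
false_positives = 0

for _ in range(trials):
    x = rng.normal(size=n)  # the true mean is 0: any "discovery" is spurious
    pvals = [
        stats.ttest_1samp(x, 0).pvalue,                  # analyze the full sample
        stats.ttest_1samp(x[:20], 0).pvalue,             # "try" a smaller subset
        stats.ttest_1samp(x[np.abs(x) < 2], 0).pvalue,   # drop "outliers" and retry
    ]
    if min(pvals) < alpha:  # report whichever analysis "worked"
        false_positives += 1

print(f"False-positive rate with cherry-picked analyses: "
      f"{false_positives / trials:.2%} (nominal level is {alpha:.0%})")
```

On a typical run the observed rate lands noticeably above 5%, even though no real effect exists in any dataset.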

Can you talk about what happens when scientists find negative or non-significant results? Why could this be a problem?

Negative results tend to be boring: This drug doesn’t cure a disease, this sensor does not detect its target, this chemical reaction fails to proceed, this explanation for a phenomenon is unfounded. As a result, people are less interested in reading them, journals are less interested in publishing them and consequently scientists often cut their losses and don’t bother submitting negative results for publication. But this creates problems of its own. If scientists preferentially publish positive results, the scientific record is not an unbiased picture of scientific discovery. The positive results are in journals for everyone to read, while the negative results are hidden away in file cabinets or, more recently, on file systems. Indeed, false claims can even become established as fact. Bergstrom and colleagues wrote about this in 2016.
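As a rough, hypothetical illustration of that bias (our numbers, not figures from the 2016 paper), suppose only 10% of tested hypotheses are actually true, tests detect true effects 80% of the time, and null effects produce false positives 5% of the time. If journals print only the “positive” results, a surprisingly large share of the published record is false:

```python
# Back-of-the-envelope sketch of publication bias with assumed numbers.
true_fraction = 0.10   # assumed share of tested hypotheses that are really true
power = 0.80           # assumed chance a true effect yields a significant result
alpha = 0.05           # assumed chance a null effect yields a significant result

published_true = true_fraction * power          # true findings that get published
published_false = (1 - true_fraction) * alpha   # false positives that get published

share_false = published_false / (published_true + published_false)
print(f"Share of published 'positive' findings that are false: {share_false:.0%}")
# About 36% under these assumed numbers; the negative results that would
# correct the picture stay in the file drawer.
```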

Fortunately, science has recognized this problem over the last decade and has proposed some solutions. For example, some publishers encourage the publication of negative results. Some fields have adopted a system known as “registered reports,” where researchers submit their experiment for peer review before the results are available, and publishers agree before the work is done to publish the findings regardless of whether they end up positive or negative.

What are some interventions that can help reduce misinformation both in science and in communications about science?

The most important intervention is teaching the public what science is and what it is not. This includes teaching about the history and philosophy of science. It requires scientists themselves engaging with the public. It involves calling out predatory journals (non-peer-reviewed journals), being cautious with preprint papers, understanding the tactics of those pushing purposeful and disingenuous doubt about science (e.g., agnotology), and paying special attention to health misinformation that looks like science but is often anything but.

With more people paying attention to science and preprints right now thanks to the COVID-19 pandemic, what are some steps the general public can take when looking at preprints or news stories about science?

The rise of preprints is a good thing for science. Instead of waiting years for results, research findings can be made available immediately. During the pandemic this has been critical. But this shortened time scale comes at a cost. Preprints are not peer-reviewed. Peer review can take months and even years, and it doesn’t guarantee foolproof results. But it does a reasonably good job at filtering out the crackpot papers and those with obvious problems.

The public and journalists have to be extra careful with preprints. Some preprints during the pandemic spread across the media landscape even though they had major problems and were debunked by more credible experts. If referencing newly deposited preprints, readers should invest more time in investigating the author, lab and institution pushing the results. When sharing results from preprints, it is important to tag the paper as non-peer-reviewed.

That said, some of the worst and most damaging papers published during the pandemic have gone through peer review, including a paper in The Lancet that led to the cancellation of clinical trials — and later turned out to be fraudulent — so we have to be careful not to let up our guard on the peer-reviewed literature as well.

For more information, contact West at jevinw@uw.edu or Bergstrom at cbergst@uw.edu.
