A New Goal: Aim To Be Less Wrong

At a conference last week, I received an interesting piece of advice:

"Assume you are wrong."

The advice came from Brian Nosek, a fellow psychology professor and the executive director of the Center for Open Science. Nosek wasn't objecting to any particular claim I'd made — he was offering a strategy for pursuing better science, and for encouraging others to do the same.

To understand the context for Nosek's advice, we need to take a step back — to the nature of science itself, and to a methodological revolution that's been shaking the field of psychology.

You see, despite what many of us learned in elementary school, there is no single scientific method. Just as scientific theories become elaborated and change, so do scientific methods. The randomized controlled trial — which we now take for granted as a method for evaluating the causal efficacy of a drug — was a methodological innovation. Statistical significance testing — which is often taken for granted as a way of evaluating how likely an outcome would be if chance alone were at work — was a methodological innovation.

Triggered by a so-called "replication crisis," the field of psychology has been actively engaged in a critical evaluation of our methods and how they ought to be improved. This has involved a close look at how psychological science is produced, evaluated, and published, alongside a shift in norms concerning how decisions about studies are made and reported.

For example, many academic journals have moved away from a narrow focus on statistical significance testing to consider a broader range of statistical measures. And some journals now allow researchers to submit papers that undergo peer review prior to data collection, thus ensuring that decisions about the study are not made after seeing the data, and that the evaluation of the paper is not based on whether the data fit with reviewers' preferences or expectations. These, too, are methodological innovations.

But methodological reform hasn't come without some fretting and friction. Nasty things have been said by methodological reformers; nasty things have been said about methodological reformers. There have been victims (though who they are depends on whom you ask). Few people like public criticism, or having the value of their life's work called into question. On the other side, few people are good at voicing criticisms in kind and constructive ways. So part of the challenge is figuring out how to bake critical self-reflection into the culture of science itself, so it unfolds as a welcome and integrated part of the process, and not an embarrassing sideshow.

In some ways, science is already the poster child for critical self-reflection. As a community, we actively try to falsify our own and other people's ideas. Peer review is basically structured peer criticism, and the methodological innovations that fuel science come from science itself. But scientists are still humans and, like any human community, the scientific community can benefit from norms that make it easier to embrace our overarching values. These values include a commitment to seeking out and pursuing scientific methods that yield reliable conclusions — even as we use the tools of science to determine what those methods ought to be.

So how can the scientific community better instantiate such norms? What, for example, does this mean for scientists conducting basic research?

When Nosek recommended that I and other scientists assume that we are wrong, he was sharing a strategy that he's employed in his own lab — a strategy for changing the way we offer and respond to critique.

Assuming you are right might be a motivating force, sustaining the enormous effort that conducting scientific work requires. But it also makes it easy to construe criticisms as personal attacks, and for scientific arguments to devolve into personal battles. If you begin, instead, from the assumption that you are wrong, a criticism is easier to construe as a helpful pointer, a constructive suggestion for how to be less wrong — a goal that your critic presumably shares.

This advice may sound unduly pessimistic, but it's not so foreign to science. Philosophers of science sometimes refer to the "pessimistic meta-induction" from the history of science: All of our past scientific theories have been wrong, so surely our current theories will turn out to be wrong, too. That doesn't mean we haven't made progress, but it does suggest that there is always room for improvement and elaboration — ways to be less wrong.

One worry about this approach is that it could be demoralizing for scientists. Striving to be less wrong might be a less effective prod than the promise of being right. Another concern is that a strategy that works well within science could backfire when it comes to communicating science to the public. This is a topic I've written about before: Without an appreciation for how science works, it's easy to take uncertainty or disagreements as marks against science, when in fact they reflect some of the very features of science that make it our best approach to reaching reliable conclusions about the world. Science is reliable because it responds to evidence: As the quantity and quality of our evidence improve, our theories can and should change, too.

Despite these worries, I like Nosek's suggestion because it builds in epistemic humility ("there are things I do not know!") along with a sense that we can do better ("there are things I do not know yet!"). It also builds in a sense of community — we're all in the same boat when it comes to falling short of getting things right. Perhaps the focus on a shared goal — our goal as scientists and humans of being less wrong — can help compensate for any harms to scientific motivation or communication.

I also like Nosek's advice because it isn't restricted to science. Striving to be less wrong — rather than more right — could be a beneficial way to construe our aims across a variety of contexts, whether it's a marital dispute or a business decision. I may be wrong about who did the dishes last night, or about which stock is the best investment; if I begin from the assumption that I'm fallible and striving to be less wrong, a challenge may not feel so threatening.

Unfortunately, this still leaves us with an untested psychological hypothesis: that assuming one is wrong can change community norms for the better, and ultimately support better science (and even, perhaps, better judgments and decisions in everyday life).

I don't know if that's true. In fact, I should probably assume that it's wrong. But with the benefit of the scientific community and our best methodological tools, I hope we can get it less wrong, together.


Tania Lombrozo is a contributor to the NPR blog 13.7: Cosmos & Culture. She is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. Lombrozo directs the Concepts and Cognition Lab, where she and her students study aspects of human cognition at the intersection of philosophy and psychology, including the drive to explain and its relationship to understanding, various aspects of causal and moral reasoning, and all kinds of learning. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo.