What is the purpose of this project?
Transparent Replications by Clearer Thinking aims to advance social science knowledge, reward research transparency, and incentivize publication of high-quality social science research. By conducting replications shortly after new studies are published in top journals, we help academics and others evaluate recent research findings, improve study design, shift incentives, and contribute to scientific understanding in psychology.
We reward the use of best practices, including pre-registration of research designs, making study materials publicly available, and other efforts to increase transparency and make replication possible. Our project celebrates research that uses these best practices, with the aim of further incentivizing their spread.
Our focus on recent studies advances understanding of current research questions by clarifying how much confidence published results warrant.
Ultimately, our goal is to improve the effectiveness of the scientific community at answering important psychological and social science questions.
Why is replication important?
The purpose of scientific research is to produce knowledge. That sounds obvious, but it’s important to emphasize that the goal of all of the work that researchers are doing – running studies, analyzing data, publishing results – is to find out what is true about the world. The scientific method is the most reliable means humanity has invented for accomplishing that important and difficult goal. One of the fundamental tools in the scientific method toolkit is replicability – a claim made in scientific research should come along with enough information about how it was tested that another person could do the same test and see if they get the same result. Science without replication fails to be self-correcting, because new research builds on past errors and false positives instead of fixing them.
Despite the crucial importance of replication to the scientific process, replication studies are rarely conducted. Academic researchers are encouraged to focus on developing their own work, and are typically not rewarded (or funded) for spending time conducting replications. This has contributed to a “replication crisis” in the social sciences (as discussed in both academic work and in the popular press). When researchers did attempt to replicate many prominent and widely-accepted social science findings, they found that a large number of the results were not supported by the replication experiments.
There are a number of reasons why a study, even one that is well designed, may have results that are not supported when the study is replicated (e.g., there may be subtle differences in the experimental design or populations between the two studies, or bad luck may simply have occurred). Even the best researchers will sometimes have their studies fail to replicate, so a failure to replicate should not be an indictment of the researcher. That being said, when a study fails to replicate, it should reduce our credence in the claims of that particular study.
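One way to make "reduce our credence" concrete is a quick Bayesian calculation. The sketch below is purely illustrative – the prior, power, and false positive rate are numbers we have assumed for the example, not figures from any actual study:

```python
# A minimal sketch of updating credence after a failed replication,
# using Bayes' rule with illustrative (assumed) numbers.

prior = 0.50       # credence that the effect is real, before the replication
power = 0.90       # P(replication succeeds | effect is real) -- assumed
false_pos = 0.05   # P(replication succeeds | no real effect) -- assumed

# Total probability that the replication fails
p_fail = (1 - power) * prior + (1 - false_pos) * (1 - prior)

# Posterior credence that the effect is real, given a failed replication
posterior = (1 - power) * prior / p_fail
print(f"credence after a failed replication: {posterior:.2f}")  # ~0.10
```

Notice that even in this example the posterior is not zero – a single failed replication weakens, but does not demolish, the case for an effect, which is why we treat replication results as evidence rather than verdicts.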
Since knowledge in any field builds on a foundation of prior work, it’s crucial that the findings that become accepted as true are reliable, and not simply the result of statistical flukes. Building norms of transparency and replication into the standard process of scientific research makes the search for truth far more reliable.
Who created this project?
Transparent Replications is a project of ClearerThinking.org. On its website, Clearer Thinking provides free educational tools and learning modules that make current social science research accessible to the general public in order to help people improve their lives. This project extends that mission to advancing social science research itself.
Initial funding for this project came from grants, including seed funding from an ACX grant.
How do you select studies to replicate?
We chose five journals to focus on for our replication efforts – Nature, Science, The Proceedings of the National Academy of Sciences (PNAS), Journal of Personality and Social Psychology (JPSP), and Psychological Science (PSci). These journals were selected on the basis of their impact factors. To help maximize the impact of our work, we decided to focus our replications on journals that are among the most read, most cited, and most influential in shaping psychology research.
We aim to replicate a study from every new psychology paper published in Nature or Science (within our budgetary and technical constraints), because these journals are highly influential, publish work from all scientific fields, and publish psychology papers relatively rarely. For the other three journals, we evaluate all newly published papers to see whether they are candidates for replication by our project, and then we select randomly from that pool of eligible papers, seeking to balance the number of papers chosen from each journal and to give each eligible paper a roughly equal chance of being selected for replication. Within a paper that presents multiple studies, we select a study to replicate based on the centrality of its claims to the paper overall, whether the results seem non-obvious, and our ability to run a high-fidelity replication of the study.
For more about how we select studies, see What We Do.
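As a rough illustration of the random-selection step described above, a balanced draw from the eligible pool might look like the sketch below. The data structures, field names, and function are hypothetical – our actual process also involves editorial judgment about each paper's eligibility:

```python
import random
from collections import defaultdict

# Hypothetical pool of eligible papers, tagged by journal.
eligible = [
    {"title": "Paper A", "journal": "PNAS"},
    {"title": "Paper B", "journal": "JPSP"},
    {"title": "Paper C", "journal": "PSci"},
    {"title": "Paper D", "journal": "PNAS"},
]

def balanced_draw(papers, n_per_journal=1, seed=None):
    """Randomly select papers, balancing the count drawn from each journal."""
    rng = random.Random(seed)
    by_journal = defaultdict(list)
    for paper in papers:
        by_journal[paper["journal"]].append(paper)
    selected = []
    for journal, pool in by_journal.items():
        rng.shuffle(pool)  # give each eligible paper an equal chance
        selected.extend(pool[:n_per_journal])
    return selected

print(balanced_draw(eligible, n_per_journal=1, seed=42))
```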
How can I contribute to the project?
If you are interested in helping conduct replication studies, we are looking for people with social science research skills including experience running studies and performing statistical analyses. If that describes you, you might be a good candidate for our Replication Scholars program. Fill out our short interest form, and we’ll get in touch!
This project is funded in part through grants. If you know of grantmaking programs for which Transparent Replications may be a good fit, please let us know.
If you believe in our work and would like to help us continue it, we’d really appreciate it if you’d support us on Patreon.
For more ways you can contribute, check out the Get Involved page.
What criteria do you use to evaluate the papers you replicate?
We use three main criteria for evaluating studies: transparency, replicability, and clarity.
- Our transparency criterion rewards researchers who engage in best practices by being transparent about how their research is conducted. Making experimental materials, data, and analysis code publicly available is one important part of our transparency rating – this rewards researchers who make their work easy to evaluate and easy to replicate by being forthcoming about all of the relevant details. Additionally, we evaluate whether researchers pre-registered their planned analyses and how consistent their methods were with their pre-registered plan. By publicly registering their hypotheses and the analyses planned to test them, researchers are being transparent about their research process. This form of transparency is important because researchers have to make numerous choices while collecting, cleaning, and analyzing data. If these choices are made after looking at the data, the results can be influenced by researchers' expectations or by p-hacking (trying many analyses and reporting only those that reach significance; see the simulation sketch after this list). Pre-registration ensures that these choices are made before the data are collected and analyzed, reducing what are called "researcher degrees of freedom" and reducing confirmation bias.
- Our replicability criterion evaluates how well the findings reported in the original study match the results of our replication. It is important to note that there are a number of reasons that even a well-designed and well-conducted study may not replicate (e.g., small variations in study design or study population, or even just bad luck). To help ensure our accuracy, we show our replication study materials to the original research team before running the study so that they can point out any potential deviations from their original design, which we then correct before running the replication. Even the best researchers will, at times, produce work that doesn't replicate. That being said, a failure to replicate should reduce our credence in the result of a particular study. Replication results inform our understanding of the generalizability, robustness, and reliability of research, and they help move the scientific community toward more accurate understandings of important phenomena.
- Our clarity criterion assesses how likely a reader is to misinterpret the results of the study based on the discussion in the paper. Presenting results clearly enough that readers draw accurate inferences about what a study does and doesn't imply can be difficult, but it is extremely important. Prominent findings are built on by other researchers and may spread to the general public as well (e.g., through media coverage). Although researchers certainly do not have complete control over how other researchers, the media, and the public interpret their results, a clear presentation that doesn't overclaim can reduce the chances of misinterpretation.
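To make the p-hacking concern under our transparency criterion concrete, here is a minimal simulation sketch. It assumes a researcher measures several outcomes, none of which has a real effect, and reports whichever looks "significant"; all numbers are illustrative assumptions:

```python
import random
import statistics
from math import sqrt

# Simulate p-hacking: with NO true effect, test several outcome measures
# and report a study as "successful" if ANY outcome looks significant.

def looks_significant(a, b, threshold=1.96):
    """Crude two-sample z-style test; True if |z| exceeds the threshold."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return abs(statistics.mean(a) - statistics.mean(b)) / se > threshold

def run_study(n=50, n_outcomes=5, rng=random):
    """Return True if any of several null outcomes looks significant."""
    for _ in range(n_outcomes):
        treat = [rng.gauss(0, 1) for _ in range(n)]    # no real effect
        control = [rng.gauss(0, 1) for _ in range(n)]
        if looks_significant(treat, control):
            return True
    return False

hits = sum(run_study() for _ in range(2000))
print(f"false positive rate with 5 outcomes: {hits / 2000:.0%}")
```

With five outcomes to choose from, the chance of at least one spurious "significant" result is roughly 1 - 0.95^5 ≈ 23%, far above the nominal 5% – exactly the inflation that pre-registering a single planned analysis prevents.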
How do you know that your replications accurately reflect the original study?
We always consult the original study authors at the beginning of any Transparent Replications project to get feedback on our proposed replication study design and to ensure we haven't missed anything important about their original study protocol. Using information from our communication with the original authors, along with whatever they have made publicly available about their methods, we aim for a very close match between our replication and the original study.
If there are areas in which our study may diverge from the original study, we explain them in our replication report. If we are unable to be confident in the accuracy of our replication, our report reflects that.
Do the original researchers have a chance to share their opinion on the replication results?
Yes. We provide the original research team with a copy of our replication report before we publish it, and we give them an opportunity to write a response which we publish alongside our report.
How do you make sure the studies you replicate are ethical to re-conduct?
The Replication Project is deeply committed to ethical research practices, and uses several safeguards to protect human subjects participating in our projects.
The Replication Project does not attempt replications of any studies about which we have ethical concerns. Our process is guided by the principles for ethical human subjects research that Institutional Review Boards (IRBs) use to evaluate research projects, and we limit our replications to types of studies considered very low risk to participants – online surveys or experiments involving minimal behavioral interventions with adults who have given prior informed consent. Participants use their own computer or mobile device, and the information collected is not recorded in a manner that would allow subjects to be easily identified.
All human subjects research that The Replication Project conducts is designed to match, as closely as possible, the protocol used in the study being replicated, and all of the original studies this project attempts to replicate received IRB approval. We always consult the original researchers to achieve as much clarity about research procedures, and as much fidelity to the original study design, as possible.
Additionally, all studies The Replication Project conducts receive an independent ethics evaluation before they are conducted. The ethics evaluation is conducted by people who are independent from the project, and who do not conduct any research studies for it, so that they can provide an unbiased review free from conflicts of interest. The ethics evaluator also receives and responds to any complaints about our research practices. The Replication Project makes the contact information for the ethics evaluator available to all participants in our research.
Why do replicators for this project have the option to stay anonymous?
A replication effort can feel like a high stakes situation for the original researchers. Although our aim is to celebrate researchers engaging in good scientific practices, it is inevitable that some studies will fail to replicate. Giving members of the project team the option of anonymity allows them to follow the data where it leads, free from any concern about potential pressure influencing their results.
Our aim is to present our results in a way that is transparent and objective so that the results can be evaluated on their own terms. We make our experimental materials, data, code, and analysis procedures available so that any claims we make can be verified.