Until very recently I had never heard of Open Research and what it meant. I came across an article in The Conversation with a headline that caught my attention: ‘Medical research is broken: here’s how to fix it’. On reading the article I learned about the reproducibility crisis. I was astounded to read that as much as 85% of medical research is ‘wasted’ and that about 50% of medical research is never published (Lloyd & Bradley, 2020). No domain is immune to the reproducibility crisis. For example, in psychology, the Open Science Collaboration attempted to reproduce the findings of 100 studies and found that only 36% could be reproduced (Gilbert & Strohminger, 2015).

When another researcher cannot replicate an experiment, we naturally question the validity of the original results. This has serious implications for research and science, as replicating studies is crucial to verify findings, to advance knowledge and to build a robust evidence-based literature. So how could that be? The relatively high prevalence of questionable research practices (QRPs), such as selectively reporting hypotheses and excluding data post hoc (John et al., 2012), as well as the non-publication of failed studies (‘null results’), all seem to play a role in maintaining the reproducibility crisis. QRPs such as excluding data after checking the impact of doing so on the results can significantly increase the probability of finding evidence in support of a hypothesis, making the study harder to replicate subsequently (John et al., 2012). Although QRPs are problematic and widespread, they are not unequivocally unacceptable; rather, they call for greater transparency in the research process.

The factor I am most intrigued by is the non-publication of null results. A null result is a result that does not support the experimental hypothesis and is therefore difficult to publish, because many perceive such results as less interesting.
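To see why post-hoc exclusion inflates positive findings, here is a minimal simulation sketch (my own illustration, not from John et al., 2012; the sample sizes, exclusion rule and seed are arbitrary assumptions). Both groups are drawn from the same distribution, so every ‘significant’ result is a false positive. Dropping the few observations that most work against the apparent effect, and then testing again, noticeably raises the false-positive rate above the nominal 5%:

```python
import random
import statistics

random.seed(1)

def significant(a, b, z=1.96):
    """Two-sample test at roughly alpha = 0.05 (normal approximation)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se > z

def run_trial(qrp, n=30, k=3):
    # The null hypothesis is true: both groups come from the same distribution.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if significant(a, b):
        return True
    if not qrp:
        return False
    # QRP: drop the k observations in each group that most work against
    # the apparent effect, then test again.
    if statistics.mean(a) >= statistics.mean(b):
        a, b = sorted(a)[k:], sorted(b)[:-k]
    else:
        a, b = sorted(a)[:-k], sorted(b)[k:]
    return significant(a, b)

trials = 2000
honest_rate = sum(run_trial(qrp=False) for _ in range(trials)) / trials
qrp_rate = sum(run_trial(qrp=True) for _ in range(trials)) / trials
print(f"false-positive rate, honest analysis:    {honest_rate:.3f}")
print(f"false-positive rate, post-hoc exclusion: {qrp_rate:.3f}")
```

The honest analysis hovers around the expected 5%, while the post-hoc exclusion strategy produces false positives far more often, even though nothing real was there to find.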
Journal editors tend to publish studies with statistically significant results, which are more likely to be cited and therefore raise the reach and impact of their journals (Fanelli, 2010). Journal editors thus unwittingly contribute to a vicious cycle: researchers are more likely to engage in QRPs to obtain a positive result so that their research has a better chance of being cited and accepted for publication by editors (Wenzel, 2016). Unfortunately, this unhelpful pattern works against the publication of many rigorous, well-conducted studies that yield a null result.
Publishing only successful studies falsely gives the impression that experiments almost always work and that research gets it right every time. Research is usually, by nature, exploratory; some studies will yield positive results, but null results form an integral part of the process of exploration. Many researchers would be willing to spend the time and effort to publish a result that does not support their hypothesis; however, they face the barrier of finding a journal willing to publish it. Many journals and editors steer away from null findings because they are cited less frequently (Mlinarić, Horvat, & Šupak Smolčić, 2017). This means there is a publication bias against the null result, whereby successful studies are more likely to be published. Because null results do not always support what we expected to find, they can be difficult to interpret; nevertheless, they are an integral part of scientific research. The endeavour of research involves keeping track of our unsuccessful attempts and learning from them. As researchers, we reflect on and review what worked and what did not, make the necessary amendments and try again. It takes us many trials to refine and master a new skill. The same is true in science: many trials are often necessary to understand our findings.
Unfortunately, the non-publication of null results means that useful data are kept out of our collective knowledge. Publishing null results helps the scientific community by allowing researchers to build on previous studies. The publication of unsuccessful attempts has the potential to save other researchers’ time. Indeed, a new researcher may unknowingly use a protocol similar to that of a previously unpublished null study and risk repeating the same null result.
I also wonder about the influence of a researcher’s experience and belief system on their willingness to publish their null results. It is conceivable that some researchers fear exposing that their theory had to change in light of null findings, as this could undermine their previous papers. This may be particularly true at the start of a career, when the desire to be published in a renowned journal is highly enticing, which may encourage researchers to hold back from sharing novel hypotheses or to engage in QRPs. Moreover, researchers who publish a null result that challenges the current theoretical understanding of a particular issue may fear the consequences of questioning the status quo. It takes time and resources to publish results that do not support a carefully thought-out hypothesis, and it is understandable that researchers are reluctant to do so when there is little incentive. Furthermore, established and new researchers alike may wish to preserve their status and not publish a null result that contradicts previous findings and may attract ‘the wrath’ of the original researcher (Nature editorial, 2017).
In the field of psychology, progress would not have happened if researchers had not carefully considered why their results went against their predictions. That is, null findings have the potential to move theory forward and challenge current theoretical understanding. For instance, a comprehensive study on candidate genes for depression challenged previous studies that had seemingly demonstrated an association between specific genes and major depression. This study yielded a null result and, because it had been rigorously conducted, it suggested that there was no significant association between the candidate genes and major depression (Border et al., 2019). More recently, the null findings of a study on approximate arithmetic training challenged the claim that such training improves arithmetic fluency in adults, a claim that had been supported by several training studies with positive results (Szkudlarek, Park & Brannon, 2020).
Commitment to open research practices by universities, funders and publishers is starting to have an impact on this issue by raising the visibility of null findings among students and researchers. The Registered Report, a publishing model based on ‘in-principle acceptance’ (a study is accepted for publication before its outcome is known), can help alleviate the publication bias. In-principle acceptance is likely to increase the publication of null results because the criterion for publication is not the result itself but the quality of the research (Soderberg et al., 2020).
To challenge the publication bias, a group of editors from prestigious journals (European Journal of Work and Organizational Psychology, Journal of Business & Psychology, etc.) have committed to publishing null results from rigorously conducted research, as part of a new initiative to enhance the integrity and quality of research (Wenzel, 2016). Perhaps another solution to encourage the scientific community to share their null results would be to publish a summary of all studies in an online ‘null results’ journal, enabling other researchers to access them. An additional solution could be to add a mandatory section in every paper summarising previously known null results, what was learned from them, and how they led to the conclusive study. Pre-registration, a time-stamped record of a study’s design and analysis plan created prior to data collection (https://www.ukrn.org/primers/; Farran, 2020), could also help by encouraging researchers to be transparent about their protocol, data analysis and expected outcomes from the outset.

It takes a level of confidence to publicly admit that, after rigorously conducting a study, our results do not support our experimental hypothesis. I wonder if that ‘admission’ could be eased by re-affirming that research is often exploratory: it would be unrealistic to always expect a positive result. Changing the narrative of null results into a story of learning and progress may support researchers in sharing their null results with the wider scientific community. By taking the focus off the results and moving it onto the quality of the study protocol and how rigorously a study has been conducted, I believe we can reduce the prevalence of QRPs and give null findings their rightful place in the world of scientific research.
I believe that increasing co-operation could also help. Publishing a null result carries the risk that other researchers will use a similar protocol, obtain a positive outcome, and take credit away from the original researcher. A change in culture where collaboration is valued above competition may help that movement. We also need a common framework for the replication of studies, with standardised protocols that include the publication and consideration of null results.
Null findings tend not to be published, which I feel is a waste of resources; publishing them would benefit the wider scientific community. Research at its best is a collective effort, with collaboration between organisations, increased transparency, and less emphasis on competition and the protection of resources.
Border, R., Johnson, E. C., Evans, L. M., Smolen, A., Berley, N., Sullivan, P. F., & Keller, M. C. (2019). No Support for Historical Candidate Gene or Candidate Gene-by-Interaction Hypotheses for Major Depression Across Multiple Large Samples. The American journal of psychiatry, 176(5), 376–387. https://doi.org/10.1176/appi.ajp.2018.18070881
Fanelli, D. (2010). “Positive” results increase down the Hierarchy of the Sciences. PLoS ONE, 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068
Farran, E. K. (2020). Is pre-registration for you? Retrieved from https://blogs.surrey.ac.uk/doctoralcollege/2020/01/06/guest-blog-prof-emily-farran-is-pre-registration-for-you/
Gilbert, E., & Strohminger, N. (2015). We found only one-third of published psychology research is reliable – now what? The Conversation. https://theconversation.com/we-found-only-one-third-of-published-psychology-research-is-reliable-now-what-46596
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Lloyd, K. E., & Bradley, S. (2020). Medical research is broken – here’s how we can fix it. The Conversation. https://theconversation.com/medical-research-is-broken-heres-how-we-can-fix-it-145281
Mlinarić, A., Horvat, M., & Šupak Smolčić, V. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia medica, 27(3), 030201. https://doi.org/10.11613/BM.2017.030201
Rewarding negative results keeps science on track [Editorial]. (2017). Nature, 551, 414. https://doi.org/10.1038/d41586-017-07325-2
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J. G., Singleton Thorn, F., Vazire, S., … Nosek, B. A. (2020, November 16). Initial Evidence of Research Quality of Registered Reports Compared to the Traditional Publishing Model. https://doi.org/10.31222/osf.io/7x9vy
Szkudlarek, E., Park, J., & Brannon, E. (2020). Failure to replicate the benefit of approximate arithmetic training for symbolic arithmetic fluency in adults. Cognition, 207, 104521. https://doi.org/10.1016/j.cognition.2020.104521
Wenzel, R. (2016). Business journals to tackle publication bias, will publish ‘null’ results. The Conversation. https://theconversation.com/business-journals-to-tackle-publication-bias-will-publish-null-results-52818
Written by Badri Bechlem