Upping our game in Open Research – joining the UK Reproducibility Network

If you are reading this, you are probably a researcher sitting somewhere along the experience spectrum – from novice to grand master – of how to do research. So, how did you learn to ‘do research right’? How do you know you do research right? If you inhabit the ‘novice’ end of the spectrum, are you confident that you are learning how to do research right?

So, what is ‘right’ exactly? Well…your research needs to be conducted correctly, with well-formulated objectives/hypotheses, selecting and correctly deploying the right methods, and interpreting the data without experimenter bias – all within an appropriate ethical and legal framework, both for your research team and for its subjects. Easy to see the light burning bright on the hill – not so easy to reach it!

The Concordat to Support Research Integrity and Public Trust in Research

Research integrity is a huge topic, much of it the subject of Universities UK’s Concordat to Support Research Integrity, first released in July 2012 under David Willetts as Minister for Universities and Science. The concordat is based on five very sensible commitments: maintaining the highest standards of rigour and integrity overall; ensuring appropriate ethical, legal and professional frameworks and standards; embedding a culture of research integrity; dealing effectively with misconduct; and working together to strengthen the integrity of research.

The topic of research integrity plays out every day. We are the privileged beneficiaries of public funds – funds that might otherwise be spent on roads, hospitals or social services, but which we instead get to spend on research. When the public confers this prerogative upon us, it does so assuming that we are trustworthy and capable, and that the research results we generate can be trusted as correct and meaningful. Such public trust is by no means a given: denial of anthropogenic climate change, falling vaccination rates, and rejection of the notion that the Earth is older than the Bible suggests are all manifestations of a lack of public trust in science. The House of Commons Science and Technology Committee’s Research Integrity report, published on 11 July 2018, expresses similar concerns. It takes a detailed look at researchers in the UK – largely in the university sector – and finds we come up short, particularly on the independent verification of whether a research institution has followed appropriate processes in investigating misconduct.

Getting it Wrong

When we think of research going wrong we often think of wilful misconduct – but there are many ways it can misfire, with as many consequences. Indeed, a 2018 news piece in Science points out that fraud accounts for somewhat less than half of all retractions.

[Taken from: “What a massive database of retracted papers reveals about science publishing’s ‘death penalty’”, J. Brainard and J. You, Science, Oct. 25, 2018.]

 

Sometimes, it’s the small stuff. We regularly see papers in which the quoted precision is not supported by the data – the attenuation coefficient is reported as 4.678 dB when we know it can only be measured to two significant figures, 4.7 dB. Graphs without error bars, images without scale bars, and so on – these are sloppy misdemeanours. But the obvious question then follows: why should I give credence to the paper overall if such carelessness is clearly evident in what I can readily observe? On what basis should I grant the paper’s findings the benefit of the doubt?
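If you generate your reported numbers programmatically, keeping the quoted precision in line with the measurement precision is straightforward. Here is a minimal Python sketch of the idea – the helper name round_to_sig_figs is purely illustrative, not from any particular library:

```python
from math import floor, log10

def round_to_sig_figs(value: float, sig_figs: int) -> float:
    """Round a measured value to the number of significant figures it actually supports."""
    if value == 0:
        return 0.0
    # Exponent of the most significant digit, e.g. 4.678 -> 0, 0.0467 -> -2
    exponent = floor(log10(abs(value)))
    decimals = sig_figs - 1 - exponent
    return round(value, decimals)

# A quantity measurable to only two significant figures should be reported that way
print(round_to_sig_figs(4.678, 2))  # prints 4.7, not 4.678
```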

Sometimes, of course, we just get it wrong. Here’s an example of mine, from my early days of supervising PhD students, when we thought we had found a new way to create pulses of light of selectable colour by injecting a small amount of continuous light to set them off. The results looked great. We sent it off for publication. The reviewer wrote our invention was a ‘kluge of dubious utility’. I had to look up what a kluge was! ‘Why so negative?’ I pondered. We rebutted – successfully. The paper was on my desk as a galley proof when the penny dropped: we had been looking only at time-averaged traces; the pulses were not all the same — each one was of a random strength, making the idea worthless. So at the last minute I withdrew the paper – and it was never published. A narrow escape!

There are other instances of ‘getting it wrong’, which you might not even realise you’re doing or do not even perceive as wrong. We want to see patterns in our data and, as humans, we sometimes introduce bias. That is, we might focus on the variables which support our hypothesis and ignore those that don’t (confirmation bias), or we perceive the outcome as something we predicted all along (hindsight bias). These (often unintentional) biases might encourage us to partake in questionable research practices such as HARKing (Hypothesising After the Results are Known) or p-hacking (fishing through data and analyses until the desired results are reached – have a look at http://shinyapps.org/apps/p-hacker/ to see how easy it is to get statistical significance from noise!). All of these practices lead to false positives and to research that is not reproducible.
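To make the point concrete, here is a small, hypothetical Python simulation (using numpy and scipy rather than the shiny app linked above): if you test enough noise-only comparisons, something will eventually cross the p < 0.05 threshold, and reporting only that comparison is p-hacking.

```python
# Toy illustration of p-hacking: compare many arbitrary "outcome variables"
# that are pure noise against a pure-noise control group, then keep only
# the comparisons that happen to reach p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_participants, n_outcomes = 30, 20

false_positives = []
for outcome in range(n_outcomes):
    # Both "groups" are drawn from the same distribution: there is no real effect.
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_participants)
    control = rng.normal(loc=0.0, scale=1.0, size=n_participants)
    _, p_value = ttest_ind(treatment, control)
    if p_value < 0.05:
        false_positives.append((outcome, round(p_value, 3)))

print("'Significant' findings from pure noise:", false_positives)
# With 20 comparisons at a 5% threshold, about one spurious "hit" is expected
# on average; reporting it without the other 19 tests is p-hacking.
```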

So, we might just be sloppy, we might be a bit biased, we might get something just plain wrong, or we might wilfully do something wrong. How much of this is going on? How reproducible are our results overall, and what are we doing about it?

Too many unreproducible results

A 2015 Science article on the reproducibility of psychological science stated, ‘Reproducibility is a defining feature of science, but the extent to which it characterises current research is unknown.’

The field of psychology research has been in the vanguard of criticism and self-examination and, as you can see from the figure below, the results do not look great. In a large-scale collaborative effort to obtain an initial estimate of the reproducibility of psychological science, replications of 100 published experimental and correlational studies were conducted and showed dramatically poorer outcomes than the originals.

[Taken from: Aarts AA, Anderson JE, Anderson CJ, Attridge PR, Attwood A, Axt J, et al. Estimating the reproducibility of psychological science. Science. 2015; 349(6251).]

 

But psychology is not alone. In the field of randomised clinical trials, Kaplan and Irvin studied trials conducted between 1970 and 2012 evaluating drugs or dietary supplements for the treatment or prevention of cardiovascular disease. They reported in PLOS ONE a dramatic fall in the proportion of trials showing a benefit once transparent, pre-registered reporting was mandated.

[Taken from: Kaplan RM, Irvin VL. Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time. PLOS ONE. 2015; 10(8).]

 

Another worrying trend is the emergence of image manipulation as a major cause of retractions: Bik et al. found problematic images in nearly 4% of the published papers they examined. (Full details on retracted papers in general can be obtained from Retraction Watch.)

The community response: COPE, Open Research, UK Reproducibility Network

The Committee on Publication Ethics (COPE) is an organisation that began in the late 1990s with the remit of supporting publishers in promoting integrity in research and its publication. Today it offers a wide array of guidance, processes and flowcharts. I recommend you review its resources, especially should you have occasion to question the integrity of a piece of published work.

Open Research (sometimes referred to as Open Science) seeks to address such issues, recognising that openness is at the heart of a strong scholarly research culture: it increases reproducibility, lowers barriers to accessing knowledge, and enables us to build on each other’s work. Openness refers to:

  • making the research results accessible to the widest possible audience;
  • making methods such as computer software available and re-usable; and
  • making data available for scrutiny and re-use.

Under the Concordat on Open Research Data, championed by a large group of UK organisations led by the funder UKRI, signatories are “strongly committed to opening up research data for scrutiny and reuse, to enable high-quality research, drive innovation and increase public trust in research.” Organisations such as the Wellcome Trust are also leading the way.

The UK Reproducibility Network (UKRN), launched in March 2019, is a researcher-led, peer-to-peer consortium that originated in the psychological sciences. Its aim is ‘to ensure the UK retains its place as a centre for world-leading research by investigating the factors that contribute to robust research, providing training and disseminating best practice, and working with stakeholders to ensure coordination of efforts across the sector.’

UKRN has a coordinating steering group at its centre. At the time of writing, local network leads already exist at 43 UK institutions. The UKRN supports these local networks by providing resources, updates and opportunities for exchanging knowledge and ideas across them. Importantly, the UKRN also provides a conduit between stakeholders (research organisations, publishers, professional organisations) and researchers, including through various developing initiatives on best practice. The next step is to bring in the institutions themselves as formal members of UKRN. Watch this space for Surrey’s involvement in this initiative.

Getting it right: What are we doing at Surrey?

At Surrey, our commitment to a strong and healthy research and innovation culture features prominently in our new Research & Innovation Strategy. Excellence and Integrity are two of our five core organisational values. A commitment to both of these is vital in both conducting and presenting our research.

We already have a well-established Research Integrity and Governance Office (RIGO) along with a set of accompanying policies. Just a few months ago, the University Research & Innovation Committee and Senate approved ambitious two-year plans to revamp and improve our overall research integrity framework, with Dr Ferdousi Chowdhury, Head of RIGO, leading.

Our Doctoral College provides leadership in the training of researchers, co-delivered with colleagues in the faculties, who guide and model best practice in specific disciplinary and research centre contexts. We make our expectations about integrity and excellent practice clear, and we monitor carefully to ensure our values are visible in our actions. Specifically, we co-deliver, with the Library Open Research team, workshops on:

  • Becoming an open researcher; and
  • Data management plans;

alongside offering one-to-one advice and support to enable researchers to create discipline-appropriate strategies for sharing and disseminating their findings.

What more can we do? Lots!

Our new Director, Dr Kate Gleeson, has big plans to support researchers in taking a more proactive approach to responsible research. For example, in future, postgraduate researchers will create an impact strategy at the outset of their research, including a clear direction for data management and open science, to be monitored and modified throughout the life of the project.

Initiatives on reproducibility are already underway. Are you aware of the Surrey Reproducibility Society? Have you been to the ReproducibiliTEA journal club at Surrey Hive yet?

We can act now by signing the Concordat on Open Research Data and joining the UKRN, which means appointing an academic lead in this area – so watch this space.

We must all keep working to engender trust and openness within the Research & Innovation community and to create a culture of excellence, openness and transparency. Let’s see more people recognising that questionable research practice is a threat to reproducible science, and adopting better methods. And in the event that questionable practice does occur, whether in research methods or in behaviour, our researchers must have the confidence to speak up, knowing that they will be heard and supported and that action will be taken. Our performance as an institution depends on the behaviour of each and every one of us.

Special thanks for contributions from Emily Farran, Kate Gleeson and Kris Henley.

As ever, thanks for reading.