Colleagues will be familiar with the feeling of excitement that can accompany receipt of the first responses from volunteer research participants for a new project. Such feelings, though, quickly turned to something else altogether for us in January this year. An encouraging initial trickle of early responses to our online advertisement, for men willing to talk to us about their experiences of hair loss, soon became a mini-deluge: well over a hundred emails across a couple of days, most of which were uncannily similar to one another.
Concern quickly shifted to the practical challenge of how to respond and, especially, how to avoid losing genuine participants amidst such a large number that appeared to be inauthentic. We began to reply to responders with an information sheet, consent form and request for further information, in the hope that only genuine participants would reply. To our surprise, several we had regarded as questionable replied to the request within a few hours with completed consent forms. What we had initially assumed were bot/AI responses, it seemed, were sometimes connected to human responders ready to fill out forms and participate in an interview.
Thanks to invaluable conversation with colleagues at Surrey and elsewhere, and a small number of published works they have pointed us towards, it has become clear that what we refer to here as inauthentic research participants are a rapidly growing problem in qualitative research. Key here are the monetary incentives offered by many research projects, and the shift towards online approaches to recruitment and data collection. While the opportunities offered by such online approaches are many, they also have made qualitative projects more vulnerable to those seeking to harvest financial incentives.
We offer here some brief early reflections and tentative lessons from our experience.
Where/how to advertise
Our aforementioned experience had followed advertisement of our Journeys of Hair Loss project on social media, including mention of a £30 ‘thank you’ voucher. It was particularly when the ad was posted in fully public spaces such as Twitter that the trickle of responses seemed to become a torrent. This has made us more cautious: we now focus subsequent advertising mostly on more ‘private’, subscriber-only online communities, and we omit details of the £30 voucher from the main body of our post (in the hope this makes it less likely to be picked up by bots) while leaving it visible in the recruitment poster image we attach.
By looking carefully for patterns in the communications we have received, and attending to observations made by others, we arrived at a number of indicators that, in combination, have helped us identify responses likely to be inauthentic. Aligning partially with experience recently reported elsewhere, these were the most important warning signs in our own experience:
- Title and content: short, unspecific and very similar in wording to others (e.g. ‘I am happy to be involved in your study’) and/or directly reproducing phrases from the recruitment ad (e.g. ‘I have experience of hair loss’) with no other detail
- Timing: email received in same time-window (about 48 hours) as many others, soon after posting of the advert. Responses to requests for consent form completion and further information seemed uncharacteristically fast.
- Email address: typically this would be a gmail address, or similar, consisting of name/s followed by a 2-3 digit number – along the lines of firstname.lastname123@example.org
- Responses to requests for further information: responses typically included the filled-in consent form but either ignored requests for further information, or provided short answers that lacked detail and, sometimes, indicated questionable knowledge of the phenomenon being researched.
Importantly, these criteria are far from perfect, and we found ourselves weighing the extent and combination of these factors rather than treating any one as decisive. It is essential to bear in mind that some genuine participants can be reluctant to provide much detail via email, for example, and that this could particularly apply to participants from harder-to-reach groups. Genuine participants may also have email addresses in the format described, or respond quickly, and so on. Nevertheless, where we judged responses to clearly exhibit most or all of the criteria above, we flagged them as likely inauthentic and have declined to engage further for the time being. This has allowed us to proceed promptly with participants we judged likely to be authentic while protecting the integrity of the project. To date, we have completed 16 interviews for the project and are fully confident in the integrity of this data, though we remain vigilant and are ready to consider excluding data in the future should we feel the need to.
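For readers curious how such a combination of indicators might be operationalised, the screening logic above can be sketched in a few lines of Python. This is a minimal illustration only: the field names, phrase list and threshold are our own hypothetical choices for the example, and our actual screening was a manual judgement rather than anything automated.

```python
import re
from dataclasses import dataclass

@dataclass
class Response:
    subject: str
    body: str
    sender: str
    hours_since_ad: float          # how long after the advert the email arrived
    gave_detail_on_followup: bool  # did they answer requests for further information?

# Hypothetical phrases lifted verbatim from a recruitment advert
AD_PHRASES = ["happy to be involved in your study", "i have experience of hair loss"]

def suspicion_score(r: Response) -> int:
    """Count how many of the four warning signs a response exhibits (0-4)."""
    score = 0
    text = (r.subject + " " + r.body).lower()
    # 1. Short, unspecific wording that echoes the advert
    if len(r.body.split()) < 25 and any(p in text for p in AD_PHRASES):
        score += 1
    # 2. Arrived in the same burst as many others (within ~48 hours of posting)
    if r.hours_since_ad <= 48:
        score += 1
    # 3. Free-mail address of the form name(s) followed by a 2-3 digit number
    if re.match(r"^[a-z]+([._][a-z]+)*\d{2,3}@(gmail|yahoo|outlook)\.com$",
                r.sender.lower()):
        score += 1
    # 4. Ignored, or gave thin answers to, requests for further information
    if not r.gave_detail_on_followup:
        score += 1
    return score

def flag(r: Response, threshold: int = 3) -> bool:
    """Flag as likely inauthentic only when most indicators co-occur."""
    return suspicion_score(r) >= threshold
```

The key design point, mirroring the caveat above, is that no single indicator flags a response on its own; only the co-occurrence of most of them does.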
Dissuading inauthentic participants
We have also taken some further measures to help dissuade inauthentic participants from continuing to engage, and to help us verify cases whose authenticity is ambiguous. Cognisant of Roehl and Harland’s experience of imposter participants declining to use their cameras during online interviews, we now request that participants have their cameras on. We have also introduced and highlighted the possibility of a short pre-interview with participants, should we feel the need to further verify eligibility. Participants, meanwhile, are being asked for more information, including on the circumstances that prompted them to respond, their location and where they saw the advertisement for the study. As a further deterrent, we added a statement to the project information sheet to the effect that we reserve the right to withhold the thank you voucher if we believe a participant to have misrepresented their eligibility. An amendment to our original submission for ethical approval was completed and approved in order to make these changes.
In this study, we relied on email to keep the recruitment process straightforward. In future, however, it may become increasingly valuable to filter volunteers more systematically via survey software. Qualtrics, for example, has a bot-checker and also enables researchers to check IP addresses and verify participants’ location, as our colleague Ranjana Das has helpfully shown us.
Balancing the protection of research integrity with engaging participants in a welcoming, open and trusting manner seems likely to become an increasingly delicate task for qualitative researchers. It is important to bear in mind that the experience and approach described here reflect our need to develop, quickly, an effective response to a development we had not foreseen, and we will continue to reflect on the pros and cons of these measures. In future, building a careful, considered approach to such issues should become a key consideration during initial research design and planning.
It is encouraging to see the initial development of a literature on such questions, including the useful pieces we have linked to above. We look forward to many further contributions as experiences, conversations and solutions develop.
Many thanks to our colleagues Rob Meadows, Christine Hine and Ranjana Das for support, conversations and suggestions in relation to this. Thanks also to Trisha Greenhalgh, Brady Robards and others for instigating valuable broader conversations about inauthentic volunteers in qualitative research.
Please note that articles published on this blog reflect the views of the author/s and do not necessarily reflect those of the Department of Sociology.