Opening up the Qualitative AI conversation with our free global symposium
Christina Silver, PhD, SFHEA, FAcSS. Associate Professor (Teaching), Director of the CAQDAS Networking Project, Department of Sociology, University of Surrey
Just over a year ago OpenAI released ChatGPT to the world, and qualitative researchers immediately began discussing the implications for analysis. Observing these discussions, it was obvious that views were polarised. Many cautions and concerns were evident, with ethical fears around data security at the forefront. Conversely, some were clearly "hook, line and sinker" on the bandwagon, highlighting claims about the time-saving benefits of using AI for qualitative analysis. Some established qualitative software developers very quickly harnessed these new generative-AI technologies, integrating them into their programs, but initially in very different ways. We also quickly saw many new players emerge in the space: some developing AI tools for qualitative analysis within academia, some designing for market researchers, and yet more focusing on the commercial sector.
But it was the hype-versus-horror narratives that got me thinking. In 1989 Nigel Fielding, Ray Lee and Nigel Gilbert organised a conference on the then-emerging software for qualitative analysis. That led to the founding of the CAQDAS Networking Project five years later. Ever since, there has been controversy, misinformation, and debate about the role of digital tools for qualitative analysis. Although much of that debate has subsided in recent years, as the use of software has become more accepted and accessible, we are now seeing renewed debate on a big scale.
So after investigating and experimenting with these new tools myself, I began a series of posts on my own blog, aimed at clearing up some of the misunderstandings I was seeing. Shortly afterwards, Graham Farrant, CEO of the Social Research Association (SRA), and I decided to organise a symposium on the topic. This partnership allowed us to reach a broad range of social researchers on a global scale: students and academics, government social researchers and market researchers. The recordings of both parts of the symposium are available on our playlist.
Part 1 was on 24th November 2023 and began with presentations from five software developers, each addressing the same question: the what, why and how of qualitative AI in their tool. To reflect the diversity and breadth of the field, I wanted representation from new tools developed in response to generative AI, established CAQDAS packages integrating the capabilities of large language models, and others that harness non-generative AI. There are many different definitions of "artificial intelligence" and its relevance to qualitative analysis, but this post is not the place to go into them. Suffice to say, CAQDAS packages that employ machine learning, topic modelling, text mining, and other information-retrieval techniques, including sentiment analysis, were also on my radar for the symposium.
Presentations from the developers of CoLoop, DiscoverText, Leximancer, MAXQDA, and WordStat opened the symposium. By no means are these the only options out there, but they do represent the variety. And these presentations bring into focus the epistemological underpinnings of the development of digital tools in the qualitative research space. One thing that qualitative research is not is homogeneous, and these presentations illustrate that. After facilitating discussions with the developers, I wanted a broader discussion about the methodological implications of these technological developments. One of my long-standing interests is the interplay between method and tools, so the next session was a must when planning the symposium.
Discussing methodological implications required active qualitative researchers with an understanding of methods and their theoretical underpinnings, as well as practical experience of using tools. And not just recent experience, but voices that transcend the years. There are just a handful of people who fit that remit, and luckily the two I invited said yes straight away. Both have the breadth and depth of expertise I was looking for, and both are currently actively involved in developing qualitative software (different products from those showcased in our developer presentations): Dr Silvana di Gregorio (Product Research Director and Head of Qualitative Research at Lumivero, who develop NVivo), and Dr Susanne Friese (author of the book Qualitative Data Analysis with ATLAS.ti and founding director of Qeludra, which is currently developing a new AI tool for qualitative analysis). Dr Sarah L. Bulloch, who works alongside me at the CNP, led the conversation. She has many talents, among them superb facilitation skills, and so it was a great discussion; I only wish there had been more time to listen to these three specialists converse.
Part 2 was on 1st December and was about the practicalities of doing qualitative analysis using AI tools. What are researchers actually doing? Again I wanted breadth as well as depth, in terms of tools, methods, and sectors. We approached many researchers who have been using AI and were delighted that David Morgan (Emeritus Professor at Portland State University), Heidi Hasbrouck and Deana Kotiga (Ethnography Centre of Excellence, Ipsos UK), and Steve Wright (independent CAQDAS trainer) agreed to share their experiences of using generative and non-generative AI. If you're interested in how academics, market researchers and teachers of qualitative analysis are using AI, this is the session not to miss.
Something that characterises discussions about AI, whether in the public consciousness or among qualitative researchers, in relation to the generative AI that's developing before our eyes or the potential of generalised AI in the future, is the issue of ethics. So I knew we needed to attend to this directly, and the symposium concluded with dedicated space for this important topic. Again, including a range of voices was the name of the game when inviting speakers. This time it was a more formal panel-style discussion, with Prokopis Christou (University of Cyprus) providing an academic voice, Sylvie Hobden a voice from government (Centre for Data Ethics and Innovation), and Donna Phillips the agency perspective (Verian). And another great facilitator, in the form of Isabella Pieria, Head of Qualitative Methodology at Ipsos.
Although they began with the ethics question, we didn't only want to hear about those very important considerations. AI is here to stay, and just as schools need to teach their pupils how to navigate the internet, social media, and now AI responsibly, qualitative researchers need to recognise the reality of AI's presence and seize the opportunities it affords, as well as consider the challenges and ethics of using it appropriately. This is important so that the quality of what we do remains front and centre. These topics are what Isabella discussed with Sylvie, Donna and Prokopis, and it was a fitting way to bring the symposium to a close.
There were many more questions from our 1000+ delegates than we could attend to live, so now our task is to work out a productive, dynamic and creative way to respond to those comments and keep the conversation going. Watch this space for more on that.
And do get in touch if you have ideas for events we can organise around these topics. Nigel Fielding and Ray Lee founded the CNP in 1994 to foster informed debate about the role of technology in qualitative analysis. The need for that remains as pressing as ever, so join the conversation with us and qualitative researchers around the world.