Guest Post by Susanne Friese, Director of Qeludra B.V. and developer of QInsights.
Qualitative research is entering a transformative era, driven by the integration of artificial intelligence (AI). This shift challenges traditional practices, including the long-standing reliance on coding as the foundation of qualitative data analysis. While some may view this change with scepticism, others see it as an opportunity to rethink and reimagine methodologies. In this post, I explore how AI is reshaping the field—moving researchers beyond coding to more dynamic and dialogic approaches to data analysis. Drawing from my own experiences and experiments with AI-based tools, I discuss the potential, the challenges, and the exciting possibilities this paradigm shift brings to the future of qualitative inquiry.
Guest posts express the views of guest contributors and do not necessarily represent the views of the CAQDAS Networking Project or constitute an endorsement of any product, method or opinion.
The Dawn of a New Era in Qualitative Research
I have been exploring the integration of AI into classic CAQDAS tools, emerging AI-based qualitative analysis platforms, and conversational AI models like ChatGPT and Claude for qualitative data analysis. Based on these experiments, my prediction is that the qualitative research community is on the brink of a profound transformation. My hypothesis is that, in the future, qualitative researchers will move away from coding data entirely. This vision has generated both excitement and resistance, as researchers navigate the implications of adopting AI-assisted methodologies.
The Evolution of Coding in Qualitative Research
Historically, coding has served as the cornerstone of qualitative analysis, providing a structured approach to categorizing and interpreting data (Adorno et al., 1950; Becker and Geer, 1960). From early manual techniques to the advent of CAQDAS software, coding has evolved into a systematic, widely accepted methodology. Yet it is essential to distinguish between “tagging” data—a mechanical process of labelling segments—and coding as a methodological act. Coding is an iterative process in which the researcher moves back and forth through the material, adjusting code labels and rearranging coded segments until a stable code system has been developed. This nuanced act has been central to many approaches to qualitative analysis (Bazeley, 2013; Braun & Clarke, 2006; Friese, 2019; Richards, 2009; Saldaña, 2021; Stevens, 2023; Strauss & Corbin, 1990).
My prediction is that, with AI, the need to tag data and to rely on classic CAQDAS software is poised to become obsolete. This claim, admittedly bold, suggests that while researchers can still categorize data if their analytical approach demands it, the methods for achieving this are fundamentally shifting. For example, instead of attaching a code label to every segment, researchers can engage in dialogue with an AI assistant and ask questions about the data.
Example of Dialogical Analysis
Here’s an example from my thesis, a study of impulse buying and its extreme manifestation, shopping addiction, for which I interviewed 57 consumers. Using an AI assistant, I could ask questions such as: “How did the respondents describe their impulse buying experiences?”
The Answer:
Impulse buying is often characterized by a lack of concern for budget and financial constraints, driven by emotions such as excitement, boredom, or a need for a mood boost. Many find themselves purchasing items spontaneously, whether it’s clothes, items for children, or even larger purchases like vehicles, without prior planning or deliberation. Sales and special offers frequently trigger these unplanned purchases, leading to a mix of regret and satisfaction. While some feel a high from spending, others experience guilt or a sense of achievement, despite sometimes buying items they already own. The impulse to buy can be strong, often resulting in purchases that are later hidden or rationalized as fulfilling a need or saving money.
In my study, I identified three distinct shopper types. The summary answer above reflects responses across all participants, but clear differences already emerge. For instance, some respondents reported feeling guilt after an impulse purchase, while others described a sense of achievement. To delve deeper into how each shopper type perceives impulse buying, I can ask the AI assistant to provide insights categorized by shopper type.
Commonalities across shopper types were:
- Emotional Drivers: All three types of shoppers—addicted, utilitarian, and compensatory—experience impulse buying as driven by emotional states. Whether it’s boredom, a desire for change, or feeling ‘fed up’, emotions play a significant role.
- Spontaneity: Impulse buying is characterized by a lack of prior planning across all groups. Each type engages in purchases that are often unplanned and spontaneous.
- Regret and Guilt: Many shoppers, regardless of their category, report feelings of regret or guilt after impulse purchases, indicating a common awareness of the consequences of their actions.
Differences were:
- Motivation: Addicted shoppers often buy impulsively due to cravings similar to addiction, while utilitarian shoppers may act on previously considered items when a good deal arises. Compensatory shoppers are motivated by emotional states and the enjoyment of unexpected finds.
- Perception of Value: Utilitarian shoppers tend to value planned purchases more and may find less satisfaction in impulse buys compared to those who prioritize immediate gratification, like addicted and compensatory shoppers.
- Post-Purchase Behaviour: Addicted shoppers may hide purchases or justify them, while utilitarian shoppers like Shirley prefer to return items they regret. Compensatory shoppers view impulse buying as playful and mood-enhancing, often embracing the spontaneity without much concern for returns.
Hence, while addicted, utilitarian, and compensatory shoppers share some common emotional and behavioural traits regarding impulse buying, their motivations, perceptions of value, and post-purchase behaviours highlight distinct differences that define their shopping experiences.
If I need more detail, I can always ask follow-up questions. Using a purpose-built tool for qualitative analysis, such as QInsights, I can also verify results by referring back to the original source files. This approach bypasses the traditional process of coding and retrieval, replacing it with a more direct, interactive dialogue.
For my dissertation research, I coded all 57 interviews. One of the codes was “definition of impulse buying”, which I applied 169 times. This allowed me to retrieve all the data segments tagged with this code, sift through them, and further refine my analysis.
The coding process has often been described as segmentation—breaking data into smaller parts—which must later be reassembled into a cohesive narrative, establishing relationships between codes and coded segments. Asking questions through an AI assistant, however, feels less like cutting and pasting and more like engaging directly with the narrative of the respondents’ stories, maintaining their context and depth.
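For readers who want a feel for the mechanics behind such a dialogue, here is a minimal sketch in Python of what a single question-and-answer turn could look like when scripted against a general-purpose chat model. It is purely illustrative: the folder name, model choice, and prompt wording are my assumptions, the OpenAI client is used only as a stand-in for whichever assistant you prefer, and none of it describes how QInsights or any other purpose-built tool works internally.

```python
# Illustrative sketch only: folder name, model, and prompt wording are assumptions,
# and this does not describe how QInsights or other purpose-built tools work internally.
from pathlib import Path

from openai import OpenAI  # used here as a stand-in for any conversational AI assistant

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

# Gather the interview transcripts that form the analytic context.
transcripts = "\n\n".join(
    path.read_text(encoding="utf-8")
    for path in sorted(Path("interviews").glob("*.txt"))  # hypothetical folder of .txt files
)

question = "How did the respondents describe their impulse buying experiences?"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting with qualitative data analysis. "
                "Answer only on the basis of the interview material provided."
            ),
        },
        {
            "role": "user",
            "content": f"Interview material:\n{transcripts}\n\nQuestion: {question}",
        },
    ],
)

print(response.choices[0].message.content)
```

In practice, 57 full interviews will rarely fit into a single prompt, so purpose-built tools add chunking, retrieval, and source-linking behind the scenes; that plumbing is also what makes it possible to trace an answer back to the original files.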
AI as a Catalyst for Change
From the above example, we can see that generative AI introduces a new way of interacting with qualitative data—through dialogue rather than traditional coding. This conversational approach enables researchers to engage with their data dynamically, asking questions and receiving contextualized answers. Such interaction offers rich opportunities for deeper insights without the necessity of codes.
This shift has not been without its critics. Detractors argue that AI-driven analyses lack transparency and replicability. Transparency, they claim, can only be achieved if a clearly defined coding system is provided, enabling other researchers to replicate the study. With an AI assistant, critics point out that we don’t fully understand how answers are generated.
However, I would argue that this concern applies equally to human researchers developing a coding system. The thought process behind creating a code label, deciding which data segments to assign it to, or modifying code labels and segment lengths is rarely transparent. These are subjective tasks intrinsic to coding, yet as external evaluators of a study, we typically only see the end result—the code system. Even when validated through inter-coder agreement, this process remains a “black box,” offering no insight into the decisions and motivations behind the scenes.
I would argue that AI-assisted analysis can actually offer greater transparency than traditional coding. Unlike conventional methods, where the researcher’s decision-making process remains largely hidden, I can share the complete conversation history with the AI assistant. This record provides a detailed account of the iterative interactions between the researcher, the AI, and the data. In many ways, this documented dialogue offers a richer and more transparent view of the analytical journey than a final code schema ever could.
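If you script such a dialogue yourself rather than working inside a dedicated tool, keeping that record is straightforward. The sketch below, under the same illustrative assumptions as before, appends each question-and-answer turn, together with an optional researcher note, to a JSON Lines file that can later be shared as part of the audit trail; purpose-built tools typically store this history for you.

```python
# Illustrative sketch only: a simple audit trail for a scripted dialogue.
# Purpose-built tools typically store this conversation history automatically.
import json
from datetime import datetime, timezone

LOG_FILE = "analysis_dialogue.jsonl"  # hypothetical file name


def log_exchange(question: str, answer: str, note: str = "") -> None:
    """Append one question-and-answer turn, with an optional researcher note."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "researcher_note": note,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")


# Example: record the exchange quoted earlier in this post.
log_exchange(
    question="How did the respondents describe their impulse buying experiences?",
    answer="Impulse buying is often characterized by a lack of concern for budget ...",
    note="Follow up: break the answer down by shopper type.",
)
```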
Dismissing or overlooking the potential of AI in qualitative research may stem from two key factors. First, the industry has been flooded with false promises from some players, claiming that AI delivers fully automated, instant answers with the press of a button. Second, I believe there is a lack of hands-on experience with AI assistants, leading to misconceptions about how they can be effectively integrated into qualitative research.
My recommendation is: embrace the challenge of working with an AI assistant and immerse yourself in the analytical process. Experimentation is essential—not only to learn how to use AI effectively but also to ensure it is applied ethically. As Freeman (2016) wrote, you only learn analysis by doing analysis. While she wasn’t referring to AI, her point still holds. Qualitative analysis is a skill that improves through practice, and the same applies to incorporating new tools into your workflow.
You may wonder why we should change something that has worked for decades and has a proven track record. The reasons are compelling:
- Saving time: Ethical, responsible analysis does not mean automating the entire process, but collaborating with an AI assistant can significantly speed up your workflow.
- Enhancing the quality of analysis (hear me out!): Imagine leveraging an AI assistant trained on centuries of accumulated human knowledge, dating back to Aristotle. Even if you don’t accept the interpretations it offers, an AI assistant can spark new ideas, helping you see patterns or insights you might otherwise have missed. It can easily provide multiple perspectives on a word, a data segment, or potential contexts—perspectives that might not emerge from your own training or experience, or that your memory might fail to recall.
- Driving innovation: Integrating AI into qualitative research allows us to push the boundaries of the field. By adapting traditional methodologies, we can uncover new possibilities while staying true to their core principles. Furthermore, AI opens the door to entirely new methodologies that were previously unimaginable, enabling researchers to ask different questions, explore new dimensions of data, and advance the field in groundbreaking ways.
Ultimately, the value lies in how you choose to work with an AI assistant. It’s a tool to expand your thinking, not replace it.
Conclusion: Embracing the Shift
As the qualitative research community navigates this paradigm shift, the focus should not be on whether AI will replace coding but on how it can expand our analytic toolkit. Tools like QInsights exemplify this potential, fostering collaboration between researchers and AI to achieve rigor, quality, and innovation.
The future of qualitative analysis lies in our willingness to adapt and experiment. By embracing AI as a co-intelligent partner, we can transform challenges into opportunities and chart a course for methodological advancements that honour both tradition and innovation. This transformation is not about losing the art of qualitative research but redefining it for a new era.
AI use: This post was drafted by the author. AI was used to correct or improve the written text and then edited again by the author.
References
Adorno, T.W., Frenkel-Brunswik, E., Levinson, D.J. and Sanford, R.N. (1950). The Authoritarian Personality. New York: Harper.
Bazeley, P. (2013). Qualitative Data Analysis: Practical Strategies. SAGE Publications.
Becker, H.S. and Geer, B. (1960). Participant observation: The analysis of qualitative field data. In: Adams, R.N. and Preiss, J.J. (eds). Human Organization Research. Homewood, IL: Dorsey Press.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Freeman, M. (2016). Modes of Thinking for Qualitative Data Analysis. Routledge.
Friese, S. (2019). Qualitative Data Analysis with ATLAS.ti. London: SAGE Publications.
Richards, L. (2009). Handling Qualitative Data: A Practical Guide (2nd ed.). SAGE Publications.
Saldaña, J. (2021). The Coding Manual for Qualitative Researchers (4th ed.). SAGE Publications.
Stevens, P.A.J. (2023). Qualitative Data Analysis: Key Approaches. London: SAGE Publications.
Strauss, A., & Corbin, J. (1990). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. SAGE Publications.