Guest post by Deana Kotiga, Freelance Cultural Researcher and Strategist
AI DISCLAIMER: A note before you begin: this article, which argues that large language models cannot do what human ethnographers do, was written with the help of one. The structure was workshopped with an LLM. Some sections were drafted collaboratively, then pulled apart and rebuilt. The irony is not lost on me. But I’d argue it proves the point rather than undermining it: the ideas are mine, the lived experience is mine, the diving analogy is mine, the figs are definitely mine. What the AI provided was scaffolding. What it could not provide was the Croatian summer, the stranger’s house, the silence I had to learn to stay in, or the third point I couldn’t quite reach. Those gaps, and what lives in them, are the whole argument.
Summer 2023 felt criminally good.
It was June and I was working remotely from Croatia: swimming every afternoon, sipping cocktails with friends while watching the sunset just a five-minute walk from my beach apartment. It was everything one could want. Peaches, figs, watermelons, cherry tomatoes: endless. The nights were long. The vibes were impeccable.
But it felt criminally good for another reason, too: I had just discovered large language models.
Guest posts express the views of guest contributors and do not necessarily represent the views of the CAQDAS Networking Project or constitute an endorsement of any product, method or opinion.
At the time, I was working at Ipsos in their Ethnography Centre of Excellence. Because Ipsos is a massive company with considerable resources, I was able to experiment with Ipsos Facto, the company’s proprietary, firewalled LLM. I was having the time of my life. Tasks that used to take hours suddenly took minutes. My outputs looked sharper. I was faster, more efficient, more productive. The feedback I was getting was more positive than ever, and I was flying high.
And yet – it felt slightly illegal. Very illegal, actually. I had the strange sensation that I was cheating at my job. It felt like I had discovered a dirty little shortcut nobody else knew about: something to keep hidden, because surely I was breaking some unspoken rule.
Meanwhile, in the wider world, the opposite was happening. While I was quietly worrying about being caught and reprimanded – or worse, fired – for excelling at my job, Ipsos wanted us to use Ipsos Facto. We were supposed to experiment with it.
Lo and behold: LLMs were here to stay.
Very soon after I admitted to my wonderful line manager Heidi Habsbrouck (find her on LinkedIn; she and the entire Ipsos Ethno team are genuinely the best) that I had, indeed, been using Ipsos Facto on my projects, I became “the AI person”. Anyone who has worked in a large corporation knows how this goes: once someone mentions out loud that you did something, even once, even tentatively, you become the go-to person for that thing. And then, whether you planned it or not, you actually do become the expert. There are worse ways to learn.
So I, along with a handful of others, became part of the internal AI cohort. Colleagues started coming to me when they wanted to understand how to use Ipsos Facto in their work. I joined a working group that ran a weekly AI clinic: a drop-in session where people could bring whatever they were working on and brainstorm with us about how the model might help. Peak career-moment energy.
Heidi and I spoke at a conference. Prokopis Christou, author of a book on AI, spotted us there and asked us to contribute a chapter on AI in ethnography. I wrote it. A professor at emlyon Business School in France saw the chapter and invited me to lecture. Prokopis reached back out and asked me to write this article. And so on and so forth. You get the picture.
But let’s go back to the summer of 2023.
At first, using Ipsos Facto, and LLMs in general, felt almost magical. You ask a question and an answer appears instantly. You paste in a messy paragraph and suddenly it’s structured. You write a vague prompt and get back something that looks surprisingly complete. You ask it what spices will balance out an overly sweet curry. You get a beautifully plotted itinerary through Uzbekistan. You describe all your vague symptoms and it reassures you that you’re probably not dying, but might have a strained ligament, and should see a doctor.
It felt wonder-full.
Apart, obviously, from the hallucinations, and the mistakes one wasn’t always aware of at first.
But, like any anthropologist worth their salt, I was fascinated. What are these things? At the heart of anthropology lies a simple but profound question: what makes us human? And what better way to explore that question than by engaging with something that appears so similar to human reasoning, and yet remains fundamentally different?
Artificial intelligence offers anthropology a particularly intriguing mirror. It resembles aspects of human cognition: language, reasoning, creativity. It seems so human-like. Seems being the crucial word here.
Fast-forward to today: Spring 2026. I use LLMs every day, of course. Like many knowledge workers, I use them to unblock thinking, organise messy notes, generate outlines when a project is still half-formed. Sometimes they help me see patterns in my own thinking that I hadn’t yet articulated; other times they help me restructure ideas I had articulated but didn’t yet know how to organise.
They’re everywhere now. You almost have to use them to keep up. My little secret is fully out in the open, and we are all, undeniably, delivering faster than ever before.
Life, in many ways, still feels criminally good. I’m back in London, working for myself. The days are getting longer again. I do Pilates in the middle of the day, go to the sauna, meet friends for long brunches. But it no longer feels criminally good because of LLMs.
If anything, it feels criminally good because their bubble has burst. Or at least, it is beginning to.
The initial hype cycle is settling. Everyone now understands that large language models are extremely useful for quick research: they can summarise, structure, and accelerate thinking. But can they conduct deep research? Can they produce genuine understanding?
Not quite. Or, at least, not yet.
And in this bursting of the bubble, I have come to a realisation: large language models are very bad at being ethnographers. Actually – let’s zoom out. They are very bad at being people-like. Because what is an ethnographer, if not an extraordinarily curious and interested person?
Some might say: well, obviously. But let me explain. I am as optimistic as they come. I am genuinely enthusiastic about technological change and curious about the ways it might reshape how we do ethnography, and how we do life. What could be more anthropological than wondering what makes us human? That question is at the core of all anthropological thinking. If there is something that thinks as fast as we do, organises information as well as (honestly, better than) we do, and synthesises input as logically as we do: where does that leave us? We have long distinguished ourselves from other animals by our capacity to think, perceive, and organise. So what does the emergence of AI mean for how we understand our own humanity?
Which raises a different question: what actually makes us human? If it’s not our capacity to think, given that thinking is now also within the reach of AI (kind of), what distinguishes us from the machines? Find me a better anthropological question than that, and I’ll give you all the best figs from my grandmother’s garden. I’ll throw in some walnuts.
Again – I digress. But stay with me. If being a good ethnographer requires first being human, and if AI is human-like in the way it thinks, organises, and systematises, we might reasonably conclude that LLMs would make excellent ethnographers.
Surprise: they do not.
Two years ago, I thought they might. I was genuinely optimistic about the intersection of AI and ethnography. I even wrote a chapter in a proper academic book about it.
But humans learn, and opinions evolve.
To say that LLMs are bad at ethnography is not a complaint; it’s an observation. In many contexts, they are extraordinarily capable. They can synthesise information across vast bodies of text, produce convincing explanations, and mimic many forms of writing with impressive fluency. But when it comes to the actual practice of ethnography, something fundamental breaks down. I think it’s because ethnography, at its core, is so deeply human that LLMs simply cannot replicate it. Let me explain.
Holding the Unknown
It is nearly impossible to brainstorm with an LLM. Have you tried?
I don’t mean: give it your thoughts, have it synthesise them, and get back a dense paragraph. I mean actually bouncing unfinished ideas around. Sharing something half-formed. Going in circles until you reach the actual bottom of an idea.
As every anthropologist knows, thinking in the early stages of research rarely unfolds in a straight line. Ideas emerge only partially formed. Questions come into being before you even fully understand what those questions are, or what they should be asking. One observation leads to another, which leads to a doubt, which generates a question that may not yet be the right question, and so on.
Brainstorming in this early phase is not primarily about producing answers. It’s about creating a space in which uncertainty can exist and gradually develop into more meaningful lines of inquiry. Ethnographic interpretation, as Geertz (1973) famously argued, unfolds through sustained engagement with context and meaning, not through immediate analytical closure. You want to sit with the unknown. Get comfortable with it. Love it. Be okay with not knowing so that, eventually, you can have a breakthrough.
When you try to brainstorm with an LLM, a completely different dynamic emerges.
LLMs love completing the thought. They take your beautiful, unfinished idea, full of potential and open possibilities, and transform it into something complete, structured, resolved. They convert exploratory thinking into something that resembles a finished report, before the thinking itself has had time to fully mature.
At first, this can seem helpful. The efficiency is hard to ignore. But over time, something begins to feel slightly off. You realise you’re not really brainstorming anymore. The uncertainty, so essential to ethnography, and to genuine thinking in general, is not allowed to breathe. It gets resolved prematurely, rushing towards coherence and closure before it’s ready.
This is the very thing that sits uneasily with ethnographic research. Every anthropologist knows the moment when insight emerges precisely because familiar assumptions stop making sense. Agar (1994) describes these as “breakdowns”: moments when researchers encounter something that doesn’t fit their existing frameworks and must sit with the uncertainty long enough for new interpretations to emerge.
Ethnographic thinking rarely seeks immediate resolution because it knows that progress lives in uncertainty. It depends on a sustained and restless attentiveness to the complexities of human life, and to the inevitable ambiguities of interpretation. As Ingold (2014) suggests, anthropology is fundamentally concerned with cultivating different forms of attention to the world, and making space for the unexpected.
LLMs, by contrast, are designed to resolve uncertainty. They produce continuation where there was chaos, and coherence where there was noise. They move quickly towards closure, towards result, towards completeness before your exploratory, half-formed thinking has had the chance to become something.
They do not hold space for uncertainty. We do. And that is one of the things that makes us human – and makes us far better ethnographers than they will ever be.
A Billion Questions
There is a particular kind of discomfort that arrives when you walk into a stranger’s house for the first time as an ethnographer. You don’t know this person. They don’t know you. And yet you have asked to spend time with them: to watch them, to ask them things, to be let into the texture of their ordinary life. The first twenty minutes are always slightly strange. You can feel them trying to perform normality, and you trying not to perform interest. Nobody is quite themselves yet.
This is, in my experience, both the worst and the best part of being an ethnographer.
The temptation, especially early in your career, is to fill the strangeness. To ask another question, move things along, give the conversation somewhere to go. But ethnographic practice teaches you to resist this. The silence is not dead time. It is where the person across from you is deciding whether to trust you. And if you can stay in it, if you can breathe into the awkwardness rather than out of it, something shifts. Slowly, almost imperceptibly, the conversation becomes real. You stop being a researcher and a subject, and start being, in some provisional and temporary way, something closer to friends.
I learned this the hard way. For a long time, I was terrible at it. My instinct, whenever a conversation dipped into quiet, was to rescue it. To jump in with the next question, the next prompt, anything to keep things moving. It took years of fieldwork to understand that I was the one making it worse. The silence wasn’t the problem. My discomfort with it was.
Now I count to ten. I sing a song in my head. Every time a person finishes their sentence, I hold the silence, I create the space, I let them be. I wait for what comes next. Not the polished answer they prepared before I arrived, but the real one, the one that surfaces only once they’ve decided the quiet is safe.
I always think of it like diving. When you’re new to it, staying at the bottom feels like something you have to force: you hold yourself down, you resist the water, you try. And the harder you try, the faster you drift back up. The only way to stay is to stop fighting it. To breathe differently. To let your body trust the depth.
Ethnographic fieldwork is like that, but with awkwardness instead of water.
Large language models are, in this sense, congenitally bad at fieldwork. Not because they lack information. They have more than any ethnographer ever could. But because they are built entirely around the next best thing. The most plausible continuation. The helpful response. They are, structurally, incapable of sitting in the discomfort of not-yet. They cannot breathe into the silence. They can only fill it.
What gets lost when silence is always filled is not just atmosphere. It is the thing the person was about to say before they decided whether they trusted you enough to say it. It is the hesitation that tells you the first answer wasn’t quite the real one. It is the moment, and every ethnographer has felt this, when someone glances away and then back, and you know that what comes next is the thing that actually matters.
LLMs are extraordinarily good at organising what people have already said. They can synthesise, categorise, find patterns across hundreds of responses. But ethnography is not primarily about what people say. It is about what they mean, and what they almost say, and what they say with their hands, and what they stop themselves from saying. It is a practice of empathy as much as analysis — and empathy, unlike analysis, cannot be optimised.
This is, I think, what lies beneath all of it. Ethnography requires you to be genuinely affected by the people you spend time with. To find them interesting not as data points but as humans. To care, a little, whether they feel comfortable in the silence. LLMs can simulate the language of care. But they cannot feel the weight of it, and people, it turns out, can always tell the difference.
In Conclusion – Or Something Like It
So here is what I have come to think, three years on from that Croatian summer.
Large language models are not bad at ethnography because they lack the right tools, or the right training data, or because the technology isn’t advanced enough yet. They are bad at ethnography because ethnography is, at its deepest level, a practice of being human. It requires the capacity to sit with not-knowing: to let a question remain open long enough to become something richer. It requires the ability to breathe into an awkward silence rather than out of it. It requires empathy, and wonder, and the willingness to be genuinely affected by the people you’re with. These are not features that can be added in the next model update.
LLMs move toward closure. They complete. They resolve. They produce the most plausible next thing. That is, genuinely, a remarkable capability; and I rely on it every day. But ethnography is precisely the refusal to do that. It is the practice of staying in the uncomfortable space before the answer arrives, because that space is where the real understanding lives.
Which brings me back to the question I keep returning to: if AI can “think”, organise, and synthesise (sometimes better than we can), what is it that makes us human? I think the answer, or at least part of it, lives here. In the capacity to be uncertain and stay anyway. In the willingness to wait in silence and see what comes. In the ability to walk into a stranger’s house and slowly, carefully, become something like friends.
These are not weaknesses that AI will eventually overcome. They are the thing itself.
I should say: I wanted to write a third point. I had the shape of it in my head: something about the body, about physical presence, about the fact that ethnography happens in a room and not on a screen. But when I sat down to write it, I couldn’t find it. The idea kept drifting just out of reach, the way the bottom does when you’re trying too hard to stay there.
And so I let it go. I am choosing to be imperfect here: to be human about it. This essay is two-thirds of what I thought it would be, and I am making my peace with that. Perhaps that is its own small argument.
I also want to be honest about something else. Two years ago I wrote a chapter, a proper academic chapter, in a proper published book, arguing for the exciting potential of AI in ethnography. I was optimistic, maybe naively so. And now I’ve written this. So who knows: check back with me in 2028 and I may have reversed myself entirely, floating back up to the surface, having tried too hard to stay down.
That, at least, would be very human of me.
=============================================================
Deana Kotiga is an independent cultural consultant, strategist and researcher specialising in visual ethnography and helping brands speak to their audiences with authenticity rather than assumptions. With experience across academic, public, and commercial sectors, she brings deep cultural insight to complex social questions. Her work blends qualitative research, cross-cultural analysis, visual insight, and the thoughtful use of generative AI to uncover how people live, see, and make meaning. Through her consultancy, Deana helps brands, organisations and services navigate culture, meaning, and change with nuance, sensitivity, and clarity.
============================================================
References
Agar, M. (1994). Language Shock: Understanding the Culture of Conversation. New York: William Morrow.
Geertz, C. (1973). The Interpretation of Cultures. New York: Basic Books.
Ingold, T. (2014). Anthropology and/as Education. London: Routledge.