Abstract
In international management research, cross-cultural qualitative studies often face linguistic diversity, subtle cultural nuances and contextual ambiguity, all of which can complicate data interpretation and interrater reliability. This paper proposes that Artificial Intelligence, and particularly Large Language Models, can provide vital support in addressing these challenges by aiding culture-sensitive coding. We develop a step-by-step process model in which Large Language Models are integrated into the interrater reliability workflow as collaborative partners rather than replacements for human coders. By leveraging the scalability, linguistic versatility and pattern-recognition capabilities of Large Language Models, researchers can achieve greater coding consistency, reduce cognitive overload and enhance sensitivity to cultural and contextual variation in qualitative data. Illustrated through a hypothetical example of cross-cultural networking behaviour, the model demonstrates how AI-assisted coding can amplify human judgment in iterative coding cycles, offering a more robust and inclusive approach to interrater reliability. In doing so, this paper advances methodological practice by showing how human–machine collaboration can refine culture-sensitive analysis and strengthen the validity of qualitative research in international management.
| Original language | English |
|---|---|
| Pages (from-to) | 1-23 |
| Number of pages | 23 |
| Journal | European Journal of International Management |
| Volume | 28 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2026 |
Keywords
- AI
- artificial intelligence
- cross-cultural management
- human–machine collaboration
- intercoder reliability
- international business
- interrater reliability
- large language models
- LLMs
- qualitative coding
- qualitative research methods