Citation:

Abstract:
Following the ongoing internationalisation of social science research and the computational turn in the field, researchers are increasingly adopting computational text analysis (CTA) methods to compare textual data across multiple cases and languages. In these settings, it is not only the mapping between constructs and measures that requires validation, but also the equivalence of this mapping across languages and cases. Yet although the validation requirements of multilingual analyses exceed those of monolingual studies, current research shows that validation is often addressed insufficiently and inconsistently in comparative multilingual CTA. To support more robust comparative research, this article presents a framework for validating findings obtained from multilingual textual data. The framework outlines validation strategies for four key stages of a typical multilingual CTA workflow: corpus, input data, process, and output. It directly addresses the challenge of establishing equivalence across contexts and languages at each of these stages and moves beyond the common practice of identifying problems only at the final stage of research.