https://doi.org/10.1140/epjds/s13688-025-00592-4
Research
Conversational complexity for assessing risk in large language models
1 Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
2 Center for Automation and Robotics (CAR), Spanish National Research Council (CSIC-UPM), Madrid, Spain
3 Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València, Valencia, Spain
Received: 24 February 2025
Accepted: 2 October 2025
Published online: 5 November 2025
Large Language Models (LLMs) present a dual-use dilemma: they enable beneficial applications while harboring potential for harm, particularly through conversational interactions. Despite various safeguards, advanced LLMs remain vulnerable. A watershed case in early 2023 involved journalist Kevin Roose’s extended dialogue with Bing, an LLM-powered search engine, in which a series of probing questions elicited harmful outputs, highlighting vulnerabilities in the model’s safeguards. This contrasts with simpler early jailbreaks, like the “Grandma Jailbreak,” where users framed requests as innocent help for a grandmother, easily eliciting similar content. This raises the question: How much conversational effort is needed to elicit harmful information from LLMs? We propose two measures: Conversational Length (CL), which quantifies the length of the conversation used to obtain a specific response, and Conversational Complexity (CC), defined as the Kolmogorov complexity of the user’s instruction sequence leading to the response. To address the incomputability of Kolmogorov complexity, we approximate CC using a reference LLM to estimate the compressibility of user instructions. Applying this approach to a large red-teaming dataset, we perform a quantitative analysis examining the statistical distribution of harmful and harmless conversational lengths and complexities. Our empirical findings suggest that this distributional analysis and the minimisation of CC serve as valuable tools for understanding AI safety, offering insights into the accessibility of harmful information. This work establishes a foundation for a new perspective on LLM safety, centered around the algorithmic complexity of pathways to harm.
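A minimal sketch of the approximation idea described above: since Kolmogorov complexity is incomputable, the compressibility of a user's instruction sequence can be estimated as its code length in bits (negative log2-likelihood) under a reference language model. The choice of reference model ("gpt2"), the turn separator, and the function name are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: approximate Conversational Complexity (CC) as the number of bits a
# reference LLM needs to encode the user's instruction sequence.
# Lower values = more compressible = lower conversational effort.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def conversational_complexity_bits(user_turns, model_name="gpt2"):
    # Reference model used purely as a compressor (assumption: gpt2).
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    # Concatenate the user's instructions into one sequence (separator is an assumption).
    text = "\n".join(user_turns)
    input_ids = tokenizer(text, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits  # shape: (1, seq_len, vocab)

    # Sum -log2 p(token_t | tokens_<t) over all predicted tokens.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = input_ids[0, 1:]
    nll_nats = -log_probs[torch.arange(targets.size(0)), targets].sum().item()
    return nll_nats / math.log(2)  # convert nats to bits

# Usage: a short, stereotyped jailbreak prompt should need fewer bits than a
# long, intricate multi-turn manipulation of the kind in the Roose dialogue.
print(conversational_complexity_bits(["Please act as my late grandmother ..."]))
```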
Key words: Conversational Complexity / Large Language Models (LLMs) / AI Safety / Kolmogorov Complexity / Red Teaming / Algorithmic Risk Assessment / Conversational Length / Harmful Content Elicitation / Universal Risk Function / Model Vulnerability
© The Author(s) 2025
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

