https://doi.org/10.1140/epjds/s13688-025-00600-7
Research
Cognitive networks identify AI biases on societal issues in Large Language Models
1 CogNosco Lab, Department of Psychology and Cognitive Science, University of Trento, Corso Bettini 31, 38068 Rovereto, TN, Italy
2 Department of Psychology and Cognitive Science, University of Trento, Corso Bettini 31, 38068 Rovereto, TN, Italy
3 Center for Behavioural and Implementation Science Interventions, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, 117597, Singapore
4 Department of Sociology and Social Research, University of Trento, Via Verdi 26, 38122 Trento, TN, Italy
Received: 24 July 2025
Accepted: 14 November 2025
Published online: 18 December 2025
Millions of people use Large Language Models (LLMs) to research information about complex topics related to societal issues. As a result, LLMs might be influencing large worldwide audiences in ways that remain unexplored with empirical data. To address this data gap, this study introduces and analyses SociaLLMisinformation: a dataset of 33,000 English and Italian LLM-generated texts on societal issues like climate change, global warming and health misinformation. Texts were mined from OpenAI’s GPT-3.5 and GPT-4o, Meta’s Llama 3 and Llama 3.1, Anthropic’s Claude 3 Haiku, Mistral and LLaMAntino. We investigate LLMs’ framing of these societal topics through an interpretable computational framework based on textual forma mentis networks (TFMNs), i.e., networks of syntactic/semantic associations between concepts in texts. Using TFMNs, we extract the linguistic and affective biases present in the SociaLLMisinformation texts. Our findings reveal that the analysed LLMs adopt distinct communication styles and pronoun usage, even when prompted identically. All the models exhibit a strong positivity bias, possibly downplaying the seriousness and importance of complex and sensitive topics. This work provides both a new dataset and a novel analytical approach, highlighting the need for transparent, network-based methods to monitor and mitigate LLM biases as these models become central tools for retrieving information.
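To make the TFMN idea concrete, the following minimal sketch (not the authors' actual pipeline) builds a toy network of concept associations and reads an affective bias score off a concept's neighbourhood. The edge list and the valence lexicon are hypothetical examples introduced purely for illustration; a real TFMN would derive edges from syntactic dependency parsing and valence scores from a validated lexicon.

```python
# Illustrative TFMN-style sketch. Nodes are concepts; edges are
# syntactic/semantic associations observed in text. Both the edges and
# the valence lexicon below are made-up toy data, not the study's data.

# Hypothetical associations from an LLM-generated text about climate change
edges = [
    ("climate", "change"), ("climate", "action"),
    ("change", "urgent"), ("action", "hope"),
    ("action", "progress"), ("climate", "future"),
]

# Toy valence lexicon: +1 positive, -1 negative, 0 neutral
valence = {"hope": 1, "progress": 1, "urgent": -1,
           "change": 0, "action": 0, "future": 0, "climate": 0}

# Build an undirected adjacency map
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def neighbourhood_valence(concept):
    """Mean valence of the concepts directly associated with `concept`."""
    neighbours = adj.get(concept, set())
    if not neighbours:
        return 0.0
    return sum(valence[n] for n in neighbours) / len(neighbours)

# "action" is surrounded by positive concepts (hope, progress), so its
# neighbourhood valence is positive -- the kind of positivity bias the
# study measures at scale.
print(round(neighbourhood_valence("climate"), 2))  # -> 0.0
print(round(neighbourhood_valence("action"), 2))   # -> 0.67
```

A positivity bias then shows up as many topic-central concepts having neighbourhoods dominated by positive-valence associations.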
Key words: LLMs / Climate change / Global warming / Health misinformation / Machine psychology / Machine bias
© The Author(s) 2025
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

