Towards hypergraph cognitive networks as feature-rich models of knowledge
1 Institute of Information Science and Technologies “A. Faedo” (ISTI), National Research Council (CNR), Via G. Moruzzi, 1, Pisa, Italy
2 Computational Cognitive Science Lab, University of Melbourne, Melbourne, Australia
3 Department of Psychology and Cognitive Science, University of Trento, Corso Bettini, 84, Trento, Italy
Accepted: 31 July 2023
Published online: 16 August 2023
Conceptual associations influence how human memory is structured: cognitive research indicates that similar concepts tend to be recalled one after another. Semantic network accounts provide a useful tool for understanding how related concepts are retrieved from memory. However, most current network approaches use pairwise links to represent memory recall patterns (e.g. reading “airplane” makes one think of “air” and “pollution”, represented by the links “airplane”-“air” and “airplane”-“pollution”). Pairwise connections neglect higher-order associations, i.e. relationships between more than two concepts at a time. These higher-order interactions might covary with (and thus contain information about) how similar concepts are along psycholinguistic dimensions like arousal, valence, familiarity, gender and others. We overcome these limits by introducing feature-rich cognitive hypergraphs as quantitative models of human memory in which: (i) concepts recalled together can engage in hyperlinks involving more than two concepts at once (cognitive hypergraph aspect), and (ii) each concept is endowed with a vector of psycholinguistic features (feature-rich aspect). We build hypergraphs from word association data and use machine-learning evaluation methods to predict concept concreteness. Since concepts with similar concreteness tend to cluster together in human memory, we expect to be able to leverage this structure. Using word association data from the Small World of Words dataset, we compared a pairwise network and a hypergraph built over the same concepts/nodes. Interpretable artificial intelligence models trained on (1) psycholinguistic features only, (2) pairwise-based feature aggregations, and (3) hypergraph-based aggregations show significant differences between pairwise and hypergraph links.
Specifically, our results show that higher-order, feature-rich hypergraph models contain richer information than pairwise networks, leading to improved prediction of word concreteness. The relation of these results to previous studies on conceptual clustering and compartmentalisation in associative knowledge and human memory is discussed.
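To make the distinction between the two representations concrete, the following is a minimal illustrative sketch (not the authors' code) of how a psycholinguistic feature can be aggregated over pairwise neighbours versus hypergraph co-members. The toy cue-response data and concreteness scores are invented for illustration; in the Small World of Words task, each participant lists up to three responses per cue, and here each cue together with its responses forms one hyperlink.

```python
# Toy cue-response data, loosely in the style of the Small World of Words
# task (hypothetical values, for illustration only).
responses = {
    "airplane": ["air", "pollution", "sky"],
    "sky": ["air", "cloud", "blue"],
}

# Hypothetical concreteness scores for the toy vocabulary (feature-rich aspect).
concreteness = {
    "airplane": 5.0, "air": 3.6, "pollution": 3.9,
    "sky": 4.5, "cloud": 4.5, "blue": 4.0,
}

# Pairwise network: one cue-response link per response.
pairwise_edges = {(cue, r) for cue, rs in responses.items() for r in rs}

# Hypergraph: each cue plus all of its responses forms a single hyperlink.
hyperedges = [frozenset([cue, *rs]) for cue, rs in responses.items()]

def pairwise_mean(word):
    """Mean feature value over direct neighbours in the pairwise network."""
    neigh = ({b for a, b in pairwise_edges if a == word}
             | {a for a, b in pairwise_edges if b == word})
    return sum(concreteness[w] for w in neigh) / len(neigh)

def hypergraph_mean(word):
    """Mean feature value over co-members of hyperlinks containing the word."""
    comembers = set().union(*(h for h in hyperedges if word in h)) - {word}
    return sum(concreteness[w] for w in comembers) / len(comembers)

# In the paper's setup such aggregations would feed an interpretable
# classifier; here we only compare the two neighbourhood definitions.
print(pairwise_mean("air"))    # neighbours: airplane, sky
print(hypergraph_mean("air"))  # co-members: airplane, pollution, sky, cloud, blue
```

The example shows why the two models can carry different information: the hypergraph aggregation for “air” draws on every concept recalled alongside it in some hyperlink, not only the cues it was a direct response to, so the same feature vector yields different neighbourhood statistics under the two representations.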
Key words: Cognitive networks / Free associations / Feature-rich networks / Hypergraphs
© Springer-Verlag GmbH, DE 2023
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.