Minimal Cognitive Grid: New Results and Applications
CENTAUROMACHY AND AI
Ten months ago, Nature published a paper entitled “A foundation model to predict and capture human cognition”, showcasing a model called Centaur, proposed as a possible candidate for a unified theory of cognition since it was able to replicate, with human-comparable performance, a wide range of behavioral data after being fine-tuned on a huge number of psychological tests.
In the paper “Taming the Centaur(s) with LAPITHS: A Framework for a Theoretically Grounded Interpretation of AI Performances”, now out on arXiv and co-authored with @Matteo Da Pelo, @Alessio Donvito, @Claudio Frongia, and @Pietro Salis, we present a methodological framework called LAPITHS (standing for Language-model Analysis through Paradigm-grounded Interpretations of Theses about Human-likenesS, and also recalling the name of the famous Greek tribe that fought the epic battle against the Centaurs), showing how equivalent behavioral and neural prediction performances can be simulated by RAG agents not specifically trained on human data about the task at hand.
In particular, the experimental analysis of our paper focuses on the two-step task. This is a core task for the arguments developed by the Centaur’s authors, since it was used as a key element to show that their system was able not only to predict human-level behavioral performance but also to predict, for that task, the beta values and regions of interest (ROIs) of human fMRI data without being specifically trained on them.
We show that comparable performances on both the neural and the behavioral data can be obtained by non-cognitive systems such as our RAG agents, which are not trained on human data but only instructed with the reward scheme and the task structure of the two subtasks to handle.
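For readers unfamiliar with the paradigm, the two-step task mentioned above can be sketched as a tiny simulation: a first-stage choice leads to one of two second-stage states via a “common” or “rare” transition, where a binary reward is then sampled. The transition and reward probabilities below are standard illustrative values from the literature, not the exact parameters used in our experiments.

```python
import random

# Probability that a first-stage choice leads to its "common" second-stage
# state (0.7 is the conventional value in the two-step task literature).
COMMON_P = 0.7

def transition(first_stage_choice: int) -> int:
    """Map a first-stage choice (0 or 1) to a second-stage state (0 or 1)."""
    if random.random() < COMMON_P:
        return first_stage_choice       # common transition
    return 1 - first_stage_choice       # rare transition

def trial(first_stage_choice: int, reward_probs) -> tuple:
    """One trial: transition to a second-stage state, then sample a reward."""
    state = transition(first_stage_choice)
    reward = 1 if random.random() < reward_probs[state] else 0
    return state, reward

# Example: a few trials with fixed (illustrative) second-stage reward odds.
random.seed(0)
history = [trial(0, reward_probs=(0.8, 0.2)) for _ in range(5)]
```

A model-based learner exploits the transition structure (choosing so as to reach the richer second-stage state), while a model-free learner simply repeats rewarded choices; it is this structural distinction that the fMRI analyses in the Centaur paper, and our RAG-based replication, hinge on.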
In addition, the LAPITHS framework adopts a mathematical formalization of the Minimal Cognitive Grid (introduced in Cognitive Design for Artificial Minds) to show that the performance alignment of AI systems with human data does not equate to a structural alignment with the human mechanisms producing those performances.
As a consequence, the paper suggests adopting cautious claims when interpreting the human-level performances of transformer-based language models (et similia) as evidence of human-like underlying computation and, by extension, as “signs” of their cognitive abilities. In our opinion, this trend in AI research represents a behaviouristic tendency that should be avoided and toned down, since it does not appear to be scientifically justified and constitutes an element of hype that does not do justice to the AI and Cognitive Science research agendas and their development.
#minimalcognitivegrid #humanlikeAI #AI #computationalcognitivescience #cognitivedesignforartificialminds

