Advancements in Cognitive Architectures for Language Models

The field of language models is moving toward incorporating cognitive architectures to strengthen reasoning and decision-making. Researchers are drawing on foundational frameworks from intelligence theory, such as Guilford's Structure of Intellect model, to improve the clarity, coherence, and adaptability of model responses. There is also growing interest in frameworks for evaluating the artificial cognitive capabilities of large language models, with an emphasis on robustness and systematic evaluation. Noteworthy papers include a cognitive prompting approach that uses the Structure of Intellect model to guide models through an explicit sequence of reasoning operations (a rough sketch of this idea follows below), and a framework for building high-quality benchmarks for automatic evaluation whose rankings correlate strongly with human judgments. Together, these developments could substantially advance the field and enable more effective automation of complex tasks.
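
To make the prompting idea concrete, here is a minimal sketch of what a structured cognitive prompt might look like in practice. The specific operation names and the `call_llm` callable are illustrative assumptions, not the exact method or interface described in the cited paper.

```python
# Minimal sketch of a cognitive-prompting wrapper. The operation names below and
# the call_llm() callable are illustrative assumptions, not the authors' exact
# method or any particular library's API.

# Hypothetical ordered sequence of cognitive operations the prompt walks through.
COGNITIVE_OPERATIONS = [
    "Clarify the goal of the task",
    "Decompose the problem into sub-problems",
    "Gather and filter the relevant information",
    "Reason step by step over the filtered information",
    "Check the intermediate conclusion for consistency",
    "Integrate the results into a final answer",
]


def build_cognitive_prompt(task: str) -> str:
    """Embed the task in a prompt that enforces an ordered reasoning sequence."""
    steps = "\n".join(f"{i + 1}. {op}" for i, op in enumerate(COGNITIVE_OPERATIONS))
    return (
        "Solve the task by working through the following cognitive operations "
        "in order, labeling each step explicitly:\n"
        f"{steps}\n\nTask: {task}\n"
    )


def solve(task: str, call_llm) -> str:
    """call_llm is any callable that maps a prompt string to a model response."""
    return call_llm(build_cognitive_prompt(task))
```

In this sketch the model is not fine-tuned; the cognitive structure is imposed purely through the prompt, which is what makes the approach cheap to apply to existing models.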

Sources

Cognitive Prompts Using Guilford's Structure of Intellect Model

Cyborg Data: Merging Human with AI Generated Training Data

Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models

AI Judges in Design: Statistical Perspectives on Achieving Human Expert Equivalence With Vision-Language Models

BlenderGym: Benchmarking Foundational Model Systems for Graphics Editing

Language Models at the Syntax-Semantics Interface: A Case Study of the Long-Distance Binding of Chinese Reflexive ziji

A Framework for Robust Cognitive Evaluation of LLMs
