Advancements in Cognitive Architectures for Language Models
Research on language models is increasingly incorporating cognitive architectures to strengthen reasoning and decision-making. One thread draws on foundational frameworks from intelligence theory, such as Guilford's Structure of Intellect model, to improve the clarity, coherence, and adaptability of model responses. A second thread develops frameworks for evaluating the artificial cognitive capabilities of large language models, with an emphasis on robustness and systematic evaluation. Noteworthy papers include a cognitive prompting approach that uses the Structure of Intellect model to enforce systematic, step-by-step reasoning in language models, and a framework for building high-quality benchmarks for automatic evaluation whose rankings show strong correlations with human rankings. Together, these developments could substantially advance the field and enable more effective automation of complex tasks.
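To make the cognitive prompting idea concrete, the short Python sketch below wraps a task in a fixed, ordered list of cognitive operations so the model is walked through them before answering. The operation names and the build_cognitive_prompt helper are illustrative assumptions, not the exact operations or wording used in the cited paper.

    # Minimal sketch of a cognitive-prompting style prompt builder.
    # The operations below are placeholders chosen for illustration.
    COGNITIVE_OPERATIONS = [
        "Clarify the goal of the problem.",
        "Decompose the problem into smaller sub-problems.",
        "Filter out information that is irrelevant to the goal.",
        "Identify patterns or relationships among the remaining facts.",
        "Integrate the partial results into a single final answer.",
    ]

    def build_cognitive_prompt(task: str) -> str:
        """Wrap a task in an ordered sequence of cognitive operations so the
        model is nudged to reason step by step rather than answer directly."""
        steps = "\n".join(f"{i + 1}. {op}" for i, op in enumerate(COGNITIVE_OPERATIONS))
        return (
            "Work through the following operations in order, writing one short "
            "paragraph per operation, then give a final answer.\n\n"
            f"Operations:\n{steps}\n\nTask: {task}"
        )

    print(build_cognitive_prompt("A train leaves at 9:00 ..."))

Likewise, the reported agreement between an automatic benchmark and human judgments is commonly quantified as a rank correlation over model rankings. The sketch below computes Spearman's rho with scipy.stats.spearmanr; the model names and scores are invented for illustration and are not results from the cited papers.

    # Toy check of how well an automatic benchmark's model ranking agrees
    # with a human ranking, using Spearman rank correlation.
    from scipy.stats import spearmanr

    models = ["model-a", "model-b", "model-c", "model-d", "model-e"]
    auto_scores = [0.81, 0.74, 0.69, 0.55, 0.42]   # automatic benchmark scores
    human_scores = [0.78, 0.70, 0.72, 0.50, 0.41]  # averaged human preference scores

    rho, p_value = spearmanr(auto_scores, human_scores)
    print(f"Spearman correlation: {rho:.2f} (p = {p_value:.3f})")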
Sources
Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models
AI Judges in Design: Statistical Perspectives on Achieving Human Expert Equivalence With Vision-Language Models