Cognitive Abilities of Large Language Models

The field of artificial intelligence is moving toward a deeper understanding of the cognitive abilities of large language models (LLMs). Researchers are exploring the possibility that LLMs can be considered full-blown linguistic and cognitive agents, possessing understanding, beliefs, desires, knowledge, and intentions. This shift in perspective is driven by the observation that LLMs can perform complex tasks such as answering questions, making suggestions, and learning from experience. The development of metacognitive abilities in LLMs is also a key area of research, with implications for human-AI collaboration and for building more trustworthy artificial systems. Furthermore, the use of LLMs as theoretical tools for studying human cognition is gaining traction, with studies investigating the link between forward-pass dynamics in Transformers and real-time human processing. Noteworthy papers in this area include:

  • A philosophical defense of AI cognition, which argues that LLMs possess the full suite of cognitive states.
  • A study on metacognition and uncertainty communication in humans and LLMs, which highlights the importance of attending to the differences between human and LLM metacognitive capacities.
  • Research on linking forward-pass dynamics in Transformers and real-time human processing, which suggests that Transformer processing and human processing may be facilitated or impeded by similar properties of an input stimulus.
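The "linking" in the last item is typically operationalized by correlating a per-token, model-derived difficulty metric (such as surprisal) with per-token human reading times. The sketch below illustrates that analysis shape only; the metric values and reading times are hypothetical, not taken from the paper.

```python
# Minimal sketch: correlate a per-token model metric (e.g. surprisal)
# with per-token human reading times. All data below is made up for
# illustration; a real study would extract surprisal from a Transformer
# and reading times from eye-tracking or self-paced-reading corpora.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-token values: model surprisal (bits) and reading times (ms).
surprisal = [2.1, 7.8, 3.0, 9.5, 1.2, 6.4]
reading_ms = [210, 340, 230, 390, 190, 310]

r = pearson(surprisal, reading_ms)
print(f"r = {r:.3f}")
```

A positive correlation on real data would suggest that the same input properties that slow the model's processing also slow human readers, which is the pattern the cited work probes.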

Sources

Going Whole Hog: A Philosophical Defense of AI Cognition

Metacognition and Uncertainty Communication in Humans and Large Language Models

Linking forward-pass dynamics in Transformers and real-time human processing

Cooperative Speech, Semantic Competence, and AI