Document Type

Article

Abstract

Foundation models have emerged as powerful tools, exhibiting extraordinary performance across various tasks, such as language processing, visual recognition, code generation, and human-centered engagement. However, recent studies have highlighted their limitations when grounded, abstract, and generalized reasoning capabilities are required. Complex tasks often involve multiple hierarchical reasoning steps, which are typical features of human thinking processes. In this chapter, we argue that cognitively-inspired computational models, such as the so-called Common Model of Cognition, are key to enabling complex reasoning within foundation model-based artificial intelligence (AI) systems. We investigate neurosymbolic approaches for mapping AI system components to those of the Common Model of Cognition, either fully or partially. Specifically, two pathways are explored: (i) Given a task and its solution, we explore the effect of fine-tuning foundation models on the output traces obtained through a cognitive architecture such as ACT-R. The hypothesis is that, after fine-tuning, the foundation model will more closely emulate the cognitive reasoning processes necessary to solve the specific task. (ii) In the second approach, given a task, we explore mapping the solution requirements to various components of the Common Model of Cognition and invoke a combination of foundation model-based pattern recognition, external knowledge augmentation, and control flow planning to facilitate cognitive reasoning for the task. The chapter covers the background of foundation models and the Common Model of Cognition, a survey of the existing landscape in integrating foundation models and cognitive architectures, and a discussion of insights from preliminary implementations of the two neurosymbolic pathways across real-world and synthetic tasks.

Rights

© 2024, The Authors

APA Citation

Roy, K., Wu, S., & Oltramari, A. (2024). Neurosymbolic cognitive methods for enhancing foundation model-based reasoning. [Preprint]
