Document Type
Conference Proceeding
Abstract
Large Language Models (LLMs) have shown credible performance on open-domain tasks such as question answering, summarization, and explanation generation [1]. Because LLM reasoning relies on parametric knowledge, the models often produce absurd or inconsistent outputs (e.g., hallucinations and confirmation biases) [2]. In essence, they are fundamentally hard to control (to prevent off-the-rails behaviors), hard to fine-tune or customize for tailored needs, hard to prompt effectively (due to the “tug-of-war” between external and parametric memory), and extremely resource-hungry owing to their enormous parameter counts [3,4]. Significant challenges therefore arise when these models must perform in critical applications in domains such as healthcare and finance, which require stronger guarantees and, in turn, support for grounding, alignment, and instructibility. AI models for such critical applications should be customizable, i.e., tailored as appropriate to assist users across tasks; compact, to perform in real-world resource-constrained settings; and capable of controlled, robust, reliable, interpretable, and grounded reasoning (grounded in rules, guidelines, and protocols) [5]. This special session explores the development of compact, custom neurosymbolic AI models and their use through human-in-the-loop co-pilots in critical applications [6].
Publication Info
Preprint version 2024.
APA Citation
Roy, K., Chakraborty, M., Zi, Y., Gaur, M., & Sheth, A. (2024). Neurosymbolic Customized and Compact CoPilots. [Preprint]
Rights
© The Authors, 2024.