Document Type
Conference Proceeding
Abstract
Fine-tuning pre-trained foundational language models (FLMs) for specific tasks is often impractical, especially on resource-constrained devices. This necessitates a Lifelong Learning (L3) framework that continuously and efficiently adapts to a stream of Natural Language Processing (NLP) tasks. We propose an approach that focuses on extracting meaningful representations from unseen data, constructing a structured knowledge base, and improving task performance incrementally. We conducted experiments on a variety of NLP tasks, including the GLUE and SuperGLUE benchmarks, to validate its effectiveness, measuring performance in terms of accuracy, training efficiency, and knowledge transfer. Initial experimental results show that the proposed L3 ensemble method increases model accuracy by 4%–36% compared to the fine-tuned FLM. Furthermore, the L3 model outperforms naive fine-tuning approaches while maintaining competitive or superior performance (up to a 15.4% increase in accuracy) compared to a state-of-the-art language model (T5) on the given task, the STS Benchmark.
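To make the pipeline described above concrete, the following is a minimal, hypothetical sketch of a lifelong-learning ensemble loop: it extracts a prototype representation from each incoming task, consults a small knowledge base to decide whether an existing expert can be reused (knowledge transfer) or a new one must be trained (incremental growth). All names (Expert, route_or_grow, the similarity threshold) and the toy data are illustrative assumptions; the abstract does not specify the authors' actual architecture.

```python
# Hypothetical L3-style ensemble loop over a stream of tasks.
# Experts, prototypes, and the routing rule are illustrative, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

class Expert:
    """A tiny logistic-regression 'expert' trained on one task."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
        self.b = 0.0

    def fit(self, X, y, lr=0.1, epochs=200):
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))
            g = p - y                          # gradient of the log-loss
            self.w -= lr * X.T @ g / len(y)
            self.b -= lr * g.mean()

    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# Knowledge base: one mean-embedding "prototype" per stored expert.
experts, prototypes = [], []

def route_or_grow(X, y, sim_threshold=0.9):
    """Reuse the closest expert if the new task looks familiar;
    otherwise train a new expert and store its prototype."""
    proto = X.mean(axis=0)
    proto /= np.linalg.norm(proto) + 1e-9
    if prototypes:
        sims = [p @ proto for p in prototypes]
        best = int(np.argmax(sims))
        if sims[best] >= sim_threshold:
            return experts[best]               # transfer: reuse old knowledge
    e = Expert(X.shape[1])
    e.fit(X, y)
    experts.append(e)
    prototypes.append(proto)
    return e

# Simulated stream of binary tasks (stand-ins for GLUE-style tasks).
for task_id in range(3):
    X = rng.normal(size=(64, 16)) + task_id    # shifted feature distribution
    y = (X.sum(axis=1) > X.sum(axis=1).mean()).astype(int)
    expert = route_or_grow(X, y)
    acc = (expert.predict(X) == y).mean()
    print(f"task {task_id}: ensemble size={len(experts)}, acc={acc:.2f}")
```

In this toy run, later tasks whose prototypes are similar enough to a stored one reuse the matching expert instead of growing the ensemble, which is the behavior the routing threshold controls.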
Digital Object Identifier (DOI)
10.1145/3632410.3632494
Publication Info
Preprint version. Young Researchers Symposium, CODS-COMAD 2024, 2023.
© Shiri, Roy, Sheth, & Gaur | ACM. 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in CODS-COMAD '24: Proceedings of the 7th Joint International Conference on Data Science & Management of Data (11th ACM IKDD CODS and 29th COMAD), https://doi.org/10.1145/3632410.3632494.
APA Citation
Shiri, A., Roy, K., Sheth, A., & Gaur, M. (2023). L3 ensembles: Lifelong learning approach for ensemble of foundational language models. Young Researchers Symposium, CODS-COMAD 2024. [Preprint]