Document Type

Article

Abstract

Domain-specific language understanding requires integrating multiple pieces of relevant contextual information. For example, we see both suicide- and depression-related behavior (multiple contexts) in the text “I have a gun and feel pretty bad about my life, and it wouldn’t be the worst thing if I didn’t wake up tomorrow”. Domain specificity in self-attention architectures is typically handled by fine-tuning on excerpts from relevant domain-specific resources (datasets and external knowledge, e.g., medical textbook chapters on mental health diagnosis related to suicide and depression). We propose a modified self-attention architecture, the Knowledge-infused Self Attention Transformer (KSAT), that integrates multiple domain-specific contexts through the use of external knowledge sources. To accomplish this, KSAT introduces knowledge-guided biases in dedicated self-attention layers, one for each knowledge source. In addition, KSAT provides mechanics for controlling the trade-off between learning from data and learning from knowledge. Our quantitative and qualitative evaluations show that (1) the KSAT architecture provides novel human-understandable ways to precisely measure and visualize the contributions of the infused domain contexts, and (2) KSAT performs competitively with other knowledge-infused baselines and significantly outperforms baselines that use fine-tuning for domain-specific tasks.
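
The sketch below illustrates one plausible way a knowledge-guided bias and a data-vs-knowledge trade-off could enter a self-attention layer; it is not the paper's exact formulation. The names `KnowledgeBiasedSelfAttention`, `knowledge_bias`, and the learnable scalar `lambda_k` are assumptions introduced for illustration, as is the additive blending of data-driven and knowledge-derived attention scores.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeBiasedSelfAttention(nn.Module):
    """Single-head self-attention with an additive, knowledge-derived bias.

    Illustrative sketch only: `knowledge_bias` is assumed to be a precomputed
    (seq_len x seq_len) score matrix obtained from an external knowledge source
    (e.g., concept overlap between tokens), and `lambda_k` is a learnable scalar
    controlling the trade-off between learning from data and from knowledge.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.lambda_k = nn.Parameter(torch.tensor(0.5))  # data/knowledge trade-off
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor, knowledge_bias: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); knowledge_bias: (seq_len, seq_len)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        data_scores = q @ k.transpose(-2, -1) * self.scale
        # Blend data-driven attention scores with the knowledge-guided bias.
        scores = (1 - self.lambda_k) * data_scores + self.lambda_k * knowledge_bias
        attn = F.softmax(scores, dim=-1)
        return attn @ v


if __name__ == "__main__":
    x = torch.randn(2, 8, 64)   # toy batch of token embeddings
    kb = torch.rand(8, 8)       # hypothetical knowledge-derived score matrix
    layer = KnowledgeBiasedSelfAttention(64)
    print(layer(x, kb).shape)   # torch.Size([2, 8, 64])
```

In this reading, one such layer would be dedicated to each knowledge source, and inspecting the learned `lambda_k` values and the resulting attention maps would give a human-understandable view of how much each infused context contributes.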

APA Citation

Roy, K., Zi, Y., Narayanan, V., Gaur, M., & Sheth, A. (2022). KSAT: Knowledge-infused Self Attention Transformer - Integrating Multiple Domain-Specific Contexts. [Preprint]
