SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings
Abstract
Contextualised embeddings such as BERT have become de facto state-of-the-art references in many NLP applications, thanks to their impressive performance. However, their opaqueness makes their behaviour hard to interpret. SLICE is a hybrid model that combines supersense labels with contextual embeddings. We introduce a weakly supervised method to learn interpretable embeddings from raw corpora and small lists of seed words. Our model represents both a word and its context as embeddings in the same compact space, whose dimensions correspond to interpretable supersenses. We assess the model on a supersense-tagging task for French nouns. The small amount of supervision required makes it particularly well suited to low-resource scenarios. Thanks to its interpretability, we perform linguistic analyses of the predicted supersenses in terms of the input word and context representations.
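The core idea of interpretable dimensions can be illustrated with a toy sketch. This is not the authors' implementation: the supersense inventory, vectors, and the convex word/context combination below are all hypothetical, chosen only to show how tagging reduces to reading off the strongest dimension when each dimension is a supersense.

```python
import numpy as np

# Hypothetical supersense inventory (the paper targets French nouns;
# these labels and all vectors below are illustrative assumptions).
SUPERSENSES = ["Person", "Animal", "Artifact", "Food", "Location"]

def tag_supersense(word_vec, context_vec, alpha=0.5):
    """Mix word and context scores (a modeling assumption) and return
    the supersense whose dimension has the highest combined activation."""
    combined = alpha * word_vec + (1 - alpha) * context_vec
    return SUPERSENSES[int(np.argmax(combined))]

# "avocat" (French: lawyer / avocado) is ambiguous in isolation,
# but a food-related context shifts the prediction toward Food.
word = np.array([0.4, 0.0, 0.0, 0.4, 0.2])
food_context = np.array([0.1, 0.0, 0.1, 0.7, 0.1])
print(tag_supersense(word, food_context))  # prints "Food"
```

Because word and context live in the same space, the same read-off works for either alone, which is what makes the per-dimension scores directly inspectable.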