Model Priming with Triplet Loss for Few-Shot Emotion Classification in Text
Abstract
Automatically detecting emotions in text is a challenging task, especially when little supervised training data is available. We therefore attempt to boost model performance in few-shot experiments by incorporating emotion label information. We explore triplet loss for emotion detection from text, using it to cluster text representations that express the same emotion in the embedding space before learning the classification task. We show that this method, which we call emotion priming, outperforms baseline results, multi-task fine-tuning with cross-entropy loss, and an existing label infusion method that adds label words to the input sequence to alter the model’s attention weights. An analysis of the emotion class representations after priming shows that the observed performance gain can be attributed to the redistribution of the text representations. The results also indicate that this method is robust on datasets that contain many classes and few examples per class. In contrast to earlier work, we found that the label infusion method leads to a substantial performance decrease compared to the baseline model, especially for datasets with more complex label schemes. Finally, we report results for zero-shot experiments with ChatGPT as a larger alternative to smaller fine-tuned language models, and show that it fails to produce accurate results, indicating the complexity of the studied task.
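The priming objective described in the abstract relies on triplet loss, which pulls embeddings of texts with the same emotion label together while pushing embeddings of texts with different labels apart. A minimal sketch of that loss on toy 2-D embeddings is shown below; the embedding vectors, example sentences, and margin value are illustrative assumptions, not values from the paper.

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: the anchor should be closer to the positive
    # (same emotion) than to the negative (different emotion) by at least
    # the margin; otherwise a positive loss is incurred.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy 2-D "text embeddings" (hypothetical values for illustration)
anchor   = [1.0, 0.0]    # e.g. "I am thrilled"    -> joy
positive = [0.9, 0.1]    # e.g. "What a great day" -> joy
negative = [0.5, -0.5]   # e.g. "This is awful"    -> anger

print(round(triplet_loss(anchor, positive, negative), 4))  # → 0.4343
```

Minimizing this loss over many such triplets redistributes the text representations so that same-emotion examples cluster in the embedding space, which is the effect the abstract credits for the performance gain.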