Task-adaptive Label Dependency Transfer for Few-shot Named Entity Recognition
Top 25% of 2023 papers
Abstract
Named Entity Recognition (NER), a crucial subtask of natural language processing (NLP), suffers from limited labeled samples (the few-shot setting). Meta-learning methods are widely used for few-shot NER, but existing methods overlook the importance of label dependency for NER, resulting in suboptimal performance. Moreover, applying meta-learning to label dependency learning faces a particular challenge: because label sets differ across domains, label dependencies cannot be transferred directly between them. In this paper, we propose the Task-adaptive Label Dependency Transfer (TLDT) method, which makes label dependency transferable and adapts effectively to new tasks from only a few samples. TLDT improves existing optimization-based meta-learning methods by learning a general initialization and an individual parameter update rule for label dependency. Extensive experiments show that TLDT achieves significant improvements over state-of-the-art methods.
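The core idea in the abstract — a shared initialization plus a learned, per-parameter update rule for label-dependency parameters — can be illustrated with a minimal MAML/Meta-SGD-style sketch. This is a hypothetical toy, not the authors' implementation: the transition matrix, the per-parameter step sizes, and the support-set "gradient" toward empirical label bigrams are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_labels = 4  # toy label set for one episode, e.g. O, PER, LOC, ORG

# Meta-learned quantities (illustrative): a shared initialization for the
# label-transition matrix, and a learned per-parameter step size that plays
# the role of an individual update rule (Meta-SGD style).
transition_init = rng.normal(scale=0.1, size=(num_labels, num_labels))
per_param_lr = np.full((num_labels, num_labels), 0.05)

def inner_adapt(transition, lr, grad_fn, steps=3):
    """Adapt the transition matrix to a new task with a few gradient steps,
    using the learned per-parameter step sizes instead of a global rate."""
    for _ in range(steps):
        transition = transition - lr * grad_fn(transition)
    return transition

# Toy support set: a handful of observed label bigrams from a few-shot task.
observed_bigrams = np.zeros((num_labels, num_labels))
for prev, nxt in [(0, 1), (1, 1), (1, 0), (0, 2)]:
    observed_bigrams[prev, nxt] += 1.0
target = observed_bigrams / observed_bigrams.sum()

def grad_fn(transition):
    # Gradient of a squared-error surrogate pulling the transition matrix
    # toward the empirical bigram statistics of the support set.
    return 2.0 * (transition - target)

adapted = inner_adapt(transition_init, per_param_lr, grad_fn)
# A few inner steps should move the transitions closer to this task's stats.
print(np.abs(adapted - target).mean(), np.abs(transition_init - target).mean())
```

In the full method, the outer meta-training loop would also update `transition_init` and `per_param_lr` across episodes; here only the inner, task-adaptation step is shown.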