A Multi-lingual Multi-task Architecture for Low-resource Sequence Labeling
Abstract
We propose a multi-lingual multi-task architecture for developing supervised sequence labeling models with a minimal amount of labeled data. In this new architecture, we combine various transfer models using two levels of parameter sharing. On the first level, we construct the basis of the architecture, which provides universal word representations and feature extraction capability to all models. On the second level, we adopt different parameter sharing strategies for different transfer schemes. This architecture proves particularly effective in low-resource settings, where fewer than 200 training sentences are available for the target task. Using Name Tagging as the target task, our approach achieves 4.3%-50.5% absolute F-score gains over the mono-lingual single-task baseline model.
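The two levels of sharing described above can be sketched as a shared base (word representations plus a feature extractor) reused by every language/task model, with separate output layers per task. This is a minimal illustrative sketch only; the dimensions, layer types, and task names below are assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_EMB, D_HID, N_TAGS = 16, 8, 9  # illustrative sizes (assumed)

# Level 1: universal parameters shared across all languages and tasks.
shared = {
    "embed": rng.normal(size=(100, D_EMB)),    # shared word representations
    "W_feat": rng.normal(size=(D_EMB, D_HID)), # shared feature extractor
}

# Level 2: task-specific parameters, e.g. one output layer per tag set.
# Task names here ("ner_en", "ner_tgt") are hypothetical.
tasks = {
    "ner_en":  {"W_out": rng.normal(size=(D_HID, N_TAGS))},  # resource-rich
    "ner_tgt": {"W_out": rng.normal(size=(D_HID, N_TAGS))},  # low-resource target
}

def forward(word_ids, task):
    """Score tags for a sentence: shared base -> task-specific output."""
    x = shared["embed"][word_ids]       # (T, D_EMB) shared embeddings
    h = np.tanh(x @ shared["W_feat"])   # (T, D_HID) shared features
    return h @ tasks[task]["W_out"]     # (T, N_TAGS) per-task tag scores

scores = forward(np.array([3, 14, 15]), "ner_tgt")
print(scores.shape)  # (3, 9)
```

Because gradients from every task flow into the shared base, the low-resource target task benefits from updates driven by the resource-rich auxiliary tasks, while the per-task output layers keep tag inventories independent.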