Label-Attentive Hierarchical Attention Network for Text Classification
Abstract
The performance of text classification largely hinges on feature extraction from text through representation learning. Recently, many deep learning models have achieved state-of-the-art results for text classification. However, these approaches often ignore global clues, such as label information, which results in partial semantic loss. This paper proposes a text classification framework called the Label-Attentive Hierarchical Attention Network (LAHAN). First, to better exploit global clues and avoid semantic loss between text and labels, LAHAN generates label-attentive embeddings by introducing joint features of text words and label information. Then a hierarchical architecture is adopted to utilize the label information at both the word level and the sentence level, which improves the extracted hierarchical text representation. Finally, the classification layer of our model predicts the category based on the full-text label-attentive representation. Experimental results on several benchmark datasets demonstrate that our proposed method improves accuracy over other state-of-the-art baselines.
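The architecture described above can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea, not the paper's implementation: the fusion of word and label features by addition, the learned query vector for attentive pooling, and all dimensions are assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_attentive_embedding(h, labels):
    """Fuse representations with label embeddings via attention.

    h:      (n, d) word (or sentence) embeddings
    labels: (num_labels, d) label embeddings
    Returns (n, d) label-attentive representations.
    """
    scores = h @ labels.T            # (n, num_labels) word-label compatibility
    attn = softmax(scores, axis=-1)  # attention weights over labels
    label_context = attn @ labels    # (n, d) per-word label context
    return h + label_context         # joint word/label feature (assumed fusion)

def attentive_pool(h, query):
    """Pool a sequence (n, d) into one vector (d,) with a query vector."""
    w = softmax(h @ query)           # (n,) attention weights
    return w @ h

rng = np.random.default_rng(0)
d, num_labels = 8, 4
sentences = [rng.normal(size=(5, d)), rng.normal(size=(7, d))]  # two sentences
labels = rng.normal(size=(num_labels, d))
query = rng.normal(size=d)          # shared context query (assumption)

# Word level: label-attentive words pooled into sentence vectors.
sent_vecs = np.stack([
    attentive_pool(label_attentive_embedding(s, labels), query)
    for s in sentences
])
# Sentence level: label-attentive sentences pooled into a document vector.
doc = attentive_pool(label_attentive_embedding(sent_vecs, labels), query)
# Classification layer: score the document against each label.
logits = doc @ labels.T
pred = int(np.argmax(logits))
```

The same label-attention routine is reused at both levels, mirroring the hierarchical word-then-sentence design; in the actual model these components would be learned jointly rather than drawn at random.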