Extracting Syntactic Trees from Transformer Encoder Self-Attentions
2018, pp. 347–349
Abstract
This is a work in progress on extracting sentence tree structures from the encoder's self-attention weights when translating into another language with the Transformer neural network architecture. We visualize the extracted structures and discuss their characteristics with respect to existing syntactic theories and annotations.
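One common way to turn pairwise attention weights into a tree, sketched below purely for illustration (the paper does not specify its extraction algorithm, and the toy matrix and token names are invented), is to treat the symmetrized attention matrix as edge weights and compute a maximum spanning tree over the tokens, here with Prim's algorithm:

```python
# Illustrative sketch, not the paper's exact method: build an undirected
# tree over tokens by running Prim's maximum spanning tree on a
# symmetrized self-attention matrix.

def attention_to_tree(attn, tokens):
    """attn: square list-of-lists of attention weights; returns tree edges."""
    n = len(tokens)
    # Symmetrize so the edge weight between i and j is direction-independent.
    w = [[(attn[i][j] + attn[j][i]) / 2 for j in range(n)] for i in range(n)]
    in_tree = {0}  # grow the tree from the first token
    edges = []
    while len(in_tree) < n:
        # Add the heaviest edge crossing the cut (tree -> non-tree).
        i, j = max(
            ((a, b) for a in in_tree for b in range(n) if b not in in_tree),
            key=lambda e: w[e[0]][e[1]],
        )
        edges.append((tokens[i], tokens[j]))
        in_tree.add(j)
    return edges

# Toy 4-token "attention" matrix (values invented for the example).
attn = [
    [0.1, 0.6, 0.2, 0.1],
    [0.5, 0.1, 0.3, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.2, 0.6, 0.1],
]
tokens = ["The", "cat", "sat", "down"]
print(attention_to_tree(attn, tokens))
# → [('The', 'cat'), ('cat', 'sat'), ('sat', 'down')]
```

Symmetrizing discards the head/dependent direction; recovering a directed dependency tree would instead call for a maximum spanning arborescence (e.g. Chu–Liu/Edmonds).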