A Syntactic Neural Model for General-Purpose Code Generation
2017, pp. 440–450
Top 1% most-cited papers of 2017
Abstract
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
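To make the abstract's central idea concrete, below is a minimal sketch of grammar-constrained generation: the output is built by repeatedly expanding the left-most open nonterminal with a production rule, so every result is syntactically well-formed by construction. The toy grammar and all names here (`GRAMMAR`, `choose_rule`, `generate`) are hypothetical illustrations, not the authors' implementation; the paper's actual model scores rule applications with a neural decoder over Python's abstract grammar, conditioned on the natural language description.

```python
# Illustrative sketch of grammar-constrained decoding (hypothetical names,
# not the paper's code): derivations expand the left-most nonterminal with
# a production rule, so outputs are syntactically valid by construction.
import random

# Toy grammar: nonterminal -> list of productions (tuples of symbols).
# Symbols that appear as keys are nonterminals; everything else is a terminal.
GRAMMAR = {
    "Stmt": [("Expr",), ("if", "Expr", ":", "Stmt")],
    "Expr": [("Name",), ("Expr", "+", "Expr"), ("Number",)],
    "Name": [("x",), ("y",)],
    "Number": [("1",), ("2",)],
}

def is_nonterminal(sym):
    return sym in GRAMMAR

def choose_rule(nonterminal, rules):
    # Stand-in for the neural model: the paper scores candidate rules with
    # an LSTM decoder conditioned on the NL input; here we pick at random.
    return random.choice(rules)

def generate(symbol="Stmt", depth=0, max_depth=5):
    """Expand the frontier nonterminal until only terminals remain."""
    if not is_nonterminal(symbol):
        return [symbol]
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        # Force the shortest production to guarantee termination.
        rules = [min(rules, key=len)]
    rule = choose_rule(symbol, rules)
    tokens = []
    for sym in rule:
        tokens.extend(generate(sym, depth + 1, max_depth))
    return tokens

if __name__ == "__main__":
    print(" ".join(generate()))  # e.g. "if x + 1 : y"
```

Because invalid expansions are never offered to the rule-selection step, the search space shrinks to syntactically legal programs only, which is the prior-knowledge advantage the abstract describes.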
Related Papers
- Improving Code Summarization with Block-wise Abstract Syntax Tree Splitting (2021), 65 citations
- Homologous detection based on text, Token and abstract syntax tree comparison (2010), 4 citations
- Syntax Analysis: The Left-Most-Derivation-and-Reduction Trees and its Compare with the LR Parsing Methods (2014)
- A Method for Parsing GCC abstract Syntax Tree (2004)