Summarizing Source Code using a Neural Attention Model
Top 1% of 2016 papers by citations
Abstract
High-quality source code is often paired with high-level summaries of the computation it performs, for example in code documentation or in descriptions posted in online forums. Such summaries are extremely useful for applications such as code search, but they are expensive to author manually and hence exist for only a small fraction of the code that is produced. In this paper, we present the first completely data-driven approach for generating high-level summaries of source code. Our model, CODE-NN, uses Long Short Term Memory (LSTM) networks with attention to produce sentences that describe C# code snippets and SQL queries. CODE-NN is trained on a new corpus that is automatically collected from StackOverflow, which we release. Experiments demonstrate strong performance on two tasks.
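The abstract does not spell out the attention mechanism, but the core idea in attention-based summarization models of this kind is that, at each decoding step, the decoder scores every source-code token embedding against its current hidden state and reads out a weighted "context" vector. Below is a minimal NumPy sketch of one such dot-product attention step; the function and variable names (`attention_step`, `token_embs`, `h_t`) are illustrative assumptions, not the paper's actual implementation, and CODE-NN's exact scoring function may differ.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(h_t, token_embs):
    """One attention step (illustrative, not CODE-NN's exact form):
    score each code-token embedding against the decoder state h_t,
    then return the attention-weighted context vector."""
    scores = token_embs @ h_t          # dot-product scores, shape (n_tokens,)
    alphas = softmax(scores)           # attention distribution over code tokens
    context = alphas @ token_embs      # convex combination of token embeddings
    return context, alphas

# Toy example: 5 code tokens, embedding dimension 8.
rng = np.random.default_rng(0)
token_embs = rng.normal(size=(5, 8))   # embeddings of the code snippet's tokens
h_t = rng.normal(size=8)               # current decoder (LSTM) hidden state
context, alphas = attention_step(h_t, token_embs)
```

In a full model, `context` would be combined with `h_t` to predict the next word of the natural-language summary, and the LSTM would then advance to the next step.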
Related Papers
- Multilingual Summarization Evaluation without Human Models (2010)
- Experiences with and Reflections on Text Summarization Tools (2009)
- On the Applications of the Experience Summarization in Modern Teaching and Research (2000)
- Dynamic Summarization: Another Stride Towards Summarization (2007)