Will They Like This? Evaluating Code Contributions with Language Models
Abstract
Popular open-source software projects receive and review contributions from a diverse array of developers, many of whom have little to no prior involvement with the project. A recent survey reported that reviewers consider conformance to the project's code style to be one of the top priorities when evaluating code contributions on GitHub. We propose to quantitatively evaluate the existence and effects of this phenomenon. To this end we use language models, which have been shown to accurately capture stylistic aspects of code. We find that rejected change sets contain code significantly less similar to the project than accepted ones; furthermore, the less similar change sets are more likely to be subject to thorough review. Armed with these results, we further investigate whether new contributors learn to conform to the project style, and find that experience is positively correlated with conformance to the project's code style.
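The core idea can be illustrated with a minimal sketch (not the paper's implementation): train a simple n-gram language model on a project's token stream, then score a change set by its per-token cross-entropy under that model; lower cross-entropy means the change is stylistically more similar to the project. All names, the bigram order, and the add-one smoothing below are assumptions for illustration only.

```python
# Hedged sketch: scoring code similarity to a project with a bigram
# language model. Not the paper's method; order and smoothing are assumed.
from collections import Counter
import math

def train_bigram(tokens):
    """Collect bigram and unigram counts over a project's token stream."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    return bigrams, unigrams

def cross_entropy(tokens, model, vocab_size):
    """Average negative log2 probability per bigram, add-one smoothed."""
    bigrams, unigrams = model
    pairs = list(zip(tokens, tokens[1:]))
    total = 0.0
    for a, b in pairs:
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        total -= math.log2(p)
    return total / len(pairs)

# Toy "project" corpus and two hypothetical change sets.
project = "if ( x == null ) { return ; }".split() * 20
model = train_bigram(project)
vocab = len(set(project)) + 2  # smoothing vocabulary size (assumption)

conforming = "if ( y == null ) { return ; }".split()
divergent = "while true do print x end".split()

# The conforming change should score lower (more project-like).
print(cross_entropy(conforming, model, vocab)
      < cross_entropy(divergent, model, vocab))
```

A change set drawn from the project's own idiom shares most bigrams with the training stream, so its smoothed probabilities stay high and its cross-entropy stays low, which is the signal the abstract uses to separate accepted from rejected contributions.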
Related Papers
- An empirical study of the impact of modern code review practices on software quality (2015), 313 citations
- Modern code reviews in open-source projects: which problems do they fix? (2014), 252 citations
- Automatically Recommending Peer Reviewers in Modern Code Review (2015), 172 citations
- Automatic Code Review by Learning the Revision of Source Code (2019), 45 citations
- Towards demystifying dimensions of source code embeddings (2020), 3 citations