Competition-level code generation with AlphaCode
Top 10% of 2022 papers
Abstract
Programming is a powerful and ubiquitous problem-solving tool. Systems that can assist programmers or even generate programs themselves could make programming more productive and accessible. Recent transformer-based neural network models show impressive code generation abilities yet still perform poorly on more complex tasks requiring problem-solving skills, such as competitive programming problems. Here, we introduce AlphaCode, a system for code generation that achieved an average ranking in the top 54.3% in simulated evaluations on recent programming competitions on the Codeforces platform. AlphaCode solves problems by generating millions of diverse programs using specially trained transformer-based networks and then filtering and clustering those programs to a maximum of just 10 submissions. This result marks the first time an artificial intelligence system has performed competitively in programming competitions.
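The generate-filter-cluster pipeline described above can be sketched in miniature. This is not AlphaCode's implementation: the paper's system samples millions of programs from transformer models and executes them in a sandbox, while the sketch below assumes candidates are already plain Python callables; all names (`filter_and_cluster`, `probe_inputs`, and so on) are illustrative.

```python
from collections import defaultdict

def filter_and_cluster(candidates, example_tests, probe_inputs, max_submissions=10):
    """Keep candidates that pass the example tests, group the survivors
    by their behavior on probe inputs, and return one program per
    cluster, largest clusters first, up to max_submissions."""
    # Filtering: discard any program that fails a provided example test.
    survivors = [c for c in candidates
                 if all(c(x) == y for x, y in example_tests)]
    # Clustering: programs producing identical outputs on the probe
    # inputs are treated as behaviorally equivalent.
    clusters = defaultdict(list)
    for c in survivors:
        signature = tuple(c(x) for x in probe_inputs)
        clusters[signature].append(c)
    # Submit one representative from each cluster, largest first, so the
    # limited submission budget covers as many distinct behaviors as possible.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [group[0] for group in ranked[:max_submissions]]
```

Clustering earns its keep here: many sampled programs are near-duplicates, so spending the 10-submission budget on behaviorally distinct programs raises the chance that at least one is correct.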