Turing: A language for flexible probabilistic inference
Abstract
Probabilistic programming is becoming an attractive approach to probabilistic machine learning. By relieving researchers of the tedious burden of hand-deriving inference algorithms, it not only enables the development of more accurate and interpretable models but also encourages reproducible research. However, successful probabilistic programming systems require flexible, generic and efficient inference engines. In this work, we present a system called Turing for flexible, composable probabilistic inference. Turing has an intuitive modelling syntax and supports a wide range of sampling-based inference algorithms. Most importantly, Turing inference is composable: it combines Markov chain sampling operations on subsets of model variables, e.g. using a combination of a Hamiltonian Monte Carlo (HMC) engine and a particle Gibbs (PG) engine. This composable inference engine allows the user to easily switch between black-box style inference methods such as HMC and customized inference methods. Our aim is to present Turing and its composable inference engines to the community and to encourage other researchers to build on this system to help advance the field of probabilistic machine learning.
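As a sketch of the composable inference the abstract describes, the following Turing.jl fragment defines a simple Gaussian model and then samples it with a Gibbs composition in which HMC updates the continuous mean while particle Gibbs updates the variance. The model and the sampler arguments (step size, leapfrog count, particle count) are illustrative values, and exact constructor signatures vary across Turing.jl versions.

```julia
using Turing

# A Gaussian model with unknown mean m and variance s².
@model function gdemo(x)
    s² ~ InverseGamma(2, 3)          # prior on the variance
    m  ~ Normal(0, sqrt(s²))         # prior on the mean
    for i in eachindex(x)
        x[i] ~ Normal(m, sqrt(s²))   # likelihood of each observation
    end
end

# Composable inference: within each Gibbs sweep, HMC updates m
# and particle Gibbs updates s². The numeric arguments below
# (0.01 step size, 5 leapfrog steps, 20 particles) are illustrative.
chain = sample(gdemo([1.5, 2.0]),
               Gibbs(HMC(0.01, 5, :m), PG(20, :s²)),
               1_000)
```

Swapping either component sampler (e.g. replacing HMC with NUTS for the continuous variable) changes one argument to `Gibbs`, which is the sense in which inference in Turing is composable.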
Related Papers
- Stan: A Probabilistic Programming Language (2017), 7,165 citations
- Julia: A Fresh Approach to Numerical Computing (2017), 6,106 citations
- The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo (2014)
- Declarative Probabilistic Programming with Datalog (2014), 180 citations
- A New Approach to Probabilistic Programming Inference (2014), 215 citations