Lasagne: First release.
Abstract
Core contributors, in alphabetical order:

- Eric Battenberg (@ebattenberg)
- Sander Dieleman (@benanne)
- Daniel Nouri (@dnouri)
- Eben Olson (@ebenolson)
- Aäron van den Oord (@avdnoord)
- Colin Raffel (@craffel)
- Jan Schlüter (@f0k)
- Søren Kaae Sønderby (@skaae)

Extra contributors, in chronological order:

- Daniel Maturana (@dimatura): documentation, cuDNN layers, LRN
- Jonas Degrave (@317070): get_all_param_values() fix
- Jack Kelly (@JackKelly): help with recurrent layers
- Gábor Takács (@takacsg84): support broadcastable parameters in lasagne.updates
- Diogo Moitinho de Almeida (@diogo149): MNIST example fixes
- Brian McFee (@bmcfee): MaxPool2DLayer fix
- Martin Thoma (@MartinThoma): documentation
- Jeffrey De Fauw (@JeffreyDF): documentation, ADAM fix
- Michael Heilman (@mheilman): NonlinearityLayer, lasagne.random
- Gregory Sanders (@instagibbs): documentation fix
- Jon Crall (@erotemic): check for non-positive input shapes
- Hendrik Weideman (@hjweide): set_all_param_values() test, MaxPool2DCCLayer fix
- Kashif Rasul (@kashif): ADAM simplification
- Peter de Rivaz (@peterderivaz): documentation fix