Web-scale k-means clustering
2010, pp. 1177–1178
Abstract
We present two modifications to the popular k-means clustering algorithm to address the extreme requirements for latency, scalability, and sparsity encountered in user-facing web applications. First, we propose the use of mini-batch optimization for k-means clustering. This reduces computation cost by orders of magnitude compared to the classic batch algorithm while yielding significantly better solutions than online stochastic gradient descent. Second, we achieve sparsity with projected gradient descent, and give a fast ε-accurate projection onto the L1-ball. Source code is freely available: http://code.google.com/p/sofia-ml
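The two ideas in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (the paper targets sparse, high-dimensional data and gives a faster ε-accurate projection); the hypothetical `mini_batch_kmeans` below uses the per-center learning rate 1/(assignment count) described in the paper, and `project_l1_ball` uses the simple exact sorting-based projection rather than the paper's ε-accurate one.

```python
import numpy as np

def mini_batch_kmeans(X, k, batch_size=100, n_iters=100, seed=0):
    """Sketch of mini-batch k-means with per-center learning rates.

    Each iteration samples a mini-batch, caches nearest-center
    assignments, then takes a gradient step per example with
    step size 1 / (points assigned to that center so far).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Initialize centers by sampling k distinct data points.
    centers = X[rng.choice(n, size=k, replace=False)].astype(float)
    counts = np.zeros(k, dtype=int)
    for _ in range(n_iters):
        batch = X[rng.choice(n, size=batch_size)]
        # Squared Euclidean distances: (batch_size, k).
        d = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for x, c in zip(batch, assign):
            counts[c] += 1
            eta = 1.0 / counts[c]              # decaying per-center rate
            centers[c] = (1.0 - eta) * centers[c] + eta * x
    return centers

def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the L1 ball of radius z.

    Exact O(d log d) sorting method; the paper describes a faster
    epsilon-accurate projection, which this sketch does not implement.
    """
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    # Soft-threshold: shrinks small coordinates to exact zero (sparsity).
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

For sparsity-constrained clustering, each center would be projected onto the L1 ball after its update; note the projection zeroes out small coordinates, e.g. `project_l1_ball(np.array([3.0, 4.0]), 1.0)` yields `[0.0, 1.0]`.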
Related Papers
- Stochastic Gradient Descent (2015), 47 citations
- Training a Two-Layer ReLU Network Analytically (2023), 7 citations
- Accelerating Extreme Search Based on Natural Gradient Descent with Beta Distribution (2021), 4 citations
- Computational Complexity of Gradient Descent Algorithm (2021), 3 citations
- Training Neural Networks Using Predictor-Corrector Gradient Descent (2018)