CacheSack: Theory and Experience of Google’s Admission Optimization for Datacenter Flash Caches
Abstract
This article describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google's datacenter flash caches: disk IO and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache algorithm. Production experiments showed that CacheSack significantly outperformed the prior static admission policies, yielding a 7.7% improvement in total cost of ownership as well as significant reductions in disk reads (9.5%) and flash wearout (17.8%).
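The knapsack step described above can be sketched in miniature. This is a hedged illustration, not the paper's actual formulation: it assumes each traffic category has a single "admit" option with an estimated disk-read saving and flash footprint (the real system considers multiple admission policies per category and solves a richer optimization), and it uses a simple greedy heuristic ranked by savings per flash byte.

```python
def assign_policies(categories, flash_budget):
    """Toy CacheSack-style policy assignment (illustrative only).

    categories: dict mapping category name -> (disk_reads_saved, flash_bytes)
                estimated for admitting that category; flash_bytes must be > 0.
                The implicit alternative for every category is "never admit",
                which saves nothing and consumes no flash.
    flash_budget: total flash footprint available.
    """
    # Rank categories by marginal benefit: disk reads saved per flash byte.
    ranked = sorted(categories.items(),
                    key=lambda kv: kv[1][0] / kv[1][1],
                    reverse=True)

    chosen, used = {}, 0
    for name, (saved, flash) in ranked:
        # Greedily admit the most cost-effective categories until the
        # flash budget is exhausted (fractional-knapsack style).
        if used + flash <= flash_budget:
            chosen[name] = "admit"
            used += flash
        else:
            chosen[name] = "never admit"
    return chosen
```

For example, with categories `{"hot": (100, 10), "warm": (50, 10), "cold": (10, 10)}` and a budget of 20, the greedy pass admits `hot` and `warm` and rejects `cold`, capturing the largest estimated disk-read savings within the flash footprint limit.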