CAPI-Flash Accelerated Persistent Read Cache for Apache Cassandra
Top 25% of 2018 papers
Abstract
In real-world NoSQL deployments, users must trade off CPU, memory, I/O bandwidth, and storage space to meet their performance and efficiency goals. Data compression is a vital component for improving storage efficiency, but reading compressed data increases response time, so compressed data stores rely heavily on memory as a cache to speed up read operations. Because large DRAM capacities are expensive, however, NoSQL databases have become costly to deploy and hard to scale. In this work, we present a persistent caching mechanism for Apache Cassandra built on a high-throughput, low-latency FPGA-based NVMe flash accelerator (CAPI-Flash), replacing Cassandra's in-memory cache. Because flash is dramatically less expensive per byte than DRAM, our caching mechanism gives Apache Cassandra a large caching layer at much lower cost. Experimental results show that for read-intensive workloads, our caching layer improves throughput by up to 85% and reduces CPU usage by 25% compared to default Cassandra.
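To make the core idea concrete, the sketch below illustrates a flash-backed read cache in which only a small key-to-location index lives in DRAM while the cached values themselves reside on "flash" (here simulated with an ordinary file). This is a minimal illustration under stated assumptions, not the paper's CAPI-Flash implementation; the class and method names are hypothetical.

```python
import tempfile

class FlashReadCache:
    """Toy flash-backed read cache: DRAM holds only an index of
    key -> (offset, length); values are stored on 'flash', which an
    ordinary file stands in for here. Illustrative only, not the
    paper's CAPI-Flash design."""

    def __init__(self, path):
        self._f = open(path, "w+b")   # file simulating the NVMe flash device
        self._index = {}              # key -> (offset, length), kept in DRAM
        self._end = 0                 # append cursor on the flash log

    def put(self, key, value: bytes):
        # Append-only writes, since sequential writes are cheap on flash.
        self._f.seek(self._end)
        self._f.write(value)
        self._f.flush()
        self._index[key] = (self._end, len(value))
        self._end += len(value)

    def get(self, key):
        loc = self._index.get(key)
        if loc is None:
            return None               # cache miss: caller reads the SSTable
        offset, length = loc
        self._f.seek(offset)
        return self._f.read(length)   # cache hit served from "flash"

# Tiny usage demo with a temp file standing in for the flash device.
cache = FlashReadCache(tempfile.mkstemp()[1])
cache.put("row1", b"hello")
assert cache.get("row1") == b"hello"  # hit served from "flash"
assert cache.get("row9") is None      # miss: fall back to normal read path
```

The design choice mirrored here is the one the abstract motivates: DRAM is spent only on the small index, while the bulk of the cached data sits on the much cheaper flash tier.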