High performance RDMA-based design of HDFS over InfiniBand
Top 1% of 2012 papers
Abstract
Hadoop Distributed File System (HDFS) acts as the primary storage of Hadoop and has been adopted by reputed organizations (Facebook, Yahoo!, etc.) due to its portability and fault tolerance. The existing implementation of HDFS uses the Java socket interface for communication, which delivers suboptimal performance in terms of latency and throughput. For data-intensive applications, network performance becomes a key component as the amount of data being stored and replicated in HDFS increases. In this paper, we present a novel design of HDFS using Remote Direct Memory Access (RDMA) over InfiniBand via JNI interfaces. Experimental results show that, for 5 GB HDFS file writes, the new design reduces the communication time by 87% and 30% over 1 Gigabit Ethernet (1GigE) and IP-over-InfiniBand (IPoIB), respectively, on a QDR platform (32 Gbps). For HBase, the performance of the Put operation is improved by 26% with our design. To the best of our knowledge, this is the first design of HDFS over InfiniBand networks.
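Since HDFS is written in Java while RDMA libraries are native, the abstract's "via JNI interfaces" implies a Java-to-native bridge. A minimal sketch of what such a bridge could look like is below; the class, method, and library names are hypothetical illustrations, not the paper's actual implementation, and the sketch falls back to the existing socket path when the native library is absent:

```java
// Hypothetical sketch of a JNI bridge from HDFS's Java code to a
// native RDMA library. All names (RdmaChannel, librdmahdfs, the
// native methods) are illustrative assumptions, not the paper's API.
public class RdmaChannel {
    private static final boolean NATIVE_AVAILABLE = loadNative();

    private static boolean loadNative() {
        try {
            // Hypothetical native library implementing the RDMA data path.
            System.loadLibrary("rdmahdfs");
            return true;
        } catch (UnsatisfiedLinkError e) {
            // Library not present: fall back to the Java socket path.
            return false;
        }
    }

    // Native entry points that would be implemented in C against the
    // InfiniBand verbs API (hypothetical signatures).
    private static native long rdmaConnect(String host, int port);
    private static native int rdmaWrite(long connection, byte[] buffer, int length);

    /** True only when the native RDMA library was successfully loaded. */
    public static boolean isRdmaEnabled() {
        return NATIVE_AVAILABLE;
    }

    public static void main(String[] args) {
        System.out.println("RDMA path enabled: " + isRdmaEnabled());
    }
}
```

The try/catch around `System.loadLibrary` mirrors a common pattern for optional native transports: the same Java code base runs unchanged on commodity Ethernet clusters and only switches to the RDMA path when the native library is deployed.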
Related Papers
- → High performance RDMA-based MPI implementation over InfiniBand (2003), 354 citations
- → High Performance RDMA-Based MPI Implementation over InfiniBand (2004), 160 citations
- → High performance RDMA-based MPI implementation over InfiniBand (2003), 8 citations
- → LW-RDMA (2015)
- → D-RDMALib: InfiniBand-based RDMA Library for Distributed Cluster Applications (2023)