Application-Transparent Checkpoint/Restart for MPI Programs over InfiniBand
Abstract
Ultra-scale computer clusters with high-speed interconnects, such as InfiniBand, are being widely deployed for their excellent performance and cost effectiveness. However, the failure rate on these clusters increases along with their growing number of components, so it becomes critical for such systems to be equipped with fault-tolerance support. In this paper, we present our design and implementation of a checkpoint/restart framework for MPI programs running over InfiniBand clusters. Our design enables low-overhead, application-transparent checkpointing. It uses a coordinated protocol to save the current state of the whole MPI job to reliable storage, which allows users to perform rollback recovery if the system later runs into a faulty state. Our solution has been incorporated into MVAPICH2, an open-source, high-performance MPI-2 implementation over InfiniBand. Performance evaluation of this implementation has been carried out using the NAS benchmarks, the HPL benchmark, and a real-world application called GROMACS. Experimental results indicate that in our design the overhead of taking checkpoints is low, and the performance impact of checkpointing applications periodically is insignificant. For example, the time to checkpoint GROMACS is less than 0.3% of its execution time, and its performance decreases by only 4% with checkpoints taken every minute. To the best of our knowledge, this work is the first report of checkpoint/restart support for MPI over InfiniBand clusters in the literature.
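The coordinated protocol mentioned in the abstract can be illustrated at a high level: all processes first drain in-flight messages, then each saves its local state, so the saved states together form a globally consistent cut. The following is a minimal, purely illustrative Python simulation of that idea; it is not the MVAPICH2/BLCR implementation described in the paper, and all names (`Process`, `coordinated_checkpoint`, `restart`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A toy stand-in for one MPI process (hypothetical, for illustration)."""
    rank: int
    state: int = 0
    inbox: list = field(default_factory=list)   # simulated in-flight messages
    checkpoint: object = None

def coordinated_checkpoint(procs):
    # Phase 1: suspend new communication and drain in-flight messages,
    # so no message is "in the channel" when states are saved.
    for p in procs:
        while p.inbox:
            p.state += p.inbox.pop()
    # Phase 2: every process saves its local state; together these
    # snapshots form a consistent global checkpoint.
    for p in procs:
        p.checkpoint = {"rank": p.rank, "state": p.state}
    return [p.checkpoint for p in procs]

def restart(procs, checkpoints):
    # Rollback recovery: restore each process from its saved snapshot.
    for p, c in zip(procs, checkpoints):
        p.state = c["state"]
```

In the real system, phase 1 corresponds to flushing and tearing down InfiniBand channels before the per-process checkpoint is taken, and phase 2 to writing each process image to reliable storage.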