Performance Evaluation of Deep Learning Tools in Docker Containers
Abstract
With the success of deep learning techniques across a broad range of application domains, many deep learning software frameworks have been developed and are updated frequently to adapt to new hardware features and software libraries, which poses a significant challenge for end users and system administrators. To address this problem, container techniques are widely used to simplify the deployment and management of deep learning software. However, it remains unknown whether container techniques impose any performance penalty on deep learning applications. The purpose of this work is to systematically evaluate the impact of Docker containers on the performance of deep learning applications. We first benchmark the performance of system components (I/O, CPU, and GPU) in a Docker container and on the host system, and compare the results to determine whether there is any difference. Our results show that computationally intensive jobs, whether running on the CPU or the GPU, incur only a small overhead, indicating that Docker containers are suitable for deep learning programs. We then evaluate the performance of several popular deep learning tools deployed in a Docker container and on the host system, and find that the Docker container causes no noticeable drawback when running those tools. Encapsulating a deep learning tool in a container is therefore a feasible solution. © 2017 IEEE.
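The host-versus-container comparison described in the abstract can be sketched as a small CPU micro-benchmark run identically in both environments. The workload below (a naive pure-Python matrix multiply) is an illustrative assumption, not one of the paper's actual benchmarks; the idea is simply to execute the same script on the host and inside `docker run` and compare wall-clock times.

```python
import time

def cpu_benchmark(n=200):
    """Naive O(n^3) matrix multiply used as a fixed CPU workload.
    Run this same script on the host and inside a container
    (e.g. `docker run --rm -v "$PWD":/w python:3 python /w/bench.py`)
    and compare the reported elapsed times."""
    a = [[float(i * n + j) for j in range(n)] for i in range(n)]
    b = [[float(j * n + i) for j in range(n)] for i in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    # c[0][0] serves as a checksum that the work was actually done.
    return elapsed, c[0][0]

if __name__ == "__main__":
    elapsed, check = cpu_benchmark()
    print(f"elapsed: {elapsed:.3f}s, checksum: {check}")
```

A near-identical elapsed time in both runs is what the paper's finding of "small overhead" for compute-bound jobs would predict; I/O- and GPU-bound workloads need separate benchmarks (the GPU case additionally requires GPU passthrough support in the container runtime).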