Empirical Performance Evaluation of Communication Libraries for Multi-GPU based Distributed Deep Learning in a Container Environment
Hyeonseong Choi, Youngrang Kim, Jaehwan Lee, Yoonhee Kim
UCI I410-ECN-0102-2022-500-000583449

Most cloud services today are provided in Docker container environments. However, little research has evaluated the performance of communication libraries for multi-GPU distributed deep learning in a Docker container environment. In this paper, we propose an efficient communication architecture for multi-GPU deep learning in a Docker container environment by evaluating the performance of various communication libraries. We compare the performance of the parameter server architecture and the All-reduce architecture, the two typical distributed deep learning architectures. We further analyze two multi-GPU resource allocation policies: allocating a single GPU to each Docker container and allocating multiple GPUs to each Docker container. We also examine the scalability of collective communication by increasing the number of GPUs from one to four. In our experiments, we compare OpenMPI and MPICH, two representative open-source MPI libraries, with NCCL, NVIDIA's collective communication library for multi-GPU settings. In the parameter server architecture, we show that using CUDA-aware OpenMPI with multiple GPUs per Docker container reduces communication latency by up to 75%. We also show that using NCCL in the All-reduce architecture reduces communication latency by up to 93% compared to the other libraries.
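As an illustration of the operation the paper measures, the following is a minimal sketch of a gradient all-reduce over GPU buffers, assuming a CUDA-aware OpenMPI build; the buffer size, the rank-to-GPU mapping, and the in-place reduction are illustrative assumptions, not the paper's exact experimental configuration.

/* Minimal sketch: CUDA-aware MPI all-reduce on a device buffer.
 * Assumes an OpenMPI build with CUDA support, so MPI_Allreduce can
 * take GPU pointers directly; a non-CUDA-aware build would need to
 * stage the data through host memory first. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nranks, ndev;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* One GPU per rank (the single-GPU-per-container policy); with
     * multiple GPUs per container, several ranks would share a node. */
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    const int n = 1 << 20;  /* 1M floats as a stand-in for a gradient buffer */
    float *d_grad;
    cudaMalloc((void **)&d_grad, n * sizeof(float));
    cudaMemset(d_grad, 0, n * sizeof(float));

    /* Sum gradients across all ranks directly on the device buffers. */
    MPI_Allreduce(MPI_IN_PLACE, d_grad, n, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("all-reduce over %d ranks completed\n", nranks);

    cudaFree(d_grad);
    MPI_Finalize();
    return 0;
}

Launched with, for example, mpirun -np 4, one MPI rank per Docker container corresponds to the single-GPU-per-container policy; the NCCL counterpart would perform the same reduction with ncclAllReduce on the same device buffer.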

1. Introduction
2. Background
3. Related Work
4. Experimental Environment
5. Experimental Results for the Parameter Server Architecture
6. Experimental Results for the All-reduce Architecture
7. Summary of Experimental Results
8. Conclusion
References