by Jinse Kwon on 2019-09-19 16:06:25
Date : 2019. 09. 04 (Wed) 13:30
Location : EB5, Room 533
Presenter : Jinse Kwon
Title : Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Authors : Tal Ben-Nun, Torsten Hoefler (ETH Zurich, Zurich, Switzerland)
Abstract : Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
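To make the survey's core topic concrete, below is a minimal sketch (not taken from the paper) of synchronous data-parallel SGD, one of the parallelization strategies the survey reviews: each simulated worker computes a gradient on its own shard of the minibatch, and the per-worker gradients are averaged (the effect of an all-reduce) before a single update to the replicated model. The toy regression problem and names such as `n_workers` and `lr` are illustrative assumptions, not code from the paper.

```python
# Sketch of synchronous data-parallel SGD on a toy linear-regression problem.
# Workers are simulated in a loop; the gradient average stands in for the
# all-reduce step a real distributed implementation would perform.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = X @ w_true + noise
X = rng.normal(size=(512, 8))
w_true = rng.normal(size=8)
y = X @ w_true + 0.01 * rng.normal(size=512)

def grad(w, Xs, ys):
    """Mean-squared-error gradient on one worker's data shard."""
    return 2.0 * Xs.T @ (Xs @ w - ys) / len(ys)

n_workers, lr = 4, 0.05
w = np.zeros(8)  # model replica shared by all workers

for step in range(200):
    # Data parallelism: each worker gets one shard of the minibatch.
    shards_X = np.array_split(X, n_workers)
    shards_y = np.array_split(y, n_workers)
    grads = [grad(w, Xs, ys) for Xs, ys in zip(shards_X, shards_y)]
    # Synchronous step: average the per-worker gradients (all-reduce),
    # then apply one SGD update to the shared model.
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

The asynchronous stochastic-optimization schemes mentioned in the abstract relax exactly this averaging barrier, allowing workers to apply updates computed from stale parameters instead of waiting for every gradient each step.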
Keynote video : 2018 Swiss HPC Conference