by Jinse Kwon on 2019-06-12 11:18:17

Date : 2019-06-26 (Wed) 13:00

Location : EB5. 533

Presenter : Jinse Kwon

 

Title : PipeDream: Fast and Efficient Pipeline Parallel DNN Training

Author : Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil Devanur, Greg Ganger, Phil Gibbons

(Microsoft Research, Carnegie Mellon University, Stanford University)

 

Abstract : PipeDream is a Deep Neural Network (DNN) training system for GPUs that parallelizes computation by pipelining execution across multiple machines. Its pipeline-parallel computing model avoids the slowdowns faced by data-parallel training when large models and/or limited network bandwidth induce high communication-to-computation ratios. PipeDream reduces communication by up to 95% for large DNNs relative to data-parallel training, and allows perfect overlap of communication and computation. PipeDream keeps all available GPUs productive by systematically partitioning DNN layers among them to balance work and minimize communication, versions model parameters for backward-pass correctness, and schedules the forward and backward passes of different inputs in round-robin fashion to optimize "time to target accuracy". Experiments with five different DNNs on two different clusters show that PipeDream is up to 5x faster in time-to-accuracy compared to data-parallel training.
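
The scheduling and parameter-versioning ideas in the abstract can be illustrated with a short sketch. Below is a minimal, single-process simulation of PipeDream-style pipeline parallelism with weight stashing and a round-robin one-forward-one-backward (1F1B) schedule. It is not the authors' implementation; the names (PipelineStage, run_1f1b) and the scalar stand-in "weights" are illustrative assumptions.

```python
# Simplified single-process sketch of PipeDream-style pipeline training.
# Each stage would run on its own GPU/machine in the real system; here the
# schedule is simulated sequentially. All names are illustrative.
from collections import deque

class PipelineStage:
    """One partition of the model, keeping its own parameter versions."""
    def __init__(self, name, weight=1.0):
        self.name = name
        self.weight = weight      # current parameters (a scalar stand-in)
        self.stash = deque()      # weights stashed per in-flight minibatch

    def forward(self, x):
        # Weight stashing: record the parameter version used for this
        # minibatch so its backward pass sees the same version.
        self.stash.append(self.weight)
        return x * self.weight

    def backward(self, grad):
        w = self.stash.popleft()          # version that produced this output
        self.weight -= 0.01 * grad * w    # toy SGD update
        return grad * w                   # gradient for the upstream stage

# Two stages; PipeDream's partitioner would choose the split to balance
# work and minimize communication.
stages = [PipelineStage("stage0"), PipelineStage("stage1")]

def run_1f1b(inputs):
    """Round-robin 1F1B schedule: after a warm-up of one forward per
    stage, alternate one forward and one backward pass per step."""
    in_flight = deque()
    for x in inputs:
        act = x
        for s in stages:                  # forward through the pipeline
            act = s.forward(act)
        in_flight.append(act)
        if len(in_flight) > len(stages):  # steady state reached
            grad = in_flight.popleft()    # loss-gradient stand-in
            for s in reversed(stages):    # backward through the pipeline
                grad = s.backward(grad)
    while in_flight:                      # drain remaining minibatches
        grad = in_flight.popleft()
        for s in reversed(stages):
            grad = s.backward(grad)

run_1f1b([1.0, 2.0, 3.0, 4.0])
print({s.name: round(s.weight, 4) for s in stages})
```

The point the sketch captures is that each backward pass pops the exact parameter version its own forward pass stashed, which is how PipeDream keeps gradient computation consistent while several minibatches are in flight at once.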

 

Poster : SysML 2018.

Full Paper : arXiv

Article source: //eslab.cnu.ac.kr/en/Mobile/161-PipeDream-Fast-and-Efficient-Pipeline-Parallel-DNN-Training.html
