
by Juwon You on 2020-10-11 15:36:02

Date: 2020. 10. 12 (Mon) 14:00

Location: EB5. 507

Presenter: Juwon You

Title: A Coordinated Tiling and Batching Framework for Efficient GEMM on GPUs

Authors: Xiuhong Li, Yun Liang, Shengen Yan, Liancheng Jia, Yinghan Li

(Center for Energy-efficient Computing and Applications, School of EECS, Peking University; SenseTime Incorporation)

Abstract: General matrix multiplication (GEMM) plays a paramount role in a broad range of domains such as deep learning, scientific computing, and image processing. The primary optimization method is to partition the matrices into many tiles and exploit the parallelism within and between tiles. The tiling hierarchy closely mirrors the thread hierarchy on GPUs. In practice, GPUs can fully unleash their computing power only when the matrix size is large and there are a sufficient number of tiles with enough workload per tile. However, in many real-world applications, especially in the deep learning domain, the matrix sizes are small. To this end, prior work proposes batched GEMM, which processes a group of small independent GEMMs together with a single CUDA kernel designed for all of them.
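The batched-GEMM idea above can be sketched in plain Python; this is only an illustrative model (the function names are made up here), not the paper's CUDA implementation, which fuses the whole batch into one kernel launch:

```python
def gemm(A, B):
    # Plain triple-loop GEMM: C = A x B for small dense matrices
    # given as nested lists (A is n x k, B is k x m).
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def batched_gemm(As, Bs):
    # Batched GEMM: one call handles a group of small independent GEMMs,
    # analogous to launching a single CUDA kernel over the whole batch
    # instead of one kernel per matrix pair.
    return [gemm(A, B) for A, B in zip(As, Bs)]
```

On a GPU the payoff comes from amortizing launch overhead and exposing enough parallel work; the loop here only models the grouping, not the performance.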
However, the current support for batched GEMM is still rudimentary. Tiling and batching are tightly correlated: a large tile size increases data reuse but decreases thread-level parallelism, which in turn shrinks the optimization space for batching; a small tile size increases thread-level parallelism and thus enlarges the optimization space for batching, but at the cost of data reuse. In this paper, we propose a coordinated tiling and batching framework for accelerating GEMMs on GPUs. It is a two-phase framework consisting of a tiling engine and a batching engine that together perform efficient batched GEMM on GPUs: the tiling engine partitions the GEMMs into independent tiles, and the batching engine assigns the tiles to thread blocks. Moreover, we propose a general programming interface for the coordinated tiling and batching solution. Finally, experimental results on synthetic batched GEMM cases show that our framework achieves about 1.40X speedup on average over the state-of-the-art technique. We also use GoogleNet as a real-world case study, where our framework achieves a 1.23X speedup.
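The two-phase structure (tiling engine, then batching engine) can be modeled as follows. This is a minimal sketch under assumed simplifications: the shapes, the tile representation, and the round-robin assignment are illustrative choices of this sketch, not the paper's actual tiling or scheduling policy:

```python
def tiling_engine(gemm_shapes, tile_m, tile_n):
    # Phase 1: partition each GEMM's output matrix (M x N) into
    # independent tiles; each tile is (gemm id, row offset, col offset).
    tiles = []
    for g, (M, N, K) in enumerate(gemm_shapes):
        for i in range(0, M, tile_m):
            for j in range(0, N, tile_n):
                tiles.append((g, i, j))
    return tiles

def batching_engine(tiles, num_thread_blocks):
    # Phase 2: assign tiles to thread blocks. Round-robin is used here
    # only as a stand-in policy so that every block receives work even
    # when individual GEMMs are small.
    blocks = [[] for _ in range(num_thread_blocks)]
    for idx, tile in enumerate(tiles):
        blocks[idx % num_thread_blocks].append(tile)
    return blocks
```

The tile size chosen in phase 1 directly controls how many tiles phase 2 has to distribute, which is the tiling/batching trade-off the abstract describes: halving the tile dimensions quadruples the tile count and the batching flexibility, while reducing per-tile data reuse.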

Paper: PPoPP 2019

Article source: //