DGEMM using tensor cores, and its accurate and reproducible versions

Daichi Mukunoki, Katsuhisa Ozaki, Takeshi Ogita, Toshiyuki Imamura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes a method for implementing dense matrix multiplication in FP64 (DGEMM) and FP32 (SGEMM) using Tensor Cores on NVIDIA graphics processing units (GPUs). Tensor Cores are special processing units that perform 4×4 matrix multiplications on FP16 inputs with FP32 accumulation and return the result in FP32. The proposed method adopts the Ozaki scheme, an accurate matrix multiplication algorithm based on an error-free transformation of matrix multiplication. The method has three prominent advantages: first, it can be built upon the cublasGemmEx routine, which uses Tensor Core operations; second, it can achieve higher accuracy than standard DGEMM, including the correctly rounded result; third, it ensures bit-level reproducibility even across different numbers of cores and threads. The achievable performance depends on the absolute-value range of the elements of the input matrices. For example, when the matrices were initialized with random numbers spanning a dynamic range of 1E+9, our DGEMM-equivalent implementation achieved up to approximately 980 GFlops of FP64-equivalent performance on the Titan RTX GPU (which offers 130 TFlops on Tensor Cores), whereas cublasDgemm achieves only 539 GFlops on the FP64 floating-point units. Our results reveal the possibility of utilizing hardware with limited FP32/FP64 resources and fast low-precision processing units (such as AI-oriented processors) for general-purpose workloads.
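
The building block named above, cublasGemmEx with FP16 inputs and FP32 accumulation, is standard cuBLAS functionality. The following is a minimal, self-contained sketch of such a call (not the authors' implementation; it assumes CUDA 11 or later, and the matrix size and fill values are purely illustrative):

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <cublas_v2.h>

int main() {
    const int n = 1024;                          // illustrative square matrices, column-major
    std::vector<__half> hA(n * n), hB(n * n);
    std::vector<float>  hC(n * n, 0.0f);
    for (int i = 0; i < n * n; ++i) {            // small values exactly representable in FP16
        hA[i] = __float2half(0.001f * (i % 100));
        hB[i] = __float2half(0.002f * (i % 50));
    }

    __half *dA, *dB;
    float  *dC;
    cudaMalloc(&dA, n * n * sizeof(__half));
    cudaMalloc(&dB, n * n * sizeof(__half));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(__half), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), n * n * sizeof(__half), cudaMemcpyHostToDevice);
    cudaMemcpy(dC, hC.data(), n * n * sizeof(float),  cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // C (FP32) = alpha * A (FP16) * B (FP16) + beta * C (FP32);
    // products of FP16 inputs are accumulated in FP32, so cuBLAS can
    // dispatch this call to Tensor Cores.
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                 n, n, n,
                 &alpha,
                 dA, CUDA_R_16F, n,
                 dB, CUDA_R_16F, n,
                 &beta,
                 dC, CUDA_R_32F, n,
                 CUBLAS_COMPUTE_32F,
                 CUBLAS_GEMM_DEFAULT);

    cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("C[0] = %f\n", hC[0]);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

Compiled with, for example, nvcc -lcublas. Roughly speaking, the Ozaki scheme splits each FP64 input matrix into a sum of slices that fit in FP16 without rounding error, evaluates the pairwise slice products with calls of this kind, and accumulates the partial results in higher precision, which is how the method reaches (and can exceed) DGEMM accuracy while remaining bit-level reproducible.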

Original language: English
Title of host publication: High Performance Computing - 35th International Conference, ISC High Performance 2020, Proceedings
Editors: Ponnuswamy Sadayappan, Bradford L. Chamberlain, Guido Juckeland, Hatem Ltaief
Publisher: Springer
Pages: 230-248
Number of pages: 19
ISBN (Print): 9783030507428
DOI: https://doi.org/10.1007/978-3-030-50743-5_12
Publication status: Published - 2020
Event: 35th International Conference on High Performance Computing, ISC High Performance 2020 - Frankfurt, Germany
Duration: 2020 Jun 22 - 2020 Jun 25

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12151 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 35th International Conference on High Performance Computing, ISC High Performance 2020
Country: Germany
City: Frankfurt
Period: 2020/6/22 - 2020/6/25

Keywords

  • Accuracy
  • FP16
  • GEMM
  • Half-precision
  • Linear algebra
  • Low-precision
  • Matrix multiplication
  • Reproducibility
  • Tensor cores

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Mukunoki, D., Ozaki, K., Ogita, T., & Imamura, T. (2020). DGEMM using tensor cores, and its accurate and reproducible versions. In P. Sadayappan, B. L. Chamberlain, G. Juckeland, & H. Ltaief (Eds.), High Performance Computing - 35th International Conference, ISC High Performance 2020, Proceedings (pp. 230-248). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12151 LNCS). Springer. https://doi.org/10.1007/978-3-030-50743-5_12