
Algorithm Design for Tensor Units

EasyChair Preprint no. 6442

14 pages · Date: August 27, 2021


To respond to the intense computational load of deep neural networks, a plethora of domain-specific architectures have been introduced, such as Google Tensor Processing Units and NVIDIA Tensor Cores. A common feature of these architectures is a hardware circuit for efficiently computing a dense matrix multiplication of a given small size. In order to broaden the class of algorithms that exploit these systems, we propose a computational model, named the TCU model, that captures the ability to natively multiply small matrices. We then use the TCU model for designing fast algorithms for several problems, including matrix operations (dense and sparse multiplication, Gaussian Elimination), graph algorithms (transitive closure, all pairs shortest distances), Discrete Fourier Transform, stencil computations, integer multiplication, and polynomial evaluation. We finally highlight a relation between the TCU model and the external memory model.
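To illustrate the core idea of the abstract — that a TCU-style algorithm decomposes a large matrix product into many fixed-size tile products handled by a hardware primitive — here is a minimal sketch in Python/NumPy. The tile size `S` and the helper `tcu_multiply` are hypothetical stand-ins for the hardware circuit, not part of the paper's model definition:

```python
import numpy as np

S = 4  # hypothetical tile size of the hardware multiply unit

def tcu_multiply(A_tile, B_tile, C_tile):
    """Stand-in for the hardware primitive: multiply-accumulate
    of two S x S tiles into an S x S accumulator tile."""
    return C_tile + A_tile @ B_tile

def blocked_matmul(A, B):
    """Multiply two n x n matrices using only S x S tile products,
    mimicking the access pattern of a TCU-style algorithm."""
    n = A.shape[0]
    assert n % S == 0, "for simplicity, n must be a multiple of the tile size"
    C = np.zeros((n, n))
    for i in range(0, n, S):
        for j in range(0, n, S):
            for k in range(0, n, S):
                C[i:i+S, j:j+S] = tcu_multiply(
                    A[i:i+S, k:k+S], B[k:k+S, j:j+S], C[i:i+S, j:j+S])
    return C
```

The triple loop over tiles, with the innermost step delegated to the primitive, is the pattern the TCU model abstracts; the paper's contribution is analyzing problems (graph algorithms, DFT, stencils, etc.) in terms of how many such tile multiplications they require.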

Keyphrases: computational model, efficient algorithms, external memory, graph problems, hardware accelerators, linear algebra, Tensor Core

BibTeX entry
BibTeX does not have a dedicated entry type for preprints; the following workaround produces the correct reference:
@booklet{EasyChair:6442,
  author = {Rezaul Chowdhury and Francesco Silvestri and Flavio Vella},
  title = {Algorithm Design for Tensor Units},
  howpublished = {EasyChair Preprint no. 6442},
  year = {EasyChair, 2021}}