Performance of linear solvers in tensor-train format on current multicore architectures

Melven Röhrig-Zöllner*, Manuel Becklas, Jonas Thies, Achim Basermann

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Tensor networks are a class of algorithms aimed at reducing the computational complexity of high-dimensional problems. They are used in a growing number of applications, from quantum simulations to machine learning. Exploiting data parallelism in these algorithms is key to using modern hardware efficiently. However, there are several ways to map the required tensor operations onto linear algebra routines (“building blocks”), and the choice of mapping affects the numerical behavior, so computational and numerical aspects must be considered hand in hand. In this paper we discuss the performance of solvers for low-rank linear systems in the tensor-train format (also known as matrix-product states). We consider three popular algorithms: TT-GMRES, MALS, and AMEn. We illustrate their computational complexity with the example of a simple high-dimensional PDE discretized on, e.g., 50^10 grid points, which shows that the projection onto smaller sub-problems in MALS and AMEn reduces the number of floating-point operations by orders of magnitude. We suggest optimizations for orthogonalization steps, singular value decompositions, and tensor contractions. In addition, we propose a generic preconditioner based on a TT-rank-1 approximation of the linear operator. Overall, we obtain roughly a 5× speedup over the reference algorithm for the fastest method (AMEn) on a current multicore CPU.
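Two quantitative points from the abstract are easy to make concrete: the compression the tensor-train format achieves on a 50^10 grid, and why a TT-rank-1 (Kronecker-product) approximation of the operator yields a cheap preconditioner, since the inverse of a Kronecker product factorizes into the inverses of its one-dimensional factors. The following minimal Python sketch illustrates both; it is not the paper's implementation, and the TT rank r = 20 as well as the small test matrices are illustrative assumptions.

    import numpy as np

    # Storage: a full tensor on the paper's example grid vs. the TT format.
    d, n = 10, 50   # 10 dimensions, 50 grid points per dimension (50^10 total)
    r = 20          # assumed TT rank, for illustration only
    print(f"full tensor: {float(n)**d:.1e} entries")   # ~9.8e+16
    print(f"TT format  : {d * n * r**2:.1e} entries")  # O(d*n*r^2), ~2.0e+05

    def apply_kron_inverse(mats, x):
        """Apply (A_1 (x) ... (x) A_d)^{-1} to a d-way tensor x.

        For a rank-1 TT operator the inverse factorizes into the Kronecker
        product of the 1D inverses, so applying the preconditioner costs
        only d small solves instead of one huge one.
        """
        y = x
        for k, A in enumerate(mats):
            y = np.moveaxis(y, k, 0)       # bring mode k to the front
            shape = y.shape
            y = np.linalg.solve(A, y.reshape(shape[0], -1)).reshape(shape)
            y = np.moveaxis(y, 0, k)       # restore the original axis order
        return y

    # Tiny check against the dense Kronecker product (feasible only for small sizes).
    rng = np.random.default_rng(42)
    dims = [4, 3, 5]
    mats = [np.eye(m) * m + 0.1 * rng.standard_normal((m, m)) for m in dims]
    x = rng.standard_normal(dims)
    y = apply_kron_inverse(mats, x)
    A_dense = mats[0]
    for A in mats[1:]:
        A_dense = np.kron(A_dense, A)
    assert np.allclose(A_dense @ y.reshape(-1), x.reshape(-1))

The mode-wise solve loop is what makes a rank-1 preconditioner attractive in this setting: its cost grows linearly with the number of dimensions, while any dense treatment of the full operator is infeasible at 50^10 unknowns.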

Original language: English
Pages (from-to): 443-461
Number of pages: 19
Journal: International Journal of High Performance Computing Applications
Volume: 39
Issue number: 3
Publication status: Published - 2025

Keywords

  • linear solvers
  • low-rank tensor algorithms
  • matrix-product states
  • node-level performance
  • tensor-train format
