Fast and Accurate Tensor Completion with Total Variation Regularized Tensor Trains

Ching Yun Ko*, Kim Batselier, Luca Daniel, Wenjian Yu, Ngai Wong

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

34 Citations (Scopus)
148 Downloads (Pure)

Abstract

We propose a new tensor completion method based on tensor trains. The to-be-completed tensor is modeled as a low-rank tensor train, and the known tensor entries together with their coordinates are used to update the tensor train. A novel tensor train initialization procedure is proposed specifically for image and video completion, and is demonstrated to ensure fast convergence of the completion algorithm. The tensor train framework is also shown to easily accommodate Total Variation and Tikhonov regularization, owing to their low-rank tensor train representations. Image and video inpainting experiments verify the superiority of the proposed scheme in terms of both speed and scalability, with an observed speedup of up to 155× over state-of-the-art tensor completion methods at similar accuracy. Moreover, we demonstrate that the proposed scheme is especially advantageous over existing algorithms when only tiny portions (say, 1%) of the to-be-completed images/videos are known.
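To make the idea concrete, the core of such a completion scheme can be sketched as an alternating least-squares (ALS) update of the tensor train cores using only the observed entries: each slice of each core enters the tensor multilinearly, so fixing all other cores turns its update into a small linear least-squares problem. The Python/NumPy sketch below is illustrative only, not the authors' implementation: the function names (tt_completion_als, tt_eval), the random initialization, and the dense coordinate-list input are assumptions, and the paper's proposed initialization and Total Variation/Tikhonov regularization terms are omitted.

    import numpy as np

    def tt_eval(cores, idx):
        # Evaluate the TT-represented tensor at multi-index idx.
        v = cores[0][idx[0]]                 # shape (1, r1)
        for d in range(1, len(cores)):
            v = v @ cores[d][idx[d]]         # chain core slices left to right
        return v.item()

    def tt_completion_als(shape, ranks, coords, vals, sweeps=10, seed=0):
        # shape : tensor dimensions, e.g. (n1, n2, n3)
        # ranks : interior TT ranks, e.g. (r1, r2) for a 3-way tensor
        # coords: (m, d) integer array of observed multi-indices
        # vals  : (m,) observed entries
        rng = np.random.default_rng(seed)
        d = len(shape)
        rs = (1,) + tuple(ranks) + (1,)      # TT ranks padded with boundary 1s
        cores = [rng.standard_normal((shape[k], rs[k], rs[k + 1])) * 0.1
                 for k in range(d)]
        for _ in range(sweeps):
            for k in range(d):               # sweep over the cores
                for i in range(shape[k]):    # update one slice of core k
                    mask = coords[:, k] == i
                    if not mask.any():
                        continue             # no observation touches this slice
                    A, b = [], []
                    for idx, y in zip(coords[mask], vals[mask]):
                        left = np.ones((1, 1))   # interface left of core k
                        for p in range(k):
                            left = left @ cores[p][idx[p]]
                        right = np.ones((1, 1))  # interface right of core k
                        for p in range(d - 1, k, -1):
                            right = cores[p][idx[p]] @ right
                        # entry = left @ slice @ right is linear in the slice;
                        # its coefficient vector is the Kronecker product
                        A.append(np.kron(left.ravel(), right.ravel()))
                        b.append(y)
                    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b),
                                              rcond=None)
                    cores[k][i] = sol.reshape(rs[k], rs[k + 1])
        return cores

In the paper's setting, the Total Variation and Tikhonov terms would enter these per-slice least-squares problems as additional low-rank tensor train regularizers, and a tailored initialization would replace the random one; both are left out of this sketch for brevity.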

Original language: English
Pages (from-to): 6918-6931
Journal: IEEE Transactions on Image Processing
Volume: 29
Publication status: Published - 2020

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care

Unless otherwise indicated in the copyright section, the publisher is the copyright holder of this work, and the author relies on Dutch legislation (the Taverne amendment) to make this work publicly available.

Keywords

  • image restoration
  • tensor completion
  • tensor-train decomposition
  • total variation

