
Article: Computing low-rank approximations of large-scale matrices with the Tensor Network randomized SVD

Title: Computing low-rank approximations of large-scale matrices with the Tensor Network randomized SVD
Authors: Batselier, K; Yu, W; Daniel, L; Wong, N
Keywords: Curse of dimensionality; Low-rank tensor approximation; Matrix factorization; Matrix product operator; Randomized algorithm
Issue Date: 2018
Publisher: Society for Industrial and Applied Mathematics. The journal's web site is located at http://www.siam.org/journals/simax.php
Citation: SIAM Journal on Matrix Analysis and Applications, 2018, v. 39, n. 3, p. 1221-1244
Abstract: We propose a new algorithm for the computation of a singular value decomposition (SVD) low-rank approximation of a matrix in the matrix product operator (MPO) format, also called the tensor train matrix format. Our tensor network randomized SVD (TNrSVD) algorithm is an MPO implementation of the randomized SVD algorithm that is able to compute dominant singular values and their corresponding singular vectors. In contrast to the state-of-the-art tensor-based alternating least squares SVD (ALS-SVD) and modified alternating least squares SVD (MALS-SVD) matrix approximation methods, TNrSVD can be up to 13 times faster while achieving better accuracy. In addition, our TNrSVD algorithm also produces accurate approximations in particular cases where both ALS-SVD and MALS-SVD fail to converge. We also propose a new algorithm for the fast conversion of a sparse matrix into its corresponding MPO form, which is up to 509 times faster than the standard tensor train SVD method while achieving machine precision accuracy. The efficiency and accuracy of both algorithms are demonstrated in numerical experiments.
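The classical randomized SVD that TNrSVD adapts to the MPO format can be sketched in plain NumPy. This is the dense-matrix version of the algorithm (Gaussian sketch, optional power iterations, QR, then a small exact SVD), not the paper's tensor-network implementation; the function name and parameter defaults below are illustrative, not from the paper.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=2, seed=0):
    """Rank-`rank` randomized SVD of a dense matrix A (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Gaussian random sketch capturing the dominant column space of A
    Omega = rng.standard_normal((n, rank + n_oversample))
    Y = A @ Omega
    # Power iterations sharpen the sketch when singular values decay slowly
    for _ in range(n_iter):
        Y = A @ (A.T @ Y)
    # Orthonormal basis for the sampled range
    Q, _ = np.linalg.qr(Y)
    # Project A onto the small subspace and take its exact SVD
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]
```

The key cost is the sketch product `A @ Omega`; TNrSVD's contribution is carrying out this product, and the subsequent orthogonalization, entirely in MPO form so that `A` never has to be formed as a dense matrix.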
Persistent Identifier: http://hdl.handle.net/10722/261769
ISSN: 0895-4798
2017 Impact Factor: 1.682
2015 SCImago Journal Rankings: 2.052
ISI Accession Number ID: WOS:000453716400008

DC Field | Value | Language
dc.contributor.author | Batselier, K | -
dc.contributor.author | Yu, W | -
dc.contributor.author | Daniel, L | -
dc.contributor.author | Wong, N | -
dc.date.accessioned | 2018-09-28T04:47:34Z | -
dc.date.available | 2018-09-28T04:47:34Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | SIAM Journal on Matrix Analysis and Applications, 2018, v. 39 n. 3, p. 1221-1244 | -
dc.identifier.issn | 0895-4798 | -
dc.identifier.uri | http://hdl.handle.net/10722/261769 | -
dc.description.abstract | We propose a new algorithm for the computation of a singular value decomposition (SVD) low-rank approximation of a matrix in the matrix product operator (MPO) format, also called the tensor train matrix format. Our tensor network randomized SVD (TNrSVD) algorithm is an MPO implementation of the randomized SVD algorithm that is able to compute dominant singular values and their corresponding singular vectors. In contrast to the state-of-the-art tensor-based alternating least squares SVD (ALS-SVD) and modified alternating least squares SVD (MALS-SVD) matrix approximation methods, TNrSVD can be up to 13 times faster while achieving better accuracy. In addition, our TNrSVD algorithm also produces accurate approximations in particular cases where both ALS-SVD and MALS-SVD fail to converge. We also propose a new algorithm for the fast conversion of a sparse matrix into its corresponding MPO form, which is up to 509 times faster than the standard tensor train SVD method while achieving machine precision accuracy. The efficiency and accuracy of both algorithms are demonstrated in numerical experiments. | -
dc.language | eng | -
dc.publisher | Society for Industrial and Applied Mathematics. The Journal's web site is located at http://www.siam.org/journals/simax.php | -
dc.relation.ispartof | SIAM Journal on Matrix Analysis and Applications | -
dc.rights | © 2018 Society for Industrial and Applied Mathematics. First Published in SIAM Journal on Matrix Analysis and Applications in volume 39, issue 3, published by the Society for Industrial and Applied Mathematics (SIAM). | -
dc.subject | Curse of dimensionality | -
dc.subject | Low-rank tensor approximation | -
dc.subject | Matrix factorization | -
dc.subject | Matrix product operator | -
dc.subject | Randomized algorithm | -
dc.title | Computing low-rank approximations of large-scale matrices with the Tensor Network randomized SVD | -
dc.type | Article | -
dc.identifier.email | Batselier, K: kbatseli@HKUCC-COM.hku.hk | -
dc.identifier.email | Wong, N: nwong@eee.hku.hk | -
dc.identifier.authority | Wong, N=rp00190 | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1137/17M1140480 | -
dc.identifier.scopus | eid_2-s2.0-85053602057 | -
dc.identifier.hkuros | 292467 | -
dc.identifier.volume | 39 | -
dc.identifier.issue | 3 | -
dc.identifier.spage | 1221 | -
dc.identifier.epage | 1244 | -
dc.identifier.isi | WOS:000453716400008 | -
dc.publisher.place | United States | -
