
Postgraduate thesis: Probabilistic tensor subspace learning: foundations and innovations

Title: Probabilistic tensor subspace learning: foundations and innovations
Author: Cheng, Lei [程磊]
Advisor: Wu, YC
Issue Date: 2017
Publisher: The University of Hong Kong (Pokfulam, Hong Kong)
Citation: Cheng, L. [程磊]. (2017). Probabilistic tensor subspace learning: foundations and innovations. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
Abstract: This world is full of data, and these data often appear in high-dimensional structures in which each object is described by multiple attributes. Making sense of such multi-dimensional data requires advanced computational tools that uncover the hidden patterns underlying it. This is where tensor subspace learning comes into play. Tensor subspace learning is an emerging topic that studies the extraction of low-dimensional yet fundamental information from multi-dimensional data. Owing to algorithmic advances over the past decade and its demonstrated superiority over traditional matrix-based counterparts, tensor subspace learning finds applications in diverse fields of engineering, including (but not limited to) wireless communications and image and video signal processing. However, most current research overlooks an important problem: tensor rank determination. The tensor rank is the dimension of the tensor's subspace. In some cases it can be obtained from problem-specific domain knowledge, but most of the time it is unknown and must be estimated. Since determining the tensor rank from tensor data has been shown to be non-deterministic polynomial-time hard (NP-hard), a dominant approach with existing tensor subspace learning algorithms is to run multiple algorithms in parallel, each assuming a different rank, and then choose the model with the smallest rank that fits the data well. Although this trial-and-error approach is widely accepted in the tensor research community, it inevitably incurs a heavy computational burden. To tackle this problem, this thesis investigates tensor subspace learning from a probabilistic perspective. The new perspective enjoys the advantage that tensor rank determination can be fully integrated into the subspace learning problem, with Bayes' rule providing a natural recipe for automatic rank determination.
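The trial-and-error rank selection described above can be sketched as follows. This is a hypothetical illustration, not the thesis's method: for brevity the low-rank model is a truncated SVD of a matrix (the matrix counterpart of a tensor subspace model), and the data, candidate ranks, and tolerance are made-up assumptions; a real tensor pipeline would fit a CP or Tucker decomposition at each candidate rank instead.

```python
import numpy as np

# Build a data matrix whose true rank (3) is known but hidden from the
# selection loop below.
rng = np.random.default_rng(0)
true_rank = 3
X = rng.standard_normal((40, true_rank)) @ rng.standard_normal((true_rank, 30))

def fit_error(X, r):
    """Relative reconstruction error of the best rank-r approximation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]
    return np.linalg.norm(X - Xr) / np.linalg.norm(X)

# Fit a model at each increasing candidate rank and keep the smallest one
# that reconstructs the data to within tolerance -- one fit per candidate,
# which is exactly the computational burden the abstract criticizes.
chosen = next(r for r in range(1, 11) if fit_error(X, r) < 1e-8)
print(chosen)  # recovers the planted rank, 3
```

Each candidate rank costs a full model fit, which is why a method that infers the rank during a single fit is attractive.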
In addition, a number of other innovative features have been incorporated into the proposed probabilistic tensor subspace learning algorithm, including the ability to handle complex-valued data, to mitigate outliers in measurements, and to impose orthogonality constraints on the factor matrices for improved performance. Numerical studies in a variety of applications have demonstrated the effectiveness of the proposed algorithm in terms of accuracy and robustness. While the probabilistic tensor subspace learning developed in the first part of this thesis is promising, it is designed for batch-mode operation, meaning that the algorithm requires the whole data set to be gathered before it starts; this makes it ill-suited to the modern big-data era. To devise a scalable probabilistic tensor subspace learning algorithm, the thesis first proposes a general class of probabilistic models termed the multilayer partial conjugate exponential family (MPCEF) model. It is shown that, under the MPCEF model, the updates of the inference algorithm are all in closed form with an exponential-family parameterization. This not only greatly simplifies the algorithm derivations for any application falling into this general family of probabilistic models, but also paves the way to bridging probabilistic inference and stochastic optimization. By introducing ideas from stochastic optimization, a novel scalable tensor subspace learning algorithm is developed in the second part of the thesis.
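The contrast between batch-mode and stochastic operation, and the role of an exponential-family parameterization, can be seen in a toy sketch (this is not the thesis's MPCEF algorithm; the Gaussian model and mini-batch sizes are assumptions for illustration). When the quantities being updated are exponential-family sufficient statistics, a stochastic update is just a convex combination of the current estimate and the statistics of a fresh mini-batch, so the whole data set never has to be in memory at once:

```python
import numpy as np

# Simulated data stream: Gaussian with mean 2.0 and variance 0.25.
rng = np.random.default_rng(0)
data = 2.0 + 0.5 * rng.standard_normal(100_000)

# Sufficient statistics of a Gaussian: E[x] and E[x^2]. A batch method
# would need all 100,000 points before computing them; here we update
# from mini-batches of 100 as they "arrive".
stats = np.zeros(2)
for t in range(1000):
    batch = data[t * 100:(t + 1) * 100]
    batch_stats = np.array([batch.mean(), (batch ** 2).mean()])
    rho = 1.0 / (t + 1)                            # Robbins-Monro step size
    stats = (1 - rho) * stats + rho * batch_stats  # stochastic update

mean = stats[0]
var = stats[1] - stats[0] ** 2
print(mean, var)  # close to the true 2.0 and 0.25
```

Because the update is a closed-form combination of sufficient statistics rather than a generic gradient step, the same recipe scales to streams of arbitrary length, which is the scalability property the second part of the thesis pursues for tensor subspace models.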
Degree: Doctor of Philosophy
Subjects: Computer algorithms; Machine learning
Dept/Program: Electrical and Electronic Engineering
Persistent Identifier: http://hdl.handle.net/10722/265341


DC Field: Value
dc.contributor.advisor: Wu, YC
dc.contributor.author: Cheng, Lei
dc.contributor.author: 程磊
dc.date.accessioned: 2018-11-29T06:22:20Z
dc.date.available: 2018-11-29T06:22:20Z
dc.date.issued: 2017
dc.identifier.citation: Cheng, L. [程磊]. (2017). Probabilistic tensor subspace learning: foundations and innovations. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR.
dc.identifier.uri: http://hdl.handle.net/10722/265341
dc.language: eng
dc.publisher: The University of Hong Kong (Pokfulam, Hong Kong)
dc.relation.ispartof: HKU Theses Online (HKUTO)
dc.rights: The author retains all proprietary rights (such as patent rights) and the right to use in future works.
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject.lcsh: Computer algorithms
dc.subject.lcsh: Machine learning
dc.title: Probabilistic tensor subspace learning: foundations and innovations
dc.type: PG_Thesis
dc.description.thesisname: Doctor of Philosophy
dc.description.thesislevel: Doctoral
dc.description.thesisdiscipline: Electrical and Electronic Engineering
dc.description.nature: published_or_final_version
dc.identifier.doi: 10.5353/th_991044014366703414
dc.date.hkucongregation: 2018
dc.identifier.mmsid: 991044014366703414
