Simple item record

dc.contributor.advisor: Wang, Li
dc.contributor.advisor: Li, Ren-Cang
dc.creator: Bian, Ce
dc.date.accessioned: 2023-09-27T16:30:38Z
dc.date.available: 2023-09-27T16:30:38Z
dc.date.created: 2023-08
dc.date.issued: 2023-08-15
dc.date.submitted: August 2023
dc.identifier.uri: http://hdl.handle.net/10106/31740
dc.description.abstract: Over the past few years, the dimensionality of data in many fields of science and engineering has grown rapidly, driven by faster data collection and more advanced storage methods. Before classification or regression can be performed on such high-dimensional data, dimensionality reduction is typically needed to eliminate noisy features. Several numerical methods are available for this purpose, including Canonical Correlation Analysis (CCA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). While these methods offer valuable approaches to dimensionality reduction, each has limitations. CCA primarily finds correlations between two sets of variables, which may not fully capture intricate relationships within multidimensional data. PCA preserves variance well but can fail to emphasize class separability when applied to classification tasks. To address these limitations, this thesis introduces a supervised dimensionality reduction algorithm that performs dimensionality reduction and classification jointly. Unlike conventional two-stage methods, the algorithm simultaneously learns the projection matrix for dimension reduction and the classifier hyperplane for data classification, yielding a model that is both accurate and efficient. The proposed model is also versatile: it handles dimensionality reduction and classification not only for single-view data but also for multi-view data. Numerical simulations demonstrate its effectiveness and computational efficiency against state-of-the-art dimensionality reduction and classification methods. A notable feature of the approach is its capacity to produce two classifiers in tandem, which widens its applicability across classification experiments on a variety of data types and makes it a versatile choice for complex classification problems.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.subject: Dimensionality reduction
dc.subject: Multi-view learning
dc.subject: Subspace learning
dc.subject: Classification
dc.title: A Novel Regularized Orthonormalized Partial Least Squares Model for Multi-view Learning
dc.type: Thesis
dc.date.updated: 2023-09-27T16:30:38Z
thesis.degree.department: Mathematics
thesis.degree.grantor: The University of Texas at Arlington
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy in Mathematics
dc.type.material: text


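For context, below is a minimal sketch in Python of the conventional two-stage pipeline that the abstract contrasts with: unsupervised PCA for dimensionality reduction, followed by a separately trained linear classifier. This is not the regularized orthonormalized PLS model proposed in the thesis, which learns the projection and the classifier jointly; the scikit-learn components, dataset, and component count here are illustrative assumptions only.

    # Two-stage baseline (assumed setup, not the thesis's joint model):
    # Stage 1 reduces dimensionality without using the labels;
    # Stage 2 fits a linear classifier on the reduced representation.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    X, y = load_digits(return_X_y=True)  # 64-dimensional digit features
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    baseline = make_pipeline(
        PCA(n_components=16),               # unsupervised projection, labels unused
        LogisticRegression(max_iter=1000),  # classifier trained afterward, separately
    )
    baseline.fit(X_tr, y_tr)
    print("two-stage PCA + logistic accuracy:", baseline.score(X_te, y_te))

Because stage 1 never sees the labels, the learned subspace need not favor class separability; the thesis's supervised joint formulation is motivated by exactly this gap.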