Show simple item record
dc.contributor.advisor | Wang, Li | |
dc.contributor.advisor | Li, Ren-Cang | |
dc.creator | Bian, Ce | |
dc.date.accessioned | 2023-09-27T16:30:38Z | |
dc.date.available | 2023-09-27T16:30:38Z | |
dc.date.created | 2023-08 | |
dc.date.issued | 2023-08-15 | |
dc.date.submitted | August 2023 | |
dc.identifier.uri | http://hdl.handle.net/10106/31740 | |
dc.description.abstract | Over the past few years, the dimensionality of data collected in various fields of science and engineering has grown rapidly, driven by faster data collection and more advanced storage methods. To handle such high-dimensional data, dimensionality reduction is essential before performing classification or regression tasks, as it eliminates noisy features. Several numerical methods are available for reducing data dimensionality, such as Canonical Correlation Analysis (CCA), Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA). While these methods offer valuable approaches to dimensionality reduction, each has limitations. CCA, for instance, focuses on finding correlations between two sets of variables, which may not fully capture intricate relationships within multidimensional data. PCA, while excellent at preserving variance, can struggle to emphasize class separability in classification tasks.
Acknowledging these limitations, this thesis introduces a supervised dimensionality reduction algorithm that addresses dimensionality reduction and classification concurrently. Unlike conventional methods, the algorithm jointly learns the projection matrix for dimension reduction and the classifier hyperplane for data classification. The result is a model that excels in both accuracy and efficiency, enabled by its simultaneous learning of the low-dimensional representation and the classification model.
What distinguishes the proposed model is its versatility: it accommodates not only the dimensionality reduction and classification of single-view data but also extends naturally to multi-view data. Numerical simulations demonstrate the effectiveness and computational efficiency of the proposed model in comparison with state-of-the-art methods for dimensionality reduction and classification.
A noteworthy feature of this approach is its capacity to generate two classifiers in tandem, which widens its applicability across classification experiments on a variety of data types. This dual-classifier capability increases the method's utility and makes it a versatile choice for tackling complex classification challenges. | |
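As context for the baseline methods the abstract contrasts, the following is a minimal NumPy sketch of PCA, the unsupervised reduction whose limitation (preserving variance without regard to class labels) motivates the supervised approach of the thesis. It is an illustrative example only, not the thesis's algorithm; the function and variable names are invented for this sketch.

```python
import numpy as np

def pca_project(X, k):
    """Project an (n, d) data matrix X onto its top-k principal components.

    PCA is unsupervised: it picks the k directions of maximum variance,
    ignoring any class labels -- which is why it can miss class separability.
    """
    Xc = X - X.mean(axis=0)                        # center each feature
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # (n, k) low-dimensional representation

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                     # 100 samples, 10 features
Z = pca_project(X, 2)                              # reduce to 2 dimensions
print(Z.shape)                                     # (100, 2)
```

A supervised method such as the one proposed in the thesis would, by contrast, use the labels while learning the projection, so that the low-dimensional representation is chosen for class separability rather than variance alone.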
dc.format.mimetype | application/pdf | |
dc.language.iso | en_US | |
dc.subject | Dimensionality reduction | |
dc.subject | Multi-view learning | |
dc.subject | Subspace learning | |
dc.subject | Classification | |
dc.title | A Novel Regularized Orthonormalized Partial Least Squares Model for Multi-view Learning | |
dc.type | Thesis | |
dc.date.updated | 2023-09-27T16:30:38Z | |
thesis.degree.department | Mathematics | |
thesis.degree.grantor | The University of Texas at Arlington | |
thesis.degree.level | Doctoral | |
thesis.degree.name | Doctor of Philosophy in Mathematics | |
dc.type.material | text | |
Files in this item
- Name: BIAN-DISSERTATION-2023.pdf
- Size: 9.785Mb
- Format: PDF
This item appears in the following Collection(s)