
dc.contributor.advisor  Kamangar, Farhad
dc.contributor.advisor  Madani, Ramtin
dc.creator  Kheirandishfard, Mohsen
dc.date.accessioned  2020-01-10T20:42:39Z
dc.date.available  2020-01-10T20:42:39Z
dc.date.created  2019-12
dc.date.issued  2019-12-05
dc.date.submitted  December 2019
dc.identifier.uri  http://hdl.handle.net/10106/28845
dc.description.abstract  Representation learning is a fundamental task in machine learning that can significantly influence the performance of algorithms in various applications. Its main goal is to capture the relationships within the input data and learn feature representations that retain the most useful information of the original data. Such representations can be further leveraged in many machine learning applications, such as clustering, natural language analysis, and recommender systems. In this dissertation, we first present a theoretical framework for solving a broad class of non-convex optimization problems. The proposed method is applicable to various representation learning tasks, such as discriminative dimensionality reduction and graph matching. We perform experiments on benchmark graph matching datasets to verify the effectiveness of the proposed approach in finding a near-optimal match between two graphs. In addition, we empirically corroborate the capability of deep models to extract complex underlying relationships among data samples in two fundamental problems: subspace clustering and domain adaptation. Toward this goal, we propose two novel deep architectures for learning informative, high-quality representations that considerably boost the performance of existing algorithms. Our experiments demonstrate the potential of the proposed architectures by achieving state-of-the-art results on well-known benchmark datasets. Brief descriptions of the three projects included in this dissertation follow.

Convex Relaxation of Bilinear Matrix Inequalities: Many interesting machine learning problems are inherently non-convex and computationally hard to solve. We study the applicability of convex relaxation techniques to finding solutions to these problems. We develop a novel and computationally efficient convexification technique that relies on convex quadratic constraints to transform a class of non-convex problems, known as bilinear matrix inequalities (BMIs), into convex surrogates whose solutions can be obtained efficiently using standard convex optimization approaches. We study the theoretical aspects of the proposed convexification algorithm and investigate the conditions under which it is guaranteed to produce feasible solutions for the BMI problem. Because the BMI formulation encompasses a wide range of non-convex problems, the proposed algorithm is applicable to many machine learning problems, such as dimensionality reduction, minimum volume ellipsoid, graph matching, and matrix completion. To evaluate the effectiveness of the proposed procedure, we use the idea of sequential relaxation to solve the graph matching problem. To this end, we first propose a novel convex formulation of the problem and then develop a numerical algorithm based on the alternating direction method of multipliers to solve the convexified formulation. The results of our experiments on two benchmark graph matching datasets demonstrate the potential of the proposed algorithm in finding high-quality solutions.
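For concreteness, the sketch below states a standard BMI feasibility problem and one well-known convex quadratic over-estimator of a bilinear term, the basic building block of sequential convex relaxation. This is an illustrative textbook instance in generic notation (the matrices A_0, A_i, B_{ij} and the linearization point \hat{x} are assumptions here), not necessarily the exact construction developed in the dissertation.

% A BMI feasibility problem in the decision variable x \in \mathbb{R}^n,
% with given symmetric matrices A_0, A_i, B_{ij}:
\[
  \text{find } x \in \mathbb{R}^n
  \quad \text{s.t.} \quad
  A_0 + \sum_{i=1}^{n} x_i A_i + \sum_{i=1}^{n} \sum_{j=1}^{n} x_i x_j B_{ij} \preceq 0 .
\]
% The bilinear terms x_i x_j make the constraint non-convex. Writing
%   x_i x_j = \tfrac{1}{4}(x_i + x_j)^2 - \tfrac{1}{4}(x_i - x_j)^2
% and bounding the concave part by its tangent at a current estimate \hat{x}
% (using -t^2 \le -2\hat{t}\,t + \hat{t}^2 for all t) yields the convex
% quadratic over-estimator
\[
  x_i x_j \;\le\; \tfrac{1}{4}(x_i + x_j)^2
  \;-\; \tfrac{1}{2}(\hat{x}_i - \hat{x}_j)(x_i - x_j)
  \;+\; \tfrac{1}{4}(\hat{x}_i - \hat{x}_j)^2 ,
\]
% which holds with equality at x = \hat{x}; solving the resulting convex
% surrogate and re-linearizing around each new estimate is the essence of a
% sequential relaxation scheme.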
Multi-Level Representation Learning for Deep Subspace Clustering: Subspace clustering is an unsupervised learning task with a variety of machine learning applications, such as motion segmentation and face clustering. Its primary goal is to partition a set of data samples, drawn from a union of low-dimensional subspaces, into disjoint clusters such that the samples within each cluster belong to the same subspace. This project proposes a novel deep subspace clustering approach that uses convolutional autoencoders to transform input images into new representations lying on a union of linear subspaces. The first contribution of our method is to insert multiple fully connected linear layers between the encoder layers and their corresponding decoder layers to promote learning representations better suited to subspace clustering. These connection layers facilitate feature learning by combining low-level and high-level information, generating multiple sets of self-expressive and informative representations at different levels of the encoder. Moreover, we introduce a novel loss minimization model that leverages an initial clustering of the samples to effectively fuse the multi-level representations and recover the underlying subspaces more accurately. The loss function is minimized through an iterative scheme that alternately updates the network parameters and produces a new clustering of the samples until convergence. Our experiments on four real-world datasets demonstrate that the proposed approach outperforms state-of-the-art methods on most subspace clustering problems.

Class Conditional Alignment for Partial Domain Adaptation: Adversarial adaptation models have made significant progress in transferring knowledge from a labeled source dataset to an unlabeled target dataset. Partial domain adaptation (PDA) addresses scenarios in which the source domain is large and diverse and the target label space is a subset of the source label space. The main purpose of PDA is to identify the classes shared between the two domains and promote learning transferable knowledge from those classes. In this project, we propose a multi-class adversarial architecture for PDA. The proposed approach jointly aligns the marginal and class-conditional distributions in the shared label space by optimizing a novel multi-class adversarial loss function in a minimax fashion. Furthermore, we incorporate effective regularization terms that encourage selecting the most relevant subset of source-domain classes. In the absence of target labels, the proposed approach effectively learns domain-invariant feature representations, which in turn enhance classification performance in the target domain. Our comprehensive experiments on two benchmark datasets, Office-31 and Office-Home, corroborate the effectiveness of the proposed approach on a range of partial transfer learning tasks.
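To make the class-selection idea concrete, here is a minimal PyTorch sketch of class-weighted adversarial alignment, a common mechanism in partial domain adaptation: source samples are down-weighted when their class appears irrelevant to the target. Every name and design choice below (GradReverse, estimate_class_weights, the binary domain discriminator) is an illustrative assumption, not the dissertation's actual architecture or code.

# Minimal sketch of class-weighted adversarial alignment for PDA.
# Assumes an external feature extractor, classifier head, and domain
# discriminator are defined elsewhere; all names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient in
    the backward pass, so one optimizer step realizes the adversarial minimax."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

def estimate_class_weights(classifier: nn.Module, tgt_feats: torch.Tensor) -> torch.Tensor:
    """Average the classifier's soft predictions over unlabeled target
    features; source-only ("outlier") classes tend to receive low mass."""
    with torch.no_grad():
        w = F.softmax(classifier(tgt_feats), dim=1).mean(dim=0)
    return w / w.max()  # normalize so the most relevant class has weight 1

def weighted_adversarial_loss(disc: nn.Module,
                              src_feats: torch.Tensor, src_labels: torch.Tensor,
                              tgt_feats: torch.Tensor, class_w: torch.Tensor,
                              lam: float = 1.0) -> torch.Tensor:
    """Domain-discrimination loss with per-sample weights on the source side,
    so alignment focuses on classes the domains are likely to share."""
    d_src = disc(GradReverse.apply(src_feats, lam)).squeeze(1)  # disc outputs (N, 1) logits
    d_tgt = disc(GradReverse.apply(tgt_feats, lam)).squeeze(1)
    w = class_w[src_labels]  # weight each source sample by its class relevance
    loss_src = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src), weight=w)
    loss_tgt = F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))
    return loss_src + loss_tgt

In a full training loop, one would add the usual cross-entropy loss on labeled source samples and periodically refresh the class weights from the current classifier, since the relevance estimates improve as the feature distributions align.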
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.subject  Deep learning
dc.subject  Machine learning
dc.subject  Subspace clustering
dc.subject  Partial domain adaptation
dc.subject  Optimization
dc.subject  Non-convex optimization
dc.title  Deep Representation Learning for Clustering and Domain Adaptation
dc.type  Thesis
dc.degree.department  Computer Science and Engineering
dc.degree.name  Doctor of Philosophy in Computer Science
dc.date.updated  2020-01-10T20:42:39Z
thesis.degree.department  Computer Science and Engineering
thesis.degree.grantor  The University of Texas at Arlington
thesis.degree.level  Doctoral
thesis.degree.name  Doctor of Philosophy in Computer Science
dc.type.material  text

