Show simple item record

dc.contributor.advisor: Kamangar, Farhad
dc.contributor.advisor: Madani, Ramtin
dc.creator: Zohrizadeh, Fariba
dc.date.accessioned: 2019-08-27T20:28:12Z
dc.date.available: 2019-08-27T20:28:12Z
dc.date.created: 2019-08
dc.date.issued: 2019-08-01
dc.date.submitted: August 2019
dc.identifier.uri: http://hdl.handle.net/10106/28620
dc.description.abstract: This dissertation is concerned with modeling fundamental and challenging machine learning tasks as convex/non-convex optimization problems and designing mechanisms that solve them in a cost- and time-effective manner. Extensive theoretical and practical studies are carried out to give deeper insight into the robustness and effectiveness of the formulated problems. In what follows, we investigate several well-known tasks that frequently arise in machine learning applications.

Image Segmentation: Image segmentation is a fundamental and challenging task in computer vision with diverse applications in various areas. One of the major challenges in image segmentation is determining the optimal number of coherent regions. This dissertation develops a novel and highly parallelizable convex model that takes into account the spatial relationship between image features and simultaneously determines the number of clusters and their associated members. To solve the model, a computationally efficient algorithm is presented based on the alternating direction method of multipliers (ADMM). Extensive experiments on benchmark image segmentation datasets demonstrate that the proposed method provides high-quality results competitive with existing state-of-the-art methods.

Convex Relaxation for Solving Optimization Problems with Orthogonality Constraints: A class of optimization problems with orthogonality constraints has been used to model various applications in machine learning, such as discriminative dimensionality reduction, graph matching, and dictionary learning. Such optimization problems involve non-convex nonlinear equality constraints, which substantially increase their computational complexity. In this dissertation, we develop a sequential approach based on parabolic relaxation that finds an orthogonal matrix minimizing a non-convex, non-smooth objective function subject to additional quadratic constraints. We prove that, under very mild assumptions, the proposed approach is guaranteed to provide a feasible solution to the original non-convex problem. The effectiveness of the proposed scheme is corroborated on the problems of discriminative dimensionality reduction and graph clustering.

Convex Relaxation for Training Neural Networks: Training a neural network is formulated as a complex optimization problem that is non-convex and inherently hard to solve. In this dissertation, we propose a novel convexification approach that reduces the training problem to solving a sequence of polynomial-time-solvable convex surrogates. The proposed approach, called convexified neural network (Convex-NN), jointly estimates the network parameters of all layers and can admit a wide range of additional convex constraints. We theoretically prove that Convex-NN is guaranteed to converge under mild conditions and conduct empirical experiments to corroborate the effectiveness of the method.

Class Subset Selection for Partial Domain Adaptation: Deep neural networks have demonstrated superior performance on a variety of machine learning problems. These impressive achievements often rely on the availability of large amounts of labeled training data. However, in many applications, acquiring sufficient labeled data is difficult and time-consuming. This provides strong motivation to reduce labeling cost and effort by learning effective predictive models on richly annotated datasets and transferring the knowledge to unlabeled datasets from different but related domains. This dissertation proposes an adversarial method for the problem of partial domain adaptation, in which the target label space is a subset of the source label space. Empirical results demonstrate the high potential of the proposed approach on different partial domain adaptation tasks.
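The segmentation model described above pairs a convex clustering formulation with an ADMM solver, where the number of clusters emerges from the optimization rather than being fixed in advance. The sketch below illustrates that general idea on the classical sum-of-norms convex clustering problem, not the dissertation's exact model: a fusion penalty merges point centroids, and the regularization weight `lam` (rather than a preset cluster count) controls how many distinct centroids survive. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold_rows(A, t):
    """Block (row-wise) soft-thresholding: shrink each row's 2-norm by t."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * A

def convex_clustering_admm(X, lam=1.0, rho=1.0, iters=200):
    """ADMM for sum-of-norms (convex) clustering over all point pairs:
        min_U  0.5 * ||U - X||_F^2 + lam * sum_{i<j} ||u_i - u_j||_2
    Points whose centroid rows u_i coincide at convergence share a cluster,
    so lam, not a preset k, determines the number of clusters.
    Illustrative sketch only, not the dissertation's segmentation model."""
    n, d = X.shape
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(pairs)
    # Difference operator D: (D @ U)[l] = u_i - u_j for pair l = (i, j)
    D = np.zeros((m, n))
    for l, (i, j) in enumerate(pairs):
        D[l, i], D[l, j] = 1.0, -1.0
    V = np.zeros((m, d))            # auxiliary pairwise differences
    Z = np.zeros((m, d))            # scaled dual variables
    A = np.eye(n) + rho * D.T @ D   # constant matrix for the U-update
    for _ in range(iters):
        # U-update: regularized least squares (linear system solve)
        U = np.linalg.solve(A, X + rho * D.T @ (V - Z))
        DU = D @ U
        # V-update: block soft-thresholding enforces the fusion penalty
        V = soft_threshold_rows(DU + Z, lam / rho)
        # Dual ascent on the consensus constraint D @ U = V
        Z += DU - V
    return U
```

Each ADMM step is cheap and the V- and Z-updates decompose across pairs, which is the kind of structure that makes such schemes highly parallelizable; for images, the all-pairs penalty would typically be restricted to spatially neighboring pixels.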
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.subject: Deep learning
dc.subject: Machine learning
dc.subject: Nonlinear optimization
dc.subject: Transfer learning
dc.subject: Image segmentation
dc.subject: Subset selection
dc.subject: Convex relaxation
dc.title: Convex and Non-convex Optimization Methods for Machine Learning
dc.type: Thesis
dc.degree.department: Computer Science and Engineering
dc.degree.name: Doctor of Philosophy in Computer Science
dc.date.updated: 2019-08-27T20:30:21Z
thesis.degree.department: Computer Science and Engineering
thesis.degree.grantor: The University of Texas at Arlington
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy in Computer Science
dc.type.material: text

