Show simple item record

dc.contributor.advisorMadani, Ramtin
dc.creatorAdil, Muhammad
dc.date.accessioned2022-01-25T18:22:26Z
dc.date.available2022-01-25T18:22:26Z
dc.date.created2021-12
dc.date.issued2021-12-16
dc.date.submittedDecember 2021
dc.identifier.urihttp://hdl.handle.net/10106/30218
dc.description.abstractMany real-world problems from application areas such as engineering, finance, and operations research can be cast as optimization problems. Generally, the goal is to optimize an objective function subject to a set of constraints. Traditionally, convex optimization problems are solved by interior-point methods (IPMs), which achieve high accuracy for moderately sized problems. However, the per-iteration computational cost of these iterative algorithms grows non-linearly with the dimension of the problem. Although interior-point methods are robust and theoretically sound, they do not scale well to very large conic optimization programs: computational cost, memory requirements, and incompatibility with distributed platforms are among the major impediments to their use on large-scale, practical conic programs. The rapid growth of problem sizes in application areas such as power systems, finance, signal processing, and machine learning has motivated researchers to develop computationally efficient optimization solvers. In recent years, first-order methods have received particular attention for solving large convex optimization problems, and various solvers based on first-order numerical algorithms have been developed over the past decade. Although first-order methods provide only low-accuracy solutions, their inexpensive iterations and low computational cost make them attractive mathematical tools for handling large-scale problems. A major shortcoming that prevents first-order methods from reaching higher accuracy is their slow tail convergence. The first part of this work attempts to remedy this slow convergence by proposing an adaptive conditioning heuristic policy.
First, a parallelizable numerical algorithm is proposed that can handle large-scale conic programs on distributed platforms such as graphics processing units (GPUs) with orders-of-magnitude time improvement. A mathematical proof of global convergence for the proposed algorithm is provided. Over the past decade, several preconditioning methods have been applied to improve the condition number and convergence of first-order methods; diagonal preconditioning and matrix equilibration are the most commonly used. In contrast to these existing techniques, this work argues that the condition number of the problem data is not a reliable predictor of convergence speed. In light of this observation, an adaptive conditioning heuristic is proposed that enables higher accuracy than other first-order numerical algorithms. A wide range of experiments on large-scale linear programming and second-order cone programming problems demonstrates the scalability and computational advantages of the proposed algorithm over commercial and open-source solvers. Solving the linear system is typically the most computationally expensive step in first-order methods. Existing approaches rely on either direct or indirect methods for solving these systems of equations. Direct methods depend on factorization techniques, which usually destroy the sparsity structure of the original sparse problem and hence become computationally prohibitive. Indirect methods, by contrast, are iterative; various preconditioned variants have been studied in the literature to improve their accuracy, but the preconditioners again do not necessarily preserve the sparsity patterns of the original problems. In the second part of this work, a matrix-free first-order approach is proposed for solving large-scale sparse conic optimization problems.
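The direct-versus-indirect trade-off described above can be illustrated with a minimal sketch. The system below is a hypothetical sparse positive-definite system of the kind that arises inside first-order solvers; it is not taken from the thesis, and all names are illustrative. A sparse LU factorization (direct) suffers fill-in that alters the sparsity structure, while conjugate gradient (indirect) only needs matrix-vector products and leaves the matrix untouched:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, splu

# Hypothetical sparse SPD system A x = b (illustrative sizes only).
n = 2000
rng = np.random.default_rng(0)
B = sp.random(n, n, density=0.001, random_state=rng, format="csc")
A = (B @ B.T + sp.identity(n)).tocsc()   # make the matrix SPD
b = rng.standard_normal(n)

# Direct method: sparse LU factorization. Fill-in during the
# factorization can destroy the sparsity of the original matrix.
lu = splu(A)
x_direct = lu.solve(b)

# Indirect (matrix-free) method: conjugate gradient touches A only
# through matrix-vector products, so its sparsity is never altered.
x_cg, info = cg(A, b)

# Both solutions should agree up to the iterative tolerance.
print(np.linalg.norm(x_direct - x_cg))
```

Note how `splu` stores an explicit factorization, while `cg` could equally be given a `LinearOperator` that never materializes `A` at all; this is the sense in which indirect methods enable matrix-free solvers.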
This method is based on an easy-to-compute decomposition of large sparse matrices into two factors. The proposed numerical algorithm combines this matrix-free decomposition with the alternating direction method of multipliers (ADMM). The iterations of the resulting algorithm are computationally cheap, highly parallelizable, and enjoy closed-form solutions. The algorithm can easily be implemented on distributed platforms such as graphics processing units, with orders-of-magnitude time improvements. Its performance is demonstrated on a variety of conic problems, and the performance gain is compared with competing first-order solvers.
dc.format.mimetypeapplication/pdf
dc.language.isoen_US
dc.subjectNumerical algorithms
dc.subjectConic optimization
dc.subjectFirst order methods
dc.subjectSparse problems
dc.subjectAdaptive conditioning
dc.subjectMatrix-free algorithms
dc.titleFast and Parallelizable Numerical Algorithms for Large Scale Conic Optimization Problems
dc.typeThesis
dc.degree.departmentElectrical Engineering
dc.degree.nameDoctor of Philosophy in Electrical Engineering
dc.date.updated2022-01-25T18:22:26Z
thesis.degree.departmentElectrical Engineering
thesis.degree.grantorThe University of Texas at Arlington
thesis.degree.levelDoctoral
thesis.degree.nameDoctor of Philosophy in Electrical Engineering
dc.type.materialtext

