dc.contributor.advisor | Manry, Michael T. | |
dc.creator | Kheirkhah, Parastoo | |
dc.date.accessioned | 2016-10-25T19:23:20Z | |
dc.date.available | 2016-10-25T19:23:20Z | |
dc.date.created | 2016-08 | |
dc.date.issued | 2016-08-17 | |
dc.date.submitted | August 2016 | |
dc.identifier.uri | http://hdl.handle.net/10106/26120 | |
dc.description.abstract | A systematic two-step batch approach for constructing a sparse neural network is presented. Unlike other sparse neural networks, the proposed paradigm uses orthogonal least squares (OLS) to train the network. OLS-based pruning is proposed to induce sparsity in the network: based on the usefulness of the basis functions in the hidden units, the weights connecting the outputs to the hidden units and the outputs to the inputs are modified to form a sparse neural network. The proposed hybrid training algorithm is compared with a fully connected MLP and with a sparse softmax classifier trained by a second-order algorithm. Simulation results show that the proposed algorithm yields significant improvements in convergence speed, network size, generalization, and ease of training over the fully connected MLP. The proposed training algorithm is analyzed on a variety of linear and nonlinear data files, and its ability is further substantiated by clearly differentiating two separate data sets when they are fed into the algorithm. Experimental results are reported using 10-fold cross-validation. Inducing sparsity in a fully connected neural network, pruning of hidden units, Newton's method for optimization, and orthogonal least squares are the subject matter of the present work. | |
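The abstract describes ranking hidden units by the usefulness of their basis functions via orthogonal least squares and then pruning the least useful ones. The thesis itself is not reproduced in this record, so the sketch below is only an illustration of the generic OLS error-reduction-ratio ranking that such pruning typically builds on; all function names, the `keep_fraction` threshold, and the Gram-Schmidt formulation are assumptions, not the author's implementation.

```python
import numpy as np

def ols_rank_hidden_units(Phi, y):
    """Greedily rank the columns of Phi (hidden-unit outputs over the
    training set) by their OLS error-reduction ratio with respect to
    target vector y, using classical Gram-Schmidt orthogonalization."""
    N, H = Phi.shape
    remaining = list(range(H))
    Q = []                      # orthogonalized columns selected so far
    order, ratios = [], []
    yty = float(y @ y)
    for _ in range(H):
        best_k, best_err, best_w = None, -1.0, None
        for k in remaining:
            w = Phi[:, k].astype(float).copy()
            for q in Q:         # remove components along chosen basis
                w -= (q @ w) / (q @ q) * q
            denom = float(w @ w)
            if denom < 1e-12:   # column is linearly dependent; skip it
                continue
            err = (w @ y) ** 2 / (denom * yty)  # error-reduction ratio
            if err > best_err:
                best_k, best_err, best_w = k, err, w
        if best_k is None:
            break
        remaining.remove(best_k)
        Q.append(best_w)
        order.append(best_k)
        ratios.append(best_err)
    return order, ratios

def prune(order, ratios, keep_fraction=0.99):
    """Keep the smallest prefix of ranked units whose cumulative
    error-reduction ratio reaches keep_fraction; prune the rest."""
    total, kept = 0.0, []
    for k, r in zip(order, ratios):
        kept.append(k)
        total += r
        if total >= keep_fraction:
            break
    return kept
```

For a hidden unit whose output already explains the target, the ratio is near 1 and the unit is kept; units contributing little to the cumulative ratio are pruned, which is one common way to decide network size automatically rather than fixing it in advance.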
dc.format.mimetype | application/pdf | |
dc.language.iso | en_US | |
dc.subject | Neural networks | |
dc.subject | Sparsity | |
dc.subject | Second order algorithm | |
dc.subject | Orthogonal least squares | |
dc.subject | Hessian matrix | |
dc.title | SECOND ORDER ALGORITHM FOR SPARSELY CONNECTED NEURAL NETWORKS | |
dc.type | Thesis | |
dc.degree.department | Electrical Engineering | |
dc.degree.name | Master of Science in Electrical Engineering | |
dc.date.updated | 2016-10-25T19:24:23Z | |
thesis.degree.department | Electrical Engineering | |
thesis.degree.grantor | The University of Texas at Arlington | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science in Electrical Engineering | |
dc.type.material | text | |
Files in this item
- Name: KHEIRKHAH-THESIS-2016.pdf
- Size: 3.698 MB
- Format: PDF