Show simple item record

dc.contributor.author            Jesudhas, Praveen                    en_US
dc.date.accessioned              2011-03-03T21:52:33Z
dc.date.available                2011-03-03T21:52:33Z
dc.date.issued                   2011-03-03
dc.date.submitted                January 2010                         en_US
dc.identifier.other              DISS-10915                           en_US
dc.identifier.uri                http://hdl.handle.net/10106/5476
dc.description.abstract          The effects of transforming the net function vector in the Multilayer Perceptron (MLP) are analyzed. The use of optimal diagonal transformation matrices on the net function vector is proved to be equivalent to training the MLP using multiple optimal learning factors (MOLF). A method for linearly compressing large, ill-conditioned MOLF Hessian matrices into smaller, well-conditioned ones is developed. This compression approach is shown to be equivalent to using several hidden units per learning factor. The technique is extended to large networks. In simulations, the proposed algorithm performs almost as well as the Levenberg-Marquardt (LM) algorithm while retaining the computational complexity of a first-order training algorithm.    en_US
dc.description.sponsorship       Manry, Michael                       en_US
dc.language.iso                  en                                   en_US
dc.publisher                     Electrical Engineering               en_US
dc.title                         Analysis And Improvement Of Multiple Optimal Learning Factors For Feed-forward Networks    en_US
dc.type                          M.S.                                 en_US
dc.contributor.committeeChair    Manry, Michael T.                    en_US
dc.degree.department             Electrical Engineering               en_US
dc.degree.discipline             Electrical Engineering               en_US
dc.degree.grantor                University of Texas at Arlington     en_US
dc.degree.level                  masters                              en_US
dc.degree.name                   M.S.                                 en_US
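
The MOLF scheme summarized in the abstract lends itself to a short numerical illustration. The sketch below is not taken from the thesis; it is a minimal NumPy interpretation of the idea, assuming a one-hidden-layer MLP with sigmoid hidden units and linear outputs. One learning factor per hidden unit is found by solving a small Gauss-Newton system, and an optional grouping step compresses that system by summing rows and columns, which corresponds to sharing a factor across several hidden units. The names molf_step and n_groups, and the round-robin grouping, are illustrative assumptions, not the author's implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def molf_step(X, W, Wo, T, G, n_groups=None):
        """One multiple-optimal-learning-factor update of the input weights W.

        X  : (N, n_in+1)   inputs with a bias column
        W  : (n_h, n_in+1) input weights
        Wo : (n_out, n_h+1) output weights (last column multiplies the bias)
        T  : (N, n_out)    targets
        G  : (n_h, n_in+1) update direction for W, e.g. from backpropagation

        Instead of one scalar learning factor, a factor z_k per hidden
        unit is found by solving the small Gauss-Newton system H z = g.
        If n_groups is given, the system is compressed by summing rows
        and columns over groups of hidden units.
        """
        N = X.shape[0]
        net = X @ W.T                  # (N, n_h) net function vector
        O = sigmoid(net)               # hidden-unit activations
        Op = O * (1.0 - O)             # sigmoid derivative
        dnet = X @ G.T                 # change in net per unit step along G

        Oa = np.hstack([O, np.ones((N, 1))])
        E = T - Oa @ Wo.T              # (N, n_out) output error

        # Sensitivity of output i to factor z_k: Wo[i,k] * f'(net_k) * dnet_k
        U = Wo[:, :-1][None, :, :] * (Op * dnet)[:, None, :]  # (N, n_out, n_h)

        g = np.einsum('nik,ni->k', U, E) / N      # gradient w.r.t. the z_k
        H = np.einsum('nik,nil->kl', U, U) / N    # Gauss-Newton Hessian

        if n_groups is not None:
            # Compress: round-robin group assignment, then sum rows/columns
            n_h = H.shape[0]
            A = np.zeros((n_groups, n_h))
            A[np.arange(n_h) % n_groups, np.arange(n_h)] = 1.0
            zc = np.linalg.lstsq(A @ H @ A.T, A @ g, rcond=None)[0]
            z = A.T @ zc               # units in a group share one factor
        else:
            z = np.linalg.lstsq(H, g, rcond=None)[0]

        return W + z[:, None] * G      # per-hidden-unit learning factors

    # Example: 4 inputs, 10 hidden units, 2 outputs, 200 patterns
    rng = np.random.default_rng(0)
    X = np.hstack([rng.normal(size=(200, 4)), np.ones((200, 1))])
    W = rng.normal(size=(10, 5))
    Wo = rng.normal(size=(2, 11))
    T = rng.normal(size=(200, 2))
    G = rng.normal(size=(10, 5))       # stand-in for the BP direction
    W_new = molf_step(X, W, Wo, T, G, n_groups=3)

Solving the compressed n_groups-by-n_groups system in place of the full one mirrors the conditioning argument in the abstract: summing rows and columns of an ill-conditioned Hessian yields a smaller, better-conditioned matrix, at the cost of tying grouped hidden units to a shared learning factor.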

