
dc.contributor.author: Aswathappa, Babu Hemanth Kumar
dc.date.accessioned: 2011-03-03T21:53:04Z
dc.date.available: 2011-03-03T21:53:04Z
dc.date.issued: 2011-03-03
dc.date.submitted: January 2010
dc.identifier.other: DISS-10859
dc.identifier.uri: http://hdl.handle.net/10106/5510
dc.description.abstract: A batch training algorithm for feed-forward networks is proposed that uses Newton's method to estimate a vector of optimal scaling factors for the output errors in the network. This vector is then used in backpropagation to modify the weights feeding into the hidden units, after which linear equations are solved for the network's output weights. Elements of the new method's Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the total network's Hessian. The effect of output transformation on training a feed-forward network is reviewed and explained using the concept of equivalent networks. In several examples, the new method performs better than backpropagation and conjugate gradient while requiring a similar number of multiplies. It performs about as well as Levenberg-Marquardt but requires several orders of magnitude fewer multiplies, owing to the small size of its Hessian.
dc.description.sponsorship: Manry, Michael T.
dc.language.iso: en
dc.publisher: Electrical Engineering
dc.title: Optimal Output Gain Algorithm For Feed-forward Network Training
dc.type: M.S.
dc.contributor.committeeChair: Manry, Michael T.
dc.degree.department: Electrical Engineering
dc.degree.discipline: Electrical Engineering
dc.degree.grantor: University of Texas at Arlington
dc.degree.level: masters
dc.degree.name: M.S.
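
The abstract above outlines a three-part training loop: estimate per-output gains for the output errors with Newton's method, backpropagate the gain-scaled errors into the weights feeding the hidden units, and solve linear equations for the output weights. The sketch below illustrates that loop in NumPy. It is written from the abstract alone, not from the thesis: the single-hidden-layer architecture, tanh activations, regularized least-squares output solve, closed-form per-output gain (a stand-in for the thesis's Newton-based estimate), learning rate, and the name train_oog_sketch are all assumptions made for illustration.

import numpy as np

def train_oog_sketch(X, T, n_hidden=10, epochs=50, lr=0.05, seed=0):
    """Illustrative batch trainer for a single-hidden-layer feed-forward network.

    Per epoch: (1) solve linear equations for the output weights,
    (2) estimate a per-output gain for the output errors,
    (3) backpropagate the gain-scaled errors to the hidden-layer weights.
    """
    rng = np.random.default_rng(seed)
    N, n_in = X.shape
    n_out = T.shape[1]
    Xa = np.hstack([X, np.ones((N, 1))])                   # inputs with a bias column
    W = rng.normal(scale=0.1, size=(n_hidden, n_in + 1))   # input-to-hidden weights

    for _ in range(epochs):
        H = np.tanh(Xa @ W.T)                              # hidden-unit activations
        Ha = np.hstack([H, np.ones((N, 1))])               # hidden outputs with bias

        # Output weights from linear equations (regularized least squares).
        G = Ha.T @ Ha + 1e-6 * np.eye(n_hidden + 1)
        Wo = np.linalg.solve(G, Ha.T @ T)                  # (n_hidden+1, n_out)

        Y = Ha @ Wo                                        # network outputs
        E = T - Y                                          # output errors

        # Per-output gains: the exact minimizer of the quadratic
        # sum_p (t_kp - g_k * y_kp)^2, i.e. what a single Newton step on that
        # quadratic yields. This is only a stand-in for the thesis's gain estimate.
        gains = np.sum(T * Y, axis=0) / (np.sum(Y * Y, axis=0) + 1e-12)

        # Backpropagate the gain-scaled errors to the input-to-hidden weights.
        E_scaled = E * gains
        delta_h = (E_scaled @ Wo[:-1].T) * (1.0 - H ** 2)  # tanh derivative
        W += lr * (delta_h.T @ Xa) / N                     # gradient step

    return W, Wo

# Example usage on synthetic data (illustrative only):
# rng = np.random.default_rng(1)
# X = rng.normal(size=(200, 4))
# T = np.hstack([np.sin(X[:, :1]), X[:, 1:2] ** 2])
# W, Wo = train_oog_sketch(X, T)

Because the output weights come from linear equations and the gain vector has only one entry per network output, the matrix handled at each iteration stays small, which is consistent with the abstract's claim of far fewer multiplies than Levenberg-Marquardt at comparable accuracy.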

