
dc.contributor.advisor: Manry, Michael T.
dc.creator: Nguyen, Son Nam
dc.date.accessioned: 2019-08-27T18:53:17Z
dc.date.available: 2019-08-27T18:53:17Z
dc.date.created: 2019-08
dc.date.issued: 2019-08-06
dc.date.submitted: August 2019
dc.identifier.uri: http://hdl.handle.net/10106/28604
dc.description.abstract: Training methods for both shallow and deep neural nets are dominated by first-order algorithms related to back propagation and conjugate gradient. However, these methods lack affine invariance, so their performance is degraded by nonzero input means, dependent inputs, dependent hidden units, and the use of only one learning factor. This dissertation reviews affine invariance and shows how MLP training can be made partially affine invariant when Newton's method is used to train small numbers of MLP parameters (see the sketch following this record). Several novel methods are proposed for scalable, partially affine invariant MLP training. The potential application of the algorithm to deep learning is discussed. Ten-fold testing errors for several datasets show that the proposed algorithm outperforms back propagation and conjugate gradient, and that it scales far better than Levenberg-Marquardt.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.subject: Back propagation
dc.subject: Vanishing gradient
dc.subject: Balanced gradient
dc.title: AFFINE INVARIANCE IN MULTILAYER PERCEPTRON TRAINING
dc.type: Thesis
dc.degree.department: Electrical Engineering
dc.degree.name: Doctor of Philosophy in Electrical Engineering
dc.date.updated: 2019-08-27T18:55:28Z
thesis.degree.department: Electrical Engineering
thesis.degree.grantor: The University of Texas at Arlington
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy in Electrical Engineering
dc.type.material: text
dc.creator.orcid: 0000-0001-9409-4738
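
The sketch below is a minimal illustration of the idea named in the abstract, not the dissertation's proposed algorithm: it applies Newton's method to a small subset of MLP parameters, here the output-layer weights of a one-hidden-layer MLP whose hidden weights are held fixed. That sub-problem is quadratic, so a single Newton step reaches its minimum, and the step is invariant under a nonsingular transformation of the hidden-unit activations, which a plain gradient step is not. All names, network sizes, and data in the sketch are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N patterns, n_in inputs, one target (all assumed for the example).
N, n_in, n_hid = 200, 5, 8
X = rng.normal(size=(N, n_in))
t = np.sin(X @ rng.normal(size=n_in))          # arbitrary smooth target

# Fixed, randomly initialized hidden layer; only the output weights w are trained.
W_hid = rng.normal(size=(n_in, n_hid))
b_hid = rng.normal(size=n_hid)
H = np.tanh(X @ W_hid + b_hid)                 # hidden activations, N x n_hid
H = np.hstack([H, np.ones((N, 1))])            # bias column for the output unit

def mse(Hmat, w):
    e = Hmat @ w - t
    return float(e @ e) / N

def newton_step(Hmat, w0):
    # One Newton step on E(w) = ||Hmat w - t||^2 / N: w1 = w0 - A^{-1} g,
    # with gradient g = (2/N) Hmat^T (Hmat w0 - t) and Hessian A = (2/N) Hmat^T Hmat.
    g = (2.0 / N) * Hmat.T @ (Hmat @ w0 - t)
    A = (2.0 / N) * Hmat.T @ Hmat
    return w0 - np.linalg.solve(A, g)

w0 = rng.normal(size=n_hid + 1)                # arbitrary starting output weights
w1 = newton_step(H, w0)
print("MSE before / after one Newton step:", mse(H, w0), mse(H, w1))

# Invariance of the Newton update on this parameter subset: transform the hidden
# activations by a nonsingular matrix T (H -> H T) and the starting point
# accordingly (w0 -> T^{-1} w0). The Newton step in the new coordinates produces
# exactly the same network outputs; a plain gradient step would not.
T = rng.normal(size=(n_hid + 1, n_hid + 1)) + 3.0 * np.eye(n_hid + 1)
w1_T = newton_step(H @ T, np.linalg.solve(T, w0))
print("Same outputs after transform:", np.allclose(H @ w1, (H @ T) @ w1_T))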

