
dc.contributor.advisor: Huber, Manfred
dc.creator: Fakoor, Rasool
dc.date.accessioned: 2017-10-02T15:05:40Z
dc.date.available: 2017-10-02T15:05:40Z
dc.date.created: 2017-08
dc.date.issued: 2017-08-25
dc.date.submitted: August 2017
dc.identifier.uri: http://hdl.handle.net/10106/27000
dc.description.abstract: Although recent work on neural architectures has shown promising results on tasks such as image recognition, object detection, and playing Atari games, learning a mapping from a visual space to a language space, or vice versa, remains challenging in problems such as image/video captioning and question answering. Furthermore, transferring knowledge between seen and unseen classes in a setting such as zero-shot learning is difficult because the model must make predictions for novel test data belonging to classes for which no examples were seen during training. To address these issues, this dissertation first introduces a novel memory-based attention model for video description. Attention-based models have shown promising results for image captioning; however, they cannot model the higher-order interactions involved in problems such as video description/captioning, where the relationship between parts of the video and the concepts being depicted is complex. The proposed model utilizes memories of past attention when reasoning about where to attend at the current time step. Second, this dissertation introduces an end-to-end deep neural network model for attribute-based zero-shot learning with layer-specific regularization that encourages the higher, class-level layers to generalize beyond the training classes. This architecture enables the model to 'transfer' knowledge learned from seen training images to a set of novel, unseen test images.
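
The memory-based attention idea summarized in the abstract (conditioning the current attention step on a record of where the model attended previously) can be illustrated with a small sketch. The following is a minimal, hypothetical PyTorch module, not the dissertation's actual architecture: the tensor shapes, layer names, and the running-average memory update are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttention(nn.Module):
    """Soft attention over per-frame video features that also conditions on a
    memory of past attention weights (illustrative sketch only)."""

    def __init__(self, feat_dim, hidden_dim, n_frames):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, hidden_dim)    # project frame features
        self.w_state = nn.Linear(hidden_dim, hidden_dim) # project decoder state
        self.w_mem = nn.Linear(n_frames, hidden_dim)     # project attention memory
        self.v = nn.Linear(hidden_dim, 1)                # scalar attention score

    def forward(self, feats, state, attn_memory):
        # feats:       (batch, n_frames, feat_dim)  per-frame video features
        # state:       (batch, hidden_dim)          current decoder hidden state
        # attn_memory: (batch, n_frames)            summary of past attention maps
        scores = self.v(torch.tanh(
            self.w_feat(feats)
            + self.w_state(state).unsqueeze(1)
            + self.w_mem(attn_memory).unsqueeze(1)
        )).squeeze(-1)                                   # (batch, n_frames)
        alpha = F.softmax(scores, dim=-1)                # attention over frames
        context = torch.bmm(alpha.unsqueeze(1), feats).squeeze(1)  # weighted sum
        # update the memory, here as a simple running average of attention maps
        new_memory = 0.9 * attn_memory + 0.1 * alpha
        return context, alpha, new_memory
```

At each decoding step, a caption decoder would call this module with its hidden state and the memory returned from the previous step, so that the score for each frame depends on the history of attention as well as on the current state.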
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.subject: Video captioning
dc.subject: Attention model
dc.subject: Deep learning
dc.subject: Transfer learning
dc.subject: Imposing structure
dc.subject: Differentiable memory
dc.title: Neural Image and Video Understanding
dc.type: Thesis
dc.degree.department: Computer Science and Engineering
dc.degree.name: Doctor of Philosophy in Computer Science
dc.date.updated: 2017-10-02T15:06:45Z
thesis.degree.department: Computer Science and Engineering
thesis.degree.grantor: The University of Texas at Arlington
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy in Computer Science
dc.type.material: text

