Show simple item record

dc.contributor.advisor  Huber, Manfred
dc.creator  Hosseini Shirvani, Shirin
dc.date.accessioned  2023-06-27T23:43:47Z
dc.date.available  2023-06-27T23:43:47Z
dc.date.created  2021-08
dc.date.issued  2021-08-12
dc.date.submitted  August 2021
dc.identifier.uri  http://hdl.handle.net/10106/31345
dc.description.abstract  Trees, as acyclic graphs, are ubiquitous for representing different contexts, where they encode connectivity patterns at all scales of organization, from biological systems to social networks. Trees are powerful resources that have been used many times for the exploration and discovery of interactions and properties in different contexts, and tree data structure representations have led to remarkable discoveries in many real-world applications. Over the last decades, extensive research has produced algorithms for tree and acyclic-graph data structures with deep theoretical properties; the cost of solving the associated problems ranges from simple linear-time algorithms to more complex ones that address NP-hard problems. However, trees as acyclic-graph datasets are generally not easily amenable to data-driven techniques and, unlike other big data, do not readily benefit from the growing scale of available data because of their sparse nature. Recently, deep learning has shown significant progress on images, text, and signals, typically with little domain knowledge; however, the combinatorial and discrete nature of tree space, or acyclic-graph data, makes it non-trivial to apply deep learning in this domain. This thesis investigates several aspects of how to build a connection between deep reinforcement learning and classical algorithms for tree space, and introduces the use of reinforcement learning to close the gap between these two complementary approaches. Commonly, greedy, heuristic, and supervised clustering methods provide automated tools to discover hidden built-in structures in the generally complex-shaped, high-dimensional configuration spaces of trees. We show potential applications of such tree transformation tools and sampling strategies to a common problem related to tree distance and manipulation, with applications in genomics, planning, and control.
The first part of the dissertation presents the use of reinforcement learning for the relaxed, deterministic coordination and control of an agent learning a tree edit distance task. We reinterpret this classical task for unsupervised learning as an abstract formalism for identifying and representing tree transformations by relating the continuous space of configurations to the combinatorial space of trees. The second part introduces a generalized approach with automated representation exploration in an edit-neighborhood representation, learning to identify a neighborhood of a tree that captures the local geometric structure of the configuration space around the tree's instantaneous configuration. Based on this edit-neighborhood representation, we use reinforcement learning to learn an NNI (nearest-neighbor-interchange) distance strategy that finds the minimum-cost sequence of operations transforming one tree into another. The third part presents a generic framework for learning representation and behavior on a tree dataset in hyperbolic space with a finite action space, integrating policy learning using reinforcement learning. In this work, we study value function approximation in hyperbolic space for reinforcement learning problems with high-dimensional states and a finite action space, based on generalized representation and policy iteration. The obtained results strongly suggest that reinforcement learning is, indeed, an effective approach for automatically extracting the inherent structure of configuration spaces relevant to solving tree edit distance, and that it may play a key role in the design of computationally efficient planners for complex, high-dimensional configuration spaces in different application domains.
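The edit-distance-as-control framing in the abstract can be illustrated with a toy sketch. The following is a hypothetical, minimal example (not code from the dissertation): states are small labeled strings, standing in for serialized trees; actions are the edit operations insert, delete, and relabel; and value iteration, a simple instance of the generalized policy iteration mentioned above, recovers the minimum number of edits needed to reach a fixed target.

```python
from itertools import product

ALPHABET = "ab"   # toy label alphabet (assumed for illustration)
MAX_LEN = 4       # cap on state size keeps the state space finite
TARGET = "aab"    # the fixed goal configuration

def neighbors(s):
    """All states reachable from s with one edit operation."""
    out = set()
    for i in range(len(s) + 1):          # insertions
        if len(s) < MAX_LEN:
            for c in ALPHABET:
                out.add(s[:i] + c + s[i:])
    for i in range(len(s)):              # deletions and relabels
        out.add(s[:i] + s[i + 1:])
        for c in ALPHABET:
            out.add(s[:i] + c + s[i + 1:])
    out.discard(s)                       # drop the identity relabel
    return out

# Enumerate the full (finite) state space.
states = [""] + ["".join(p) for n in range(1, MAX_LEN + 1)
                 for p in product(ALPHABET, repeat=n)]

# Value iteration: V(TARGET) = 0, else V(s) = 1 + min over successors.
V = {s: 0 if s == TARGET else float("inf") for s in states}
for _ in range(len(states)):             # enough sweeps to converge
    for s in states:
        if s != TARGET:
            V[s] = 1 + min(V[t] for t in neighbors(s))

print(V["ab"])   # 1: one insertion turns "ab" into "aab"
print(V[""])     # 3: three insertions build "aab" from scratch
```

On real trees the state space is far too large to enumerate, which is why the dissertation turns to learned representations and reinforcement learning; the sketch only shows the underlying MDP structure of the edit task.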
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.subject  Deep reinforcement learning
dc.subject  Tree transformation representation
dc.subject  Hyperbolic geometry
dc.subject  Recurrent neural network
dc.title  REPRESENTATION AND STRATEGY LEARNING FOR VARIABLE-SIZE TREE TRANSFORMATION USING REINFORCEMENT LEARNING
dc.type  Thesis
dc.date.updated  2023-06-27T23:43:48Z
thesis.degree.department  Computer Science and Engineering
thesis.degree.grantor  The University of Texas at Arlington
thesis.degree.level  Doctoral
thesis.degree.name  Doctor of Philosophy in Computer Science
dc.type.material  text
local.embargo.terms  2023-08-01
local.embargo.lift  2023-08-01

