Show simple item record
dc.contributor.advisor | Beksi, William J | |
dc.creator | Arshad, Mohammad Samiul | |
dc.date.accessioned | 2023-09-27T16:30:34Z | |
dc.date.available | 2023-09-27T16:30:34Z | |
dc.date.created | 2023-08 | |
dc.date.issued | 2023-08-15 | |
dc.date.submitted | August 2023 | |
dc.identifier.uri | http://hdl.handle.net/10106/31738 | |
dc.description.abstract | 3D point clouds are a popular form of data representation with many
applications in computer vision, computer graphics, and robotics. As the output
of range sensing devices, point clouds have gained popularity with the current
interest in self-driving vehicles. More formally, a point cloud is an unordered
set of irregular points sampled from the surface of an object. Each point
consists of a Cartesian coordinate and may carry additional information such as
an RGB color value and a surface normal estimate. However, conventional deep
learning methods fall short when processing 3D point clouds due to the
irregular, permutation-invariant nature of the data.
In this dissertation, we design novel types of neural networks that leverage raw
3D point clouds for data creation and reconstruction. First, we investigate
dense colored point cloud generation and develop an understanding of shape-color
correlation with a progressive conditional generative adversarial network
(PCGAN). PCGAN learns a 3D data distribution by producing colored point clouds
with subtle details at a range of resolutions. Next, we reconstruct open
surfaces with inner details by extracting surface points from an unsigned
distance field with an implicit point-voxel network (IPVNet). In IPVNet, we
show that by combining features from different 3D representations, such as
point clouds and voxels, deep learning models can reduce both inaccuracies and
the number of outliers in the reconstruction. Finally, we reconstruct a 3D
surface from a single image by learning an implicit function through a spatial
transformer (LIST). Within the LIST framework, we introduce a novel spatial
transformer that enables the accurate retrieval of intricate details from a
single image without the need for any additional rendering information.
Overall, we provide a comprehensive investigation of generative and implicit
point cloud processing techniques. We establish novel deep learning frameworks
to facilitate 3D reconstruction and generation tasks. Additionally, we make
our source code and other resources publicly available for the benefit of the
research community. | |
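The permutation-invariant nature of point clouds mentioned in the abstract can be illustrated with a minimal sketch (not code from the dissertation): because a point cloud is an unordered set, any per-cloud feature must give the same answer regardless of the order in which the points are stored. A symmetric aggregation such as coordinate-wise max pooling, as popularized by PointNet-style architectures, has exactly this property.

```python
import numpy as np

# Illustrative sketch: a point cloud is an N x 3 array of Cartesian
# coordinates whose row order carries no meaning. A permutation-invariant
# feature must therefore be unchanged when the rows are shuffled.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))  # N = 1024 points, (x, y, z) each

def global_feature(points: np.ndarray) -> np.ndarray:
    """Coordinate-wise max over all points: a symmetric (order-free) pooling."""
    return points.max(axis=0)

# Shuffling the rows yields the same set of points in a different order,
# so the pooled feature is identical.
shuffled = rng.permutation(cloud)
assert np.allclose(global_feature(cloud), global_feature(shuffled))
```

A network built from per-point transformations followed by such a symmetric pooling inherits this invariance by construction, which is why set-based pooling is a common building block in point cloud processing.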
dc.format.mimetype | application/pdf | |
dc.language.iso | en_US | |
dc.subject | 3D reconstruction | |
dc.subject | 3D generation | |
dc.title | Generative and Implicit Methods for 3D Point Cloud Processing | |
dc.type | Thesis | |
dc.date.updated | 2023-09-27T16:30:34Z | |
thesis.degree.department | Computer Science and Engineering | |
thesis.degree.grantor | The University of Texas at Arlington | |
thesis.degree.level | Doctoral | |
thesis.degree.name | Doctor of Philosophy in Computer Science | |
dc.type.material | text | |
dc.creator.orcid | 0000-0001-6271-7814 | |
Files in this item
- Name: ARSHAD-DISSERTATION-2023.pdf
- Size: 37.62 MB
- Format: PDF