Show simple item record
dc.contributor.author | Rezaei, Mohammad | |
dc.contributor.author | Farahanipad, Farnaz | |
dc.contributor.author | Dillhoff, Alex | |
dc.contributor.author | Athitsos, Vassilis | |
dc.contributor.author | Elmasri, Ramez | |
dc.date.accessioned | 2023-07-24T20:30:14Z | |
dc.date.available | 2023-07-24T20:30:14Z | |
dc.date.issued | 2021-07-02 | |
dc.identifier.uri | http://hdl.handle.net/10106/31574 | |
dc.description.abstract | Existing learning-based methods require a large amount of labeled data to produce accurate part segmentation labels. However, acquiring ground-truth labels is costly, giving rise to a need for methods that either require fewer labels or can use other currently available labels as a form of weak supervision during training. In this paper, to mitigate the burden of labeled-data acquisition, we propose a data-driven method for hand part segmentation on depth maps that requires no extra effort to obtain segmentation labels. The proposed method uses the labels already provided by public datasets, in the form of major 3D hand joint locations, to learn to estimate the hand shape and pose from a depth map. Given the pose and shape of a hand, the corresponding 3D hand mesh is generated using a deformable hand model and then rendered to a color image using a texture based on the Linear Blend Skinning (LBS) weights of the hand model. The segmentation labels are then computed from the rendered color image. Since segmentation labels are not provided with current public datasets, we manually annotate a subset of the NYU dataset to perform a quantitative evaluation of our method, and show that a mIoU of 42% can be achieved with a model trained without segmentation-based labels. Both qualitative and quantitative results confirm the effectiveness of our method. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | ACM | en_US |
dc.subject | 3D hand pose estimation, 3D hand shape estimation, semantic segmentation, hand part segmentation, human-computer interaction, Deep Learning, Computer Vision | en_US |
dc.title | Weakly-supervised hand part segmentation from depth images | en_US |
dc.type | Article | en_US |
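The abstract describes deriving part labels from the Linear Blend Skinning (LBS) weights of a deformable hand model: each mesh vertex is colored by the hand part that most influences it, and the rendered colors become segmentation labels. A minimal sketch of that labeling idea, assuming a generic `(vertices × parts)` LBS weight matrix (the function name and shapes are illustrative assumptions, not the authors' actual code):

```python
import numpy as np

def part_labels_from_lbs(skinning_weights: np.ndarray) -> np.ndarray:
    """Assign each mesh vertex to a hand part from its LBS weights.

    skinning_weights: (num_vertices, num_parts) matrix whose rows are
    the blend-skinning weights of a vertex (rows sum to 1).
    Returns a (num_vertices,) array of part indices, taking for each
    vertex the part with the largest skinning weight.
    """
    return np.argmax(skinning_weights, axis=1)

# Toy example: 3 vertices, 2 hand parts (ties resolve to the lower index).
w = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])
labels = part_labels_from_lbs(w)  # → array([0, 1, 0])
```

Rendering the mesh with one flat color per part index would then produce the color image from which the paper's segmentation labels are read off.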
Files in this item
- Name:
- 3453892.3453902.pdf
- Size:
- 747.0 KB
- Format:
- PDF
- Description:
- Journal Article