dc.contributor.advisor | Shiakolas, Panayiotis S. | |
dc.creator | Patel, Ravi | |
dc.date.accessioned | 2019-07-08T22:38:13Z | |
dc.date.available | 2019-07-08T22:38:13Z | |
dc.date.created | 2018-05 | |
dc.date.issued | 2018-05-14 | |
dc.date.submitted | May 2018 | |
dc.identifier.uri | http://hdl.handle.net/10106/28310 | |
dc.description.abstract | The objective of this research is to investigate voice, or speech commands, as a mode of interaction between humans and robots. A Biomimetic Artificial Hand (BAH) is used as a platform to perform grasping tasks, with human voice as the interacting and instructing medium between human and robot. This is a hands-free approach to issuing commands to the BAH, since it does not require the user to wear any specialized equipment. Previous research has reported difficulties in recognizing more than one word, in managing databases of stored voice samples, and in meeting computing-power requirements. National Instruments LabVIEW software and myRIO hardware are used as the interface between the user and the BAH. Cloud application services are employed through the Microsoft speech recognition Application Program Interface (API), which accepts a verbal command, transfers it to the cloud for further processing, and returns the command as a string (text). This approach reduces local computing-power requirements and yields fast and accurate speech recognition (SR). A vision system is also incorporated as a safety feature to verify the presence of the correct object in the workspace. The string returned from the API is further processed locally to identify the action to perform, the object, its identifiers (number, color, size), and the grasping pattern for that object from the existing database. Voice-command evaluation performed on the hardware platform with the biomimetic artificial hand indicates that the proposed interaction modality could be advantageously employed for successfully instructing or interacting with a robotic device. | |
dc.format.mimetype | application/pdf | |
dc.subject | Speech recognition | |
dc.subject | LabVIEW | |
dc.subject | Cloud applications | |
dc.subject | APIs | |
dc.subject | Computer vision | |
dc.title | HUMAN ROBOT INTERACTION WITH CLOUD ASSISTED VOICE CONTROL AND VISION SYSTEM | |
dc.type | Thesis | |
dc.degree.department | Mechanical and Aerospace Engineering | |
dc.degree.name | Master of Science in Mechanical Engineering | |
dc.date.updated | 2019-07-08T22:38:13Z | |
thesis.degree.department | Mechanical and Aerospace Engineering | |
thesis.degree.grantor | The University of Texas at Arlington | |
thesis.degree.level | Masters | |
thesis.degree.name | Master of Science in Mechanical Engineering | |
dc.type.material | text | |
dc.creator.orcid | 0000-0002-5412-0222 | |
Files in this item
- Name:
- PATEL-THESIS-2018.pdf
- Size:
- 20.65 MB
- Format:
- PDF
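The abstract describes a local post-processing step: the text returned by the cloud speech-recognition API is parsed to identify the action, the object, its identifiers (number, color, size), and a grasping pattern from an existing database. A minimal sketch of that step, under illustrative assumptions (the vocabulary, grasp names, and function below are hypothetical and not taken from the thesis):

```python
# Hypothetical sketch of the local command-parsing step described in the
# abstract. The speech API is assumed to have already returned the spoken
# command as a plain string; here it is matched against a small local
# "database" of known actions, object identifiers, and grasp patterns.
# All vocabulary and names are illustrative assumptions.

ACTIONS = {"grasp", "release", "pick"}                       # known action words
GRASPS = {"ball": "spherical", "pen": "precision",           # object -> grasp pattern
          "cup": "cylindrical"}
IDENTIFIERS = {"red", "blue", "green", "small", "large"}     # color/size identifiers


def parse_command(text):
    """Split a recognized command string into action, object,
    identifiers, and the stored grasping pattern for that object."""
    words = text.lower().split()
    action = next((w for w in words if w in ACTIONS), None)
    obj = next((w for w in words if w in GRASPS), None)
    ids = [w for w in words if w in IDENTIFIERS]
    return {"action": action, "object": obj,
            "identifiers": ids, "grasp": GRASPS.get(obj)}


print(parse_command("grasp the red ball"))
# → {'action': 'grasp', 'object': 'ball', 'identifiers': ['red'], 'grasp': 'spherical'}
```

In the actual system this parsing runs on the myRIO/LabVIEW side, and the vision system independently verifies that the named object is present in the workspace before the grasp is executed.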