
dc.contributor.advisor  Makedon, Fillia
dc.contributor.advisor  Karkaletsis, Vangelis
dc.creator  Papakostas, Michalis
dc.date.accessioned  2019-05-28T21:51:54Z
dc.date.available  2019-05-28T21:51:54Z
dc.date.created  2019-05
dc.date.issued  2019-04-23
dc.date.submitted  May 2019
dc.identifier.uri  http://hdl.handle.net/10106/28131
dc.description.abstract  Artificial Intelligence has probably been the most rapidly evolving field of science during the last decade. Its numerous real-life applications have radically altered the way we experience daily living, with great impact on some of the most basic aspects of human life, including but not limited to health and well-being, communication and interaction, education, driving, and entertainment. Human-Computer Interaction (HCI) is the field of Computer Science lying at the epicenter of this evolution, responsible for transforming fundamental research findings and theoretical principles into intuitive tools that enhance human performance, increase productivity, and ensure safety. Two of the core questions that HCI research tries to address are a) what does the user want? and b) what can the user do? Multi-modal user monitoring has shown great potential toward answering these questions. Modeling and tracking different parameters of a user's behavior has provided groundbreaking solutions in several fields, such as smart rehabilitation, smart driving, and workplace safety. Two of the dominant modalities that have been extensively deployed in such systems are speech- and vision-based approaches, with a special focus on activity and emotion recognition. Despite the large amount of research in these domains, numerous other implicit and explicit types of user feedback produced during an HCI scenario are very informative but have attracted very limited research interest. This is usually due to the high levels of inherent noise that such signals tend to carry, or to the highly invasive equipment required to capture this kind of information; these factors make most real-life applications nearly impossible to implement. This research investigates the potential of multi-modal user monitoring for designing personalized scenarios and interactive interfaces, focusing on two research axes. First, we explore the advantages of reusing existing knowledge across different information domains, application areas, and individual users, in an effort to create predictive models that can extend their functionality across distinct HCI scenarios. Second, we try to enhance multi-modal interaction by accessing information that stems from more sophisticated and less explored sources, such as electroencephalogram (EEG) and electromyogram (EMG) analysis, using minimally invasive sensors. We achieve this by designing a series of end-to-end experiments (from data collection to analysis and application) and by extensively evaluating various Machine Learning (ML) and Deep Learning (DL) approaches on their ability to model diverse interaction signals. As an outcome of this in-depth investigation and experimentation, we propose CogBeacon, a multi-modal dataset and data-collection platform, to our knowledge the first of its kind, for predicting events of cognitive fatigue and understanding its impact on human performance.
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.subject  User modeling and monitoring
dc.subject  Machine learning
dc.subject  Deep learning
dc.subject  Cognitive and behavioral modeling
dc.subject  HCI
dc.title  FROM BODY TO BRAIN: USING ARTIFICIAL INTELLIGENCE TO IDENTIFY USER SKILLS & INTENTIONS IN INTERACTIVE SCENARIOS
dc.type  Thesis
dc.degree.department  Computer Science and Engineering
dc.degree.name  Doctor of Philosophy in Computer Science
dc.date.updated  2019-05-28T21:51:54Z
thesis.degree.department  Computer Science and Engineering
thesis.degree.grantor  The University of Texas at Arlington
thesis.degree.level  Doctoral
thesis.degree.name  Doctor of Philosophy in Computer Science
dc.type.material  text
dc.creator.orcid  0000-0002-2794-9115

