
dc.contributor.advisor: Makedon, Fillia
dc.creator: Jaiswal, Ashish
dc.date.accessioned: 2024-01-31T18:25:38Z
dc.date.available: 2024-01-31T18:25:38Z
dc.date.created: 2023-12
dc.date.issued: 2023-11-10
dc.date.submitted: December 2023
dc.identifier.uri: http://hdl.handle.net/10106/31942
dc.description.abstract: Cognition is the mental process of acquiring knowledge and understanding through thought, experience, and the senses. Fatigue is a loss in cognitive or physical performance due to physiological factors such as insufficient sleep, long work hours, stress, and physical exertion. It adversely affects the human body and can slow reaction times, reduce attention, and limit short-term memory. Hence, there is a need to monitor a person's state to avoid extreme fatigue conditions that can result in physiological complications. However, few tools exist to understand and assess fatigue. This thesis primarily focuses on building an experimental setup that induces cognitive fatigue (CF) and physical fatigue (PF) through multiple cognitive and physical tasks while simultaneously recording physiological data and visual cues from a person's face. First, we build a prototype sensor shirt embedded with various physiological sensors for easy use during cognitively and physically demanding tasks. Second, participants report self-assessed visual analog scale (VAS) scores after each task to confirm fatigue induction. Finally, an evaluation system is built that utilizes machine learning (ML) models to detect states of CF and PF from multi-modal sensor data, thus providing an objective measure. This effort is the first step towards building a robust cognitive assessment tool that can collect multi-modal data and be used in industrial applications to monitor a person's mental state. For instance, it enables safe human-robot cooperation (HRC) in industrial environments, helping avoid physical harm when a person's mental state is compromised. Another example is a personalized assistive robot that helps individuals with motor impairments perform a task such as preparing lunch, with real-time interventions based on the help the user requires.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.subject: Multimodal
dc.subject: Physiological signals
dc.subject: Cognition
dc.subject: Machine learning
dc.title: AN INTELLIGENT MULTI-MODAL FRAMEWORK TOWARDS ASSESSING HUMAN COGNITION
dc.type: Thesis
dc.date.updated: 2024-01-31T18:25:39Z
thesis.degree.department: Computer Science and Engineering
thesis.degree.grantor: The University of Texas at Arlington
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy in Computer Science
dc.type.material: text
dc.creator.orcid: 0000-0002-6613-4955

