dc.contributor.advisor | Lei, Yu
dc.creator | Patel, Ankita Ramjibhai
dc.date.accessioned | 2022-06-28T15:12:26Z
dc.date.available | 2022-06-28T15:12:26Z
dc.date.created | 2022-05
dc.date.issued | 2022-05-16
dc.date.submitted | May 2022
dc.identifier.uri | http://hdl.handle.net/10106/30411
dc.description.abstract | Machine Learning (ML) models can exhibit biased behavior, or algorithmic discrimination, resulting in unfair or discriminatory outcomes. Bias in an ML model can stem from various factors, such as the training dataset, the choice of ML algorithm, or the hyperparameters used to train the model. In addition to evaluating a model's correctness, it is essential to test ML models for fair and unbiased behavior. In this thesis, we present a combinatorial testing-based approach to fairness testing of ML models. Our approach is model agnostic and evaluates fairness violations of a pre-trained ML model in a two-step process. In the first step, we create an input parameter model from the training dataset and then use that model to generate a t-way test set. In the second step, for each test, we modify the value of one or more protected attributes and check whether the model's prediction changes, which signals a fairness violation. We performed an experimental evaluation of the proposed approach using ML models trained with tabular datasets. The results suggest that the proposed approach can successfully identify fairness violations in pre-trained ML models.
This thesis is presented in an article-based format and includes a research paper. The paper reports our work on applying combinatorial testing to identify fairness violations in Machine Learning (ML) models and has been accepted at a peer-reviewed venue (in press).
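The two-step process in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the thesis's actual implementation: the input parameter model, the placeholder classifier, and all names (`param_model`, `toy_predict`, `find_violations`) are assumptions, and exhaustive enumeration stands in for a real t-way test generator such as a covering-array tool. A fairness violation is taken to mean that changing only a protected attribute changes the model's prediction.

```python
from itertools import product

# Step 1 (sketch): an input parameter model derived from a training
# dataset -- each feature maps to its set of discrete values
# (continuous features would be partitioned into ranges in practice).
param_model = {
    "age_group": ["<30", "30-50", ">50"],
    "education": ["HS", "BS", "MS"],
    "sex": ["F", "M"],          # protected attribute
}
protected = "sex"

def exhaustive_tests(model):
    """Stand-in for a t-way test generator; here we simply enumerate
    every combination of parameter values."""
    keys = list(model)
    for values in product(*(model[k] for k in keys)):
        yield dict(zip(keys, values))

def toy_predict(test):
    """Placeholder classifier that (unfairly) depends on the protected
    attribute -- stands in for any pre-trained model."""
    return 1 if test["sex"] == "M" and test["education"] != "HS" else 0

def find_violations(model, predict, prot):
    """Step 2: for each test, vary the protected attribute and flag
    tests whose prediction changes."""
    violations = []
    for test in exhaustive_tests(model):
        base = predict(test)
        for alt in model[prot]:
            if alt == test[prot]:
                continue
            flipped = dict(test, **{prot: alt})
            if predict(flipped) != base:
                violations.append((test, flipped))
    return violations

viols = find_violations(param_model, toy_predict, protected)
print(len(viols))  # pairs of tests that differ only in the protected attribute yet get different predictions
```

Because the toy classifier consults `sex` directly whenever `education != "HS"`, every such test is flagged; a fair model would produce an empty violation list.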
dc.format.mimetype | application/pdf
dc.subject | Fairness testing
dc.subject | Algorithmic discrimination
dc.subject | Bias detection
dc.subject | Testing model bias
dc.subject | Testing ML model
dc.subject | Combinatorial testing
dc.title | A COMBINATORIAL APPROACH TO FAIRNESS TESTING OF MACHINE LEARNING MODELS
dc.type | Thesis
dc.degree.department | Computer Science and Engineering
dc.degree.name | Master of Science in Computer Science
dc.date.updated | 2022-06-28T15:12:26Z
thesis.degree.department | Computer Science and Engineering
thesis.degree.grantor | The University of Texas at Arlington
thesis.degree.level | Masters
thesis.degree.name | Master of Science in Computer Science
dc.type.material | text
Files in this item
- Name: PATEL-THESIS-2022.pdf
- Size: 323.8 KB
- Format: PDF