Journal of Scientific Innovation and Advanced Research (JSIAR)

Peer-reviewed | Open Access | Multidisciplinary

Published: April 2026 | Volume 3, Issue 1 | Pages 125-142

Explainable AI-Powered ATS Resume Analyzer with Bias Detection and Transparent Candidate Ranking

Original Research Article
Priyanshu Sharma1*, Simran Saifi1, Anshi Kumari1, Saurabh Kumar1
1Department of Computer Science and Engineering, Noida International University, Greater Noida, India
*Author for correspondence: Priyanshu Sharma
Department of Computer Science and Engineering, Noida International University, Greater Noida, India
E-mail ID: priyanshu.sharmaedu@gmail.com

ABSTRACT

Automated Applicant Tracking Systems (ATS) have become integral to large-scale recruitment processes, enabling organizations to process thousands of resumes within constrained decision timelines. Despite their operational efficiency, contemporary ATS deployments often rely on opaque machine learning pipelines that prioritize predictive accuracy while offering limited interpretability and insufficient safeguards against algorithmic bias. Such opacity raises critical concerns regarding fairness, accountability, and regulatory compliance, particularly when hiring decisions influence socioeconomic mobility and workforce diversity. Motivated by these challenges, this study presents an Explainable Artificial Intelligence (XAI)-powered ATS resume analyzer designed to deliver transparent candidate ranking while systematically identifying and quantifying potential bias in automated screening workflows. The proposed framework integrates advanced Natural Language Processing (NLP) techniques for structured resume parsing, semantic skill extraction, and contextual job-description matching. Candidate suitability is estimated using a hybrid learning architecture that combines gradient-boosted decision trees and interpretable linear models trained on publicly available recruitment datasets, including curated subsets derived from open hiring corpora and anonymized resume repositories. Feature vectors capturing qualifications, experience duration, technical competencies, and domain relevance are mapped to a normalized suitability score through a weighted decision function expressed as \[ S(r) = \sum_{i=1}^{n} w_i x_i, \] where $x_i$ denotes standardized candidate attributes and $w_i$ represents learned model coefficients reflecting feature importance. 
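The weighted decision function above can be sketched in a few lines of Python. The feature names, values, and weights below are purely illustrative placeholders, not coefficients learned from the paper's datasets:

```python
import numpy as np

def suitability_score(x, w):
    """Compute S(r) = sum_i w_i * x_i for standardized attributes x
    and learned coefficients w (both 1-D arrays of equal length)."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.dot(w, x))

# Hypothetical example: four standardized attributes (qualifications,
# experience duration, technical competencies, domain relevance)
# with hypothetical importance weights summing to 1.
x = [0.8, 0.6, 0.9, 0.7]
w = [0.3, 0.2, 0.3, 0.2]
score = suitability_score(x, w)  # 0.24 + 0.12 + 0.27 + 0.14 = 0.77
```

In practice the weights would come from the interpretable linear component of the hybrid architecture (e.g., fitted regression coefficients), so each term $w_i x_i$ doubles as a per-feature contribution to the final ranking.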
To ensure transparency in decision-making, the system employs model-agnostic explanation mechanisms based on Shapley value decomposition, enabling localized interpretation of prediction outcomes and providing human-readable justifications for ranking assignments. Beyond interpretability, the framework introduces a quantitative bias detection module grounded in statistical fairness theory. Disparities across demographic or institutional categories are evaluated using fairness indicators such as Statistical Parity Difference (SPD) and Disparate Impact (DI), defined respectively as \[ SPD = P(\hat{Y}=1 \mid A=0) - P(\hat{Y}=1 \mid A=1), \qquad DI = \frac{P(\hat{Y}=1 \mid A=0)}{P(\hat{Y}=1 \mid A=1)}, \] where $\hat{Y}$ denotes the predicted hiring decision and $A$ represents a protected attribute. These metrics are computed dynamically during inference, enabling the system to flag anomalous decision patterns and support corrective interventions. Experimental validation was conducted in a controlled evaluation environment implemented in Python with the scikit-learn and SHAP libraries, executed on a workstation with multi-core processing and standard memory resources. Comparative analysis against baseline ATS classifiers demonstrated measurable improvements in interpretability consistency, reduction in fairness disparities, and enhanced trustworthiness of automated hiring recommendations, while maintaining competitive predictive performance across standard evaluation metrics. Collectively, the findings indicate that embedding explainability and fairness auditing directly into ATS architectures can transform automated recruitment from a purely efficiency-driven process into a transparent and ethically aligned decision-support system.
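The two fairness indicators follow directly from their definitions. A minimal sketch, assuming binary predictions $\hat{Y}$ and a binary protected attribute $A$ encoded as 0/1 arrays (the sample data below is hypothetical, not drawn from the evaluation corpora):

```python
import numpy as np

def statistical_parity_difference(y_hat, a):
    """SPD = P(Y_hat = 1 | A = 0) - P(Y_hat = 1 | A = 1)."""
    y_hat, a = np.asarray(y_hat), np.asarray(a)
    p0 = y_hat[a == 0].mean()  # selection rate for group A = 0
    p1 = y_hat[a == 1].mean()  # selection rate for group A = 1
    return p0 - p1

def disparate_impact(y_hat, a):
    """DI = P(Y_hat = 1 | A = 0) / P(Y_hat = 1 | A = 1)."""
    y_hat, a = np.asarray(y_hat), np.asarray(a)
    p0 = y_hat[a == 0].mean()
    p1 = y_hat[a == 1].mean()
    return p0 / p1

# Toy example: group A=0 is selected at rate 3/4, group A=1 at rate 1/4.
y_hat = [1, 0, 1, 1, 1, 0, 0, 0]
a     = [0, 0, 0, 0, 1, 1, 1, 1]
spd = statistical_parity_difference(y_hat, a)  # 0.75 - 0.25 = 0.5
di = disparate_impact(y_hat, a)                # 0.75 / 0.25 = 3.0
```

An unbiased screener yields SPD near 0 and DI near 1; thresholds on these values (e.g., the common four-fifths rule for DI) can then trigger the corrective interventions described above. A production implementation would also guard against an empty group or a zero denominator in DI.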
The principal contribution of this work lies in the design and empirical validation of an integrated, explainable resume screening framework that simultaneously advances candidate ranking accuracy, bias detection capability, and accountability in AI-assisted hiring environments.

Keywords: Explainable AI, Applicant Tracking System, Resume Screening, Bias Detection, Fairness Metrics, Natural Language Processing, Candidate Ranking, Machine Learning