Peer-reviewed | Open Access | Multidisciplinary
Artificial Intelligence (AI) systems are increasingly integral to decision-making in critical domains such as healthcare, finance, criminal justice, and recruitment. While these systems offer remarkable capabilities, they often reflect the imperfections of the data on which they are trained. This paper investigates the concept of inherited AI bias, a phenomenon in which machine learning models unintentionally assimilate and reproduce societal prejudices, stereotypes, and historical inequities embedded in human-generated datasets. We provide a comprehensive review of the root causes of such bias, including imbalanced training data, biased annotation practices, and algorithmic structures that lack fairness constraints. Drawing on real-world case studies and empirical research, we demonstrate how these biases disproportionately affect marginalized groups, leading to discriminatory outcomes and reinforcing systemic disparities. The paper also reviews state-of-the-art techniques for bias detection and mitigation, critically assessing their strengths and limitations in practical applications. As AI systems continue to permeate daily life, addressing inherited bias is not merely a technical challenge but an ethical imperative. This review underscores the urgency of developing transparent, inclusive, and accountable AI frameworks. Finally, we identify current gaps in research and propose directions for future work aimed at fostering equitable AI systems that align with democratic values and social justice.
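As a minimal illustration of the kind of bias-detection check discussed above, the sketch below computes the demographic parity difference: the gap in favourable-outcome rates between two groups in a model's predictions. The function name, group labels, and prediction data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: demographic parity difference, one common
# group-fairness metric used in bias detection. All names and data
# here are hypothetical.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical binary predictions (1 = favourable outcome) for two groups.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.75 vs 0.25 -> 0.5
```

A value near zero suggests the two groups receive favourable predictions at similar rates; larger values flag a disparity worth investigating, though no single metric captures all notions of fairness.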
Keywords: Artificial Intelligence Ethics, Algorithmic Bias, Fairness in Machine Learning, Bias Mitigation Techniques, Societal Impact of AI, Responsible AI Systems