The Impact of Structured Validation and Audit Frameworks on the Fairness and Efficiency of AI-Driven Hiring Systems
DOI: https://doi.org/10.15662/IJRAI.2025.0806023

Keywords: algorithmic hiring, AI audit, adverse impact, AI assurance, time to hire, recruitment analytics

Abstract
Organizations increasingly use artificial intelligence (AI) to screen and rank job applicants, yet these systems can produce disparate outcomes across protected groups and may add operational friction when they require manual review or compliance documentation. Prior work has proposed algorithmic auditing and assurance frameworks, but empirical evidence linking audit intensity to both fairness and hiring efficiency within the same operational context remains limited. Building on internal algorithmic auditing guidance (Raji et al., 2020) and audit systematization in recruitment (Kazim et al., 2021), this study evaluates whether structured validation and audit frameworks are associated with improved fairness and reduced time to hire. Using an applicant-tracking-system dataset from a logistics supply-chain employer (N = 1,906 active applicants; n = 400 hires), we compared a pre-AI manual baseline (2023) with two 2024 AI screening configurations: a compliance-only audit and an assurance-level audit. Fairness was operationalized with adverse impact ratios (AIRs; UGESP, 1978), and efficiency as time to hire (days). Results indicated that assurance-level auditing coincided with substantial improvements in AIRs (e.g., minority AIR increased from 0.12 under the manual baseline to 0.85 under assurance) while also reducing time to hire by a mean of 12.87 days relative to the baseline. Logistic and linear models controlling for job family supported these patterns. Findings suggest that structured, higher-intensity audit frameworks can be associated with simultaneous gains in fairness and process efficiency, warranting replication in multi-site, longitudinal studies under emerging audit mandates (e.g., NYC Local Law 144).
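For readers less familiar with the fairness metric used here, the sketch below shows how an adverse impact ratio is conventionally computed under the UGESP four-fifths rule: each group's selection rate (hires divided by applicants) is divided by the selection rate of the most-selected group, and values below 0.80 are flagged as potential adverse impact. The group labels and counts are hypothetical placeholders for illustration, not the study's data.

```python
# Minimal sketch of the adverse impact ratio (AIR) described in the abstract,
# following the UGESP four-fifths rule. All group labels and counts below are
# hypothetical illustrations, not the study's data.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants in a group who were hired."""
    return hired / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """AIR: a protected group's selection rate divided by the selection
    rate of the most-selected (reference) group."""
    return group_rate / reference_rate

# Hypothetical counts for illustration only.
reference_rate = selection_rate(hired=120, applicants=400)  # reference group: 0.30
minority_rate = selection_rate(hired=30, applicants=350)    # protected group: ~0.086

air = adverse_impact_ratio(minority_rate, reference_rate)
print(f"AIR = {air:.2f}")  # ~0.29 here; values below 0.80 flag potential adverse impact
print("Below four-fifths threshold:", air < 0.80)
```

Under this convention, the manual baseline's minority AIR of 0.12 reported above would be flagged, while the assurance-level configuration's 0.85 would not.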
References

1. Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., & Zhang, Y. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv. https://arxiv.org/abs/1810.01943
2. Costanza-Chock, S., Raji, I. D., & Buolamwini, J. (2022). Who audits the auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22). Association for Computing Machinery. https://doi.org/10.1145/3531146.3533213
3. Filippi, G., Zannone, S., Hilliard, A., & Koshiyama, A. S. (2023). Local Law 144: A critical analysis of regression metrics. arXiv. https://arxiv.org/abs/2302.04119
4. Groves, L., Metcalf, J., Kennedy, A., Vecchione, B., & Strait, A. (2024). Auditing work: Exploring the New York City algorithmic bias audit regime. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery. https://doi.org/10.1145/3630106.3658959
5. International Organization for Standardization & International Electrotechnical Commission. (2023). Information technology—Artificial intelligence—Management system (ISO/IEC 42001:2023). ISO.
6. Kazim, E., Koshiyama, A. S., Hilliard, A., & Polle, R. (2021). Systematizing audit in algorithmic recruitment. Journal of Intelligence, 9(3), 46. https://doi.org/10.3390/jintelligence9030046
7. Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) (Article 43). Schloss Dagstuhl–Leibniz-Zentrum für Informatik. https://doi.org/10.4230/LIPIcs.ITCS.2017.43
8. National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). https://doi.org/10.6028/NIST.AI.100-1
9. New York City Department of Consumer and Worker Protection. (n.d.). Automated employment decision tools (AEDT). Retrieved December 29, 2025, from https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page
10. Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20) (pp. 469–481). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372828
11. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20) (pp. 33–44). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372873
12. U.S. Equal Employment Opportunity Commission, U.S. Department of Labor, U.S. Department of Justice, & U.S. Civil Service Commission. (1978). Uniform guidelines on employee selection procedures (29 C.F.R. § 1607). https://www.ecfr.gov/current/title-29/subtitle-B/chapter-XIV/part-1607
13. Wilson, C., Ghosh, A., Jiang, S., Mislove, A., Baker, L., Szary, J., Trindel, K., & Polli, F. (2021). Building and auditing fair algorithms: A case study in candidate screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445928
14. Wright, R., Hilliard, A., Koshiyama, A. S., & Filippi, G. (2024). Null compliance? Measuring bias audits under NYC Local Law 144. arXiv. https://arxiv.org/abs/2406.01399





