A Genealogical Approach to Algorithmic Bias
2024, Minds and Machines
https://doi.org/10.1007/s11023-024-09672-2

Abstract
The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g., fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions and offer two main contributions. One is constructive: we develop a theoretical framework to classify these approaches according to their relevance for bias as evidence of social disparities. We draw on Pearl's ladder of causation (Causality: Models, reasoning, and inference, Cambridge University Press, 2000).
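To make the first of these tools concrete, the sketch below (ours, not the authors'; the dataset, feature names, and thresholds are hypothetical) uses the `shap` library (Lundberg & Lee, 2017) to compute Shapley-value attributions for a toy classifier whose decisions lean on a location-based proxy feature, the kind of socially conditioned signal a genealogical reading would flag.

```python
# Minimal, hypothetical sketch: Shapley-value feature attributions with the
# `shap` library (Lundberg & Lee, 2017). All data, feature names, and
# thresholds below are invented for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, 500),
    "zip_code_risk": rng.uniform(0, 1, 500),  # hypothetical proxy feature
    "age": rng.integers(21, 70, 500),
})
# Labels depend heavily on the proxy, mimicking a socially conditioned signal.
y = ((X["income"] + 40_000 * X["zip_code_risk"]) > 70_000).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shapley values: each feature's contribution to moving a prediction away
# from the average prediction over the background data.
explainer = shap.Explainer(model.predict, X)
attributions = explainer(X.iloc[:50])

# A large mean |attribution| on `zip_code_risk` is correlational evidence
# that the model leans on the proxy; it does not by itself show causation.
print(pd.DataFrame(attributions.values, columns=X.columns).abs().mean())
```

Attributions of this kind sit on the lowest rung of Pearl's ladder: they are evidence of association only, and the causal and counterfactual readings the article classifies require further assumptions (cf. Heskes et al., 2020; Wachter et al., 2017).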
References
- Aas, K., Jullum, M., & Løland, A. (2021). Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. Artificial Intelligence, 298, 103502. https://doi.org/10.1016/j.artint.2021.103502
- ACLU California Action. (2020). AB 256. ACLU California Action. https://aclucalaction.org/bill/ab-256/
- Abdollahi, B., & Nasraoui, O. (2018). Transparency in fair machine learning: The case of explainable recommender systems. In J. Zhou & F. Chen (Eds.), Human and machine learning: Visible, explainable, trustworthy and transparent (pp. 21-35). Springer. https://doi.org/10.1007/978-3-319-90403-0_2
- Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160. https://doi.org/10.1109/ACCESS.2018.2870052
- Agyeman, J. (2021, March 9). How urban planning and housing policy helped create 'food apartheid' in US cities. The Conversation. http://theconversation.com/how-urban-planning-and-housing-policy-helped-create-food-apartheid-in-us-cities-154433
- Aivodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., & Tapp, A. (2019). Fairwashing: The risk of rationalization. In Proceedings of the 36th international conference on machine learning, 2019 (pp. 161-170). https://proceedings.mlr.press/v97/aivodji19a.html
- Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media and Society, 20(3), 973-989. https://doi.org/10.1177/1461444816676645
- Barabas, C., Dinakar, K., Ito, J., Virza, M., & Zittrain, J. (2018). Interventions over predictions: Reframing the ethical debate for actuarial risk assessment. arXiv:1712.08238 [cs, stat]. http://arxiv.org/abs/1712.08238
- Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2477899
- Barocas, S., Selbst, A. D., & Raghavan, M. (2020). The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the 2020 conference on fairness, accountability, and transparency, 2020 (pp. 80-89). https://doi.org/10.1145/3351095.3372830
- Begley, T., Schwedes, T., Frye, C., & Feige, I. (2020). Explainability for fair machine learning. arXiv:2010.07389 [cs, stat]. http://arxiv.org/abs/2010.07389
- Christman, J. (2020). Autonomy in moral and political philosophy. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2020). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2020/entries/autonomy-moral/
- Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions (SSRN Scholarly Paper ID 2376209). Social Science Research Network. https://papers.ssrn.com/abstract=2376209
- Datta, A., Sen, S., & Zick, Y. (2016). Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In 2016 IEEE symposium on security and privacy (SP), 2016 (pp. 598-617). https://doi.org/10.1109/SP.2016.42
- Dodge, J., Liao, Q. V., Zhang, Y., Bellamy, R. K. E., & Dugan, C. (2019). Explaining models: An empirical study of how explanations impact fairness judgment. In Proceedings of the 24th international conference on intelligent user interfaces, 2019 (pp. 275-285). https://doi.org/10.1145/3301275.3302310
- Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv:1702.08608 [cs, stat]. http://arxiv.org/abs/1702.08608
- Frye, C., Rowat, C., & Feige, I. (2020). Asymmetric Shapley values: Incorporating causal knowledge into model-agnostic explainability. In Advances in neural information processing systems, 2020 (Vol. 33, pp. 1229-1239). https://proceedings.neurips.cc/paper/2020/hash/0d770c496aa3da6d2c3f2bd19e7b9d6b-Abstract.html
- Galhotra, S., Pradhan, R., & Salimi, B. (2021). Explaining black-box algorithms using probabilistic contrastive counterfactuals (arXiv:2103.11972). https://doi.org/10.48550/arXiv.2103.11972
- Greiner, D. J. (2008). Causal inference in civil rights litigation. Harvard Law Review, 122, 533.
- Grgić-Hlača, N., Zafar, M. B., Gummadi, K. P., & Weller, A. (2018). Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning. In Proceedings of the AAAI conference on artificial intelligence, 2018 (Vol. 32(1), Article 1). https://doi.org/10.1609/aaai.v32i1.11296
- Hacker, P. (2022). The European AI liability directives: Critique of a half-hearted approach and lessons for the future. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4279796
- Haibe-Kains, B., Adam, G. A., Hosny, A., Khodakarami, F., Waldron, L., Wang, B., McIntosh, C., Goldenberg, A., Kundaje, A., Greene, C. S., Broderick, T., Hoffman, M. M., Leek, J. T., Korthauer, K., Huber, W., Brazma, A., Pineau, J., Tibshirani, R., Hastie, T., … Aerts, H. J. W. L. (2020). Transparency and reproducibility in artificial intelligence. Nature, 586(7829), E14-E15. https://doi.org/10.1038/s41586-020-2766-y
- Heskes, T., Sijben, E., Bucur, I. G., & Claassen, T. (2020). Causal Shapley values: Exploiting causal knowledge to explain individual predictions of complex models (arXiv:2011.01625). https://doi.org/10.48550/arXiv.2011.01625
- Hill, R. K. (2016). Genealogy. In Routledge encyclopedia of philosophy (1st ed.). Routledge. https://doi.org/10.4324/9780415249126-DE024-1
- Hu, L. (2019). Disparate causes, Pt. I. Phenomenal World. https://www.phenomenalworld.org/analysis/disparate-causes-i/
- Jung, Y., Kasiviswanathan, S., Tian, J., Janzing, D., Bloebaum, P., & Bareinboim, E. (2022). On measuring causal contributions via do-interventions. In Proceedings of the 39th international conference on machine learning, 2022 (pp. 10476-10501). https://proceedings.mlr.press/v162/jung22a.html
- Karimi, A.-H., Barthe, G., Schölkopf, B., & Valera, I. (2021). A survey of algorithmic recourse: Definitions, formulations, solutions, and prospects (arXiv:2010.04050). arXiv. http://arxiv.org/abs/2010.04050
- Karimi, A.-H., Schölkopf, B., & Valera, I. (2020). Algorithmic recourse: From counterfactual explanations to interventions (arXiv:2002.06278). https://doi.org/10.48550/arXiv.2002.06278
- Kohler-Hausmann, I. (2019). Eddie Murphy and the dangers of counterfactual causal thinking about detecting racial discrimination. Northwestern University Law Review, 113(5), 1163-1227.
- Leben, D. (2023). Explainable AI as evidence of fair decisions. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2023.1069426
- Lundberg, S. (2018). Explaining quantitative measures of fairness. SHAP documentation. https://shap.readthedocs.io/en/latest/example_notebooks/overviews/Explaining%20quantitative%20measures%20of%20fairness.html
- Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in neural information processing systems, 2017 (Vol. 30). https://papers.nips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
- Mitchell, S., Potash, E., Barocas, S., D'Amour, A., & Lum, K. (2021). Prediction-based decisions and fairness: A catalogue of choices, assumptions, and definitions. Annual Review of Statistics and Its Application, 8(1), 141-163. https://doi.org/10.1146/annurev-statistics-042720-125902
- Mökander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI (SSRN Scholarly Paper ID 3788841). Social Science Research Network. https://papers.ssrn.com/abstract=3788841
- Mökander, J., Morley, J., Taddeo, M., & Floridi, L. (2021). Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science and Engineering Ethics, 27(4), 44. https://doi.org/10.1007/s11948-021-00319-4
- Nabi, R., & Shpitser, I. (2018). Fair inference on outcomes (arXiv:1705.10378). http://arxiv.org/abs/1705.10378
- Nannini, L., Balayn, A., & Smith, A. L. (2023). Explainability in AI policies: A critical review of communications, reports, regulations, and standards in the EU, US, and UK (arXiv:2304.11218). https://doi.org/10.48550/arXiv.2304.11218
- Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University Press.
- Pearl, J. (2009). Causality (2nd ed.). Cambridge University Press. https://doi.org/10.1017/CBO9780511803161
- Perrino, J. (2020, July 2). "Redlining" and health indicators: Decisions made 80 years ago have health consequences today. NCRC. https://ncrc.org/redlining-and-health-indicators-decisions-made-80-years-ago-have-health-consequences-today/
- Prince, A. E. R., & Schwarcz, D. (2019). Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review, 105, 1257.
- Roberts, H., Ziosi, M., Osborne, C., Saouma, L., Belias, A., Buchser, M., Casovan, A., Kerry, C., Meltzer, J., Mohit, S., Ouimette, M.-E., Renda, A., Stix, C., Teather, E., Woodhouse, R., & Zeng, Y. (2023). A comparative framework for AI regulatory policy. https://ceimia.org/wp-content/uploads/2023/02/Comparative-Framework-for-AI-Regulatory-Policy.pdf
- Rueda, J., Delgado, J., Parra Jounou, I., Hortal Carmona, J., Ausín, T., & Rodríguez-Arias, D. (2022). "Just" accuracy? Procedural fairness demands explainability in AI-based medical resource allocations. AI and Society. https://doi.org/10.1007/s00146-022-01614-9
- Shapley, L. S. (1951). A value for N-person games. RAND Corporation. https://www.rand.org/pubs/papers/P295.html
- Barocas, S. (2022, August 19). SRA22 Day 3: Keynote talk with Solon Barocas [Video]. YouTube. https://www.youtube.com/watch?v=Ft5rK1tTYyw
- Štrumbelj, E., & Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3), 647-665. https://doi.org/10.1007/s10115-013-0679-x
- Sundararajan, M., & Najmi, A. (2020). The many Shapley values for model explanation. In Proceedings of the 37th international conference on machine learning, 2020 (pp. 9269-9278). https://proceedings.mlr.press/v119/sundararajan20b.html
- Venkatasubramanian, S., & Alfano, M. (2020). The philosophical basis of algorithmic recourse. In Proceedings of the 2020 conference on fairness, accountability, and transparency, 2020 (pp. 284-293). https://doi.org/10.1145/3351095.3372876
- Verma, S., Dickerson, J., & Hines, K. (2020). Counterfactual explanations for machine learning: A review. arXiv:2010.10596 [cs, stat]. http://arxiv.org/abs/2010.10596
- Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law and Technology, 31(2), 841-888.
- Wachter, S., Mittelstadt, B., & Russell, C. (2021). Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law (SSRN Scholarly Paper ID 3792772). Social Science Research Network. https://doi.org/10.2139/ssrn.3792772
- Wallin, D. E. (1992). Legal recourse and the demand for auditing. The Accounting Review, 67(1), 121-147.
- Wang, J., Wiens, J., & Lundberg, S. (2021). Shapley flow: A graph-based approach to interpreting model predictions (arXiv:2010.14592). https://doi.org/10.48550/arXiv.2010.14592
- Zhou, J., Chen, F., & Holzinger, A. (2022). Towards explainability for AI fairness. In A. Holzinger, R. Goebel, R. Fong, T. Moon, K.-R. Müller, & W. Samek (Eds.), xxAI - Beyond explainable AI: International workshop, held in conjunction with ICML 2020, revised and extended papers, July 18, 2020, Vienna, Austria (pp. 375-386). Springer. https://doi.org/10.1007/978-3-031-04083-2_18