Putting AI ethics to work: are the tools fit for purpose?
AI and Ethics
https://doi.org/10.1007/S43681-021-00084-X
Abstract
Bias, unfairness, and a lack of transparency and accountability in Artificial Intelligence (AI) systems, together with the potential for misuse of predictive models in decision-making, have raised concerns about the ethical impact and unintended consequences of new technologies for society across every sector where data-driven innovation is taking place. This paper reviews the landscape of proposed ethical frameworks, focusing on those that go beyond high-level statements of principles and offer practical tools for applying these principles in the production and deployment of systems. We assess these practical frameworks through the lens of known best practices for impact assessment and audit of technology. We review other historical uses of risk assessments and audits and create a typology that allows us to compare current AI ethics tools to best practices found in previous methodologies from technology, environment, privacy, finance and engineering. We ana...
References (134)
- EU HLEG AI: Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment. European Commission (2020). https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence. Accessed 30 Aug. 2020
- Raji, I.D., Smart, A., White, R.N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., Barnes, P.: Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), Barcelona (2020). https://dl.acm.org/doi/pdf/10.1145/3351095.3372873. Accessed 16 Nov. 2020
- Madaio, M.A., Stark, L., Wortman Vaughan, J., Wallach, H.: Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (2020). https://doi.org/10.1145/3313831.3376445. Accessed 08 Oct. 2020
- Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M., Wirtz, J.: Corporate digital responsibility (2020)
- Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Daumé III, H., Crawford, K.: Datasheets for datasets. arXiv:1803.09010 [cs] (2020). Accessed 12 Jun. 2020
- World Economic Forum: Empowering AI leadership. World Economic Forum (2020). https://spark.adobe.com/page/RsXNkZANwMLEf/. Accessed 30 Sep. 2020
- Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., Sameki, M., Wallach, H., Walker, K., Allovus Design: Fairlearn: a toolkit for assessing and improving fairness in AI. Microsoft (2020). https://www.microsoft.com/en-us/research/uploads/prod/2020/05/Fairlearn_WhitePaper-2020-09-22.pdf. Accessed 13 Oct. 2020
- IEEE Standards Association: IEEE Draft Model Process for Addressing Ethical Concerns During System Design, P7000/D3. IEEE (2020). https://standards.ieee.org/project/7000.html. Accessed 04 Jun. 2020
- IEEE Standards Association: IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being, Std 7010. IEEE (2020). https://standards.ieee.org/industry-connections/ec/autonomous-systems.html. Accessed 30 Aug. 2020
- TensorFlow: Responsible AI. Tensorflow.org (2020). https://www.tensorflow.org/resources/responsible-ai
- Katell, M., Young, M., Dailey, D., Herman, B., Guetler, V., Tam, A., Binz, C., Raz, D., Krafft, P.M.: Toward situated interventions for algorithmic equity: lessons from the field. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (2020). https://doi.org/10.1145/3351095.3372874. Accessed 28 Jan. 2020
- Spiekermann, S., Winkler, T.: Value-based engineering for ethics by design. IEEE pre-print, arXiv:2004.13676 [cs] (2020). Accessed 06 Oct. 2020
- Partnership on AI: Welcome to the Artificial Intelligence Incident Database. The Partnership on AI (2020). https://incidentdatabase.ai/. Accessed 21 Nov. 2020
- Hasselbalch, G., Olsen, B., Tranberg, P.: White paper on data ethics in public procurement of AI-based services and solutions. DataEthics.eu (2020). https://dataethics.eu/wp-content/uploads/dataethics-whitepaper-april-2020.pdf. Accessed 25 Aug. 2020
- Diakopoulos, N.: Accountability in algorithmic decision making. Commun. ACM 59(2), 56-62 (2016). https://doi.org/10.1145/2844110
- Eubanks, V.: Automating inequality: how high-tech tools profile, police, and punish the poor. St. Martin's Publishing Group (2018)
- Council regulation (EU) 2016/679: On the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Off. J. L119/1 (2016). Available http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2016.119.01.0001.01.ENG&toc=OJ:L:2016:119:TOC. Accessed 23 Sep. 2017. (Online)
- Hagendorff, T.: The ethics of AI ethics-an evaluation of guidelines. Minds Mach. 30(1), 99-120 (2020). https://doi.org/10.1007/s11023-020-09517-8
- Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1(9), 389-399 (2019). https://doi.org/10.1038/s42256-019-0088-2
- Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. In: Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 3518482 (2020). Available https://papers.ssrn.com/abstract=3518482. Accessed 27 Jan. 2020. (Online)
- Morley, J., Floridi, L., Kinsey, L., Elhalal, A.: From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics (2019). https://doi.org/10.1007/s11948-019-00165-5
- Solove, D.J.: A taxonomy of privacy. Univ. Pa Law Rev. 154(3), 477 (2006). https://doi.org/10.2307/40041279
- Citron, D.K., Solove, D.J.: Privacy harms. In: Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 3782222 (2021). https://doi.org/10.2139/ssrn.3782222
- Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L.: The ethics of algorithms: mapping the debate. Big Data Soc. 3(2), 2053951716679679 (2016). https://doi.org/10.1177/2053951716679679
- Hirsch, D., Bartley, T., Chandrasekaran, A., Norris, D., Parthasarathy, S., Turner, P.N.: Business data ethics: emerging trends in the governance of advanced analytics and AI. In: The Ohio State University, Ohio State Legal Studies Research Paper No. 628 (2020). Available https://cpb-us-w2.wpmucdn.com/u.osu.edu/dist/3/96132/files/2020/10/Final-Report-1.pdf. (Online)
- Solove, D.J.: Privacy and power: computer databases and metaphors for information privacy. Stanford Law Rev. 53, 71 (2001)
- Raab, C.D.: Information privacy, impact assessment, and the place of ethics. Comput. Law Secur. Rev. 37, 105404 (2020). https://doi.org/10.1016/j.clsr.2020.105404
- Greene, D., Hoffmann, A.L., Stark, L.: Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. In: Presented at the Hawaii International Conference on System Sciences (2019). https://doi.org/10.24251/HICSS.2019.258
- Kazim, E., Koshiyama, A.: AI assurance processes. In: Social Science Research Network, Rochester, SSRN Scholarly Paper ID 3685087 (2020). https://doi.org/10.2139/ssrn.3685087
- Kind, C.: The term 'ethical AI' is finally starting to mean something. VentureBeat (2020). https://venturebeat.com/2020/08/23/the-term-ethical-ai-is-finally-starting-to-mean-something/. Accessed 23 Aug. 2020
- Ryan, M., Stahl, B.C.: Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications. J. Inf. Commun. Ethics Soc. (2020). https://doi.org/10.1108/JICES-12-2019-0138
- AlgorithmWatch: "AI Ethics Guidelines Global Inventory by AlgorithmWatch," AI Ethics Guidelines Global Inventory (2020). https://inventory.algorithmwatch.org. Accessed 11 Aug. 2020
- Schiff, D., Borenstein, J., Biddle, J., Laas, K.: AI ethics in the public, private, and NGO sectors: a review of a global document collection. IEEE Trans. Technol. Soc. (2021). https://doi.org/10.1109/TTS.2021.3052127
- Bird, S., et al.: Fairlearn: a toolkit for assessing and improving fairness in AI. Microsoft (2020). Available https://www.microsoft.com/en-us/research/uploads/prod/2020/05/Fairlearn_WhitePaper-2020-09-22.pdf. Accessed 13 Oct. 2020. (Online)
- Mitchell, M., et al.: Model cards for model reporting. Proc. Conf. Fairness Account. Transpar. (FAT* '19), 220-229 (2019). https://doi.org/10.1145/3287560.3287596
- Gebru, T., et al.: Datasheets for datasets (2020). Available http://arxiv.org/abs/1803.09010. Accessed 03 Dec. 2020. (Online)
- Crawford, K.: Atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press, New Haven (2021)
- Morgan, R.K.: Environmental impact assessment: the state of the art. Impact Assess. Proj. Apprais. 30(1), 5-14 (2012). https://doi.org/10.1080/14615517.2012.661557
- Clarke, R.: Privacy impact assessment: its origins and development. Comput. Law Secur. Rev. 25(2), 123-135 (2009). https://doi.org/10.1016/j.clsr.2009.02.002
- Information Commissioner's Office: Data protection impact assessments (2018). https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/. Accessed 07 Jun. 2018
- The Danish Institute for Human Rights: Human rights impact assessment guidance and toolbox - road-testing version. The Danish Institute for Human Rights (2016). https://www.humanrights.dk/business/tools/human-rights-impact-assessment-guidance-and-toolbox. Accessed 03 Feb. 2020
- Renn, O.: Risk Governance: Coping with Uncertainty in a Complex World. Earthscan (2008)
- Coates, J.F.: Some methods and techniques for comprehensive impact assessment. Technol. Forecast. Soc. Change 6, 341-357 (1974). https://doi.org/10.1016/0040-1625(74)90035-3
- IAIA: Technology Assessment (2009). https://www.iaia.org/wiki-details.php?ID=26. Accessed 26 Jan. 2021
- Palm, E., Hansson, S.O.: The case for ethical technology assessment (eTA). Technol. Forecast. Soc. Change 73(5), 543-558 (2006). https://doi.org/10.1016/j.techfore.2005.06.002
- STOA: Centre for AI | Panel for the Future of Science and Technology (STOA) | European Parliament (2021). https://www.europarl.europa.eu/stoa/en/centre-for-AI. Accessed 11 Feb. 2021
- Hennen, L.: Why do we still need participatory technology assessment? Poiesis Prax. 9, 27-41 (2012). https://doi.org/10.1007/s10202-012-0122-5
- CSPO: Participatory Technology Assessment | CSPO. Consortium for Science and Policy Outcomes (2021). https://cspo.org/areas-of-focus/pta/. Accessed 12 Feb. 2021
- Kiran, A., Oudshoorn, N.E.J., Verbeek, P.P.C.C.: Beyond checklists: toward an ethical-constructive technology assessment. J. Respons. Innov. 2(1), 5-19 (2015). https://doi.org/10.1080/23299460.2014.992769
- Suter, G.W., Barnthouse, L.W., O'Neill, R.V.: Treatment of risk in environmental impact assessment. Environ. Manage. 11(3), 295-303 (1987). https://doi.org/10.1007/BF01867157
- UN Environment: Assessing Environmental Impacts: A Global Review of Legislation. In: UNEP-WCMC's official website (2018). https://www.unep-wcmc.org/assessing-environmental-impacts--a-global-review-of-legislation. Accessed 12 Feb. 2021
- Glucker, A.N., Driessen, P.P.J., Kolhoff, A., Runhaar, H.A.C.: Public participation in environmental impact assessment: why, who and how? Environ. Impact Assess. Rev. 43, 104-111 (2013). https://doi.org/10.1016/j.eiar.2013.06.003
- IMA Europe: Life Cycle Assessment | IMA Europe. In: Industrial Mineral Association - Europe (2020). https://www.ima-europe.eu/eu-policy/environment/life-cycle-assessment. Accessed 06 May 2021
- Aven, T.: Risk assessment and risk management: review of recent advances on their foundation. Eur. J. Oper. Res. 253(1), 1-13 (2016). https://doi.org/10.1016/j.ejor.2015.12.023
- Edwards, M.M., Huddleston, J.R.: Prospects and perils of fiscal impact analysis. J. Am. Plann. Assoc. 76(1), 25-41 (2009). https://doi.org/10.1080/01944360903310477
- Pearce, D.W.: Cost-Benefit Analysis, 2nd edn. Macmillan International Higher Education (2016)
- Kemp, D., Vanclay, F.: Human rights and impact assessment: clarifying the connections in practice. Impact Assess. Proj. Apprais. 31(2), 86-96 (2013). https://doi.org/10.1080/14615517.2013.782978
- Kende-Robbe, C.: Poverty and social impact analysis: linking macroeconomic policies to poverty outcomes: summary of early experiences. IMF (2003). https://www.imf.org/en/Publications/WP/Issues/2016/12/30/Poverty-and-Social-Impact-Analysis-Linking-Macroeconomic-Policies-to-Poverty-Outcomes-16248. Accessed 12 Feb. 2021
- Roessler, B.: New ways of thinking about privacy (2008). https://doi.org/10.1093/oxfordhb/9780199548439.003.0038
- Westin, A.F.: Privacy and Freedom. Ig Publishing (1967)
- Westin, A.F.: Information Technology in a Democracy. Harvard University Press (1971)
- Stewart, B.: Privacy impact assessments. Priv. Law Policy Rep. 39(4) (1996). Available http://www.austlii.edu.au/au/journals/PLPR/1996/39.html. Accessed 17 Feb. 2021. (Online)
- Financial Reporting Council: Auditors | Audit and Assurance | Standards and Guidance for Auditors | Financial Reporting Council (2020). https://www.frc.org.uk/auditors/audit-assurance/standards-and-guidance. Accessed 26 Apr. 2021
- Rushby, J.: The interpretation and evaluation of assurance cases. In: Computer Science Laboratory, SRI International, Menlo Park CA 94025, USA, Technical Report SRI-CSL-15-01 (2015)
- Bloomfield, R., Khlaaf, H., Conmy, P.R., Fletcher, G.: Disruptive innovations and disruptive assurance: assuring machine learning and autonomy. Computer 52(9), 82-89 (2019). https://doi.org/10.1109/MC.2019.2914775
- International Organization for Standardization: "ISO - Standards," ISO (2021). https://www.iso.org/standards.html. Accessed 15
- International Organization for Standardization: "ISO - Certification," ISO (2021). https://www.iso.org/certification.html. Accessed 15 Jul. 2021
- PwC UK: "Understanding a financial statement audit," PricewaterhouseCoopers, UK (2013). Available https://www.pwc.com/gx/en/audit-services/publications/assets/pwc-understanding-financial-statement-audit.pdf. (Online)
- Brundage, M., et al.: Toward trustworthy AI development: mechanisms for supporting verifiable claims (2020). Available http://arxiv.org/abs/2004.07213. Accessed 16 Nov. 2020. (Online)
- Mökander, J., Floridi, L.: Ethics-based auditing to develop trustworthy AI. Minds Mach. (2021). https://doi.org/10.1007/s11023-021-09557-8
- Starr, C.: Social benefit versus technological risk. Science 165(3899), 1232-1238 (1969)
- Thompson, K.M., Deisler, P.F., Schwing, R.C.: Interdisciplinary vision: the first 25 years of the Society for Risk Analysis (SRA), 1980-2005. Risk Anal. 25(6), 1333-1386 (2005). https://doi.org/10.1111/j.1539-6924.2005.00702.x
- Beck, P.U.: Risk Society: Towards a New Modernity. SAGE (1992)
- Moses, K., Malone, R.: Development of risk assessment matrix for NASA Engineering and Safety Center. In: NASA Technical Reports Server (NTRS) (2004). https://ntrs.nasa.gov/citations/20050123548
- Hayne, C., Free, C.: Hybridized professional groups and institutional work: COSO and the rise of enterprise risk management. Account. Organ. Soc. 39(5), 309-330 (2014). https://doi.org/10.1016/j.aos.2014.05.002
- Lauterbach, A., Bonime, A.: Environmental risk, social risk, governance risk. Risk Manage. 3 (2018)
- Floridi, L., et al.: AI4People-an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. 28(4), 689-707 (2018). https://doi.org/10.1007/s11023-018-9482-5
- High Level Expert Group on AI: "Ethics guidelines for trustworthy AI," European Commission, Brussels, Text (2019). Available https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 23 May 2019. (Online)
- Freeman, R.E.: Strategic Management: A Stakeholder Approach. Cambridge University Press (2010)
- Donaldson, T., Preston, L.E.: The stakeholder theory of the corporation: concepts, evidence, and implications. Acad. Manage. Rev. 20(1), 65 (1995). https://doi.org/10.2307/258887
- Business Roundtable: "Our Commitment," Business Roundtable - Opportunity Agenda (2020). https://opportunity.businessroundtable.org/ourcommitment/. Accessed 05 Feb. 2021
- TensorFlow: "Responsible AI," TensorFlow (2020). https://www.tensorflow.org/resources/responsible-ai. Accessed 02 Nov. 2020
- Bantilan, N.: Themis-ml: a fairness-aware machine learning interface for end-to-end discrimination discovery and mitigation (2017). Available http://arxiv.org/abs/1710.06921. Accessed 13 Nov. 2020. (Online)
- Bellamy, R.K.E., et al.: AI Fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias (2018). Available http://arxiv.org/abs/1810.01943. Accessed 27 May 2021. (Online)
- Lee, M.S.A., Floridi, L., Singh, J.: Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00067-y
- Hutchinson, B., Mitchell, M.: 50 years of test (un)fairness: lessons for machine learning. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, New York, pp. 49-58 (2019). https://doi.org/10.1145/3287560.3287600
- Veale, M., Van Kleek, M., Binns, R.: Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, New York, pp. 440:1-440:14 (2018). https://doi.org/10.1145/3173574.3174014
- Hoffmann, A.L.: Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Inf. Commun. Soc. 22(7), 900-915 (2019). https://doi.org/10.1080/1369118X.2019.1573912
- Radford, J., Joseph, K.: Theory in, theory out: the uses of social theory in machine learning for social science. Front. Big Data (2020). https://doi.org/10.3389/fdata.2020.00018
- Institute for the Future and Omidyar Network: "Ethical OS" (2018). https://ethicalos.org/. Accessed 21 Jun. 2019
- Doteveryone: Consequence Scanning - doteveryone (2019). https://doteveryone.org.uk/project/consequence-scanning/. Accessed 18 Jun. 2019
- Madaio, M.A., Stark, L., Wortman Vaughan, J., Wallach, H.: Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, pp. 1-14 (2020). https://doi.org/10.1145/3313831.3376445
- Stephanidis, C., et al.: Seven HCI grand challenges. Int. J. Hum. Comput. Interact. 35(14), 1229-1269 (2019). https://doi.org/10.1080/10447318.2019.1619259
- Krippendorff, K.: Content analysis. In: International Encyclopedia of Communication, vol. 1. Oxford University Press, New York, pp. 8 (1989). Available http://repository.upenn.edu/asc_papers/22. Accessed 08 Jul. 2020. (Online)
- Smith, K.B.: Typologies, taxonomies, and the benefits of policy classification. Policy Stud. J. 30(3), 379-395 (2002). https://doi.org/10.1111/j.1541-0072.2002.tb02153.x
- Singh, A., et al.: PriMP visualization - Principled Artificial Intelligence Project. In: Harvard Law School, Berkman Klein Center for Internet and Society (2018). https://ai-hr.cyber.harvard.edu/primp-viz.html. Accessed 24 Jun. 2019
- Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., Bao, M.: The values encoded in machine learning research (2021). Available http://arxiv.org/abs/2106.15590. Accessed 25 Jul. 2021 (Online)
- International Organization for Standardization: "ISO 14001:2015," ISO (2021). https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/06/08/60857.html. Accessed 26 Jul. 2021
- Bengtsson, M.: How to plan and perform a qualitative study using content analysis. NursingPlus Open 2, 8-14 (2016). https://doi.org/10.1016/j.npls.2016.01.001
- Whittlestone, J., Nyrup, R., Alexandrova, A., Cave, S.: The role and limits of principles in AI ethics: towards a focus on tensions. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, pp. 195-200 (2019). https://doi.org/10.1145/3306618.3314289
- Clarke, T.: Accounting for Enron: shareholder value and stakeholder interests. Corp. Gov. Int. Rev. 13(5), 598-612 (2005). https://doi.org/10.1111/j.1467-8683.2005.00454.x
- du Plessis, J.J., Hargovan, A., Harris, J.: Principles of Contemporary Corporate Governance. Cambridge University Press (2018)
- Freeman, R. E.: Strategic management: a stakeholder approach. Pitman (1984)
- Foden, C.: Our structure. City of Lincoln Council (2019). https://www.lincoln.gov.uk/council/structure. Accessed 10 Jan. 2021
- Stanley, M.: UK Civil Service - Grades and Roles. In: Understanding Government (2020). https://www.civilservant.org.uk/information-grades_and_roles.html. Accessed 10 Jan. 2021
- National Crime Agency: Our leadership. In: National Crime Agency (2021). https://www.nationalcrimeagency.gov.uk/who-we-are/our-leadership. Accessed 10 Jan. 2021
- Badr, W.: Evaluating machine learning models fairness and bias. Medium (2019). https://towardsdatascience.com/evaluating-machine-learning-models-fairness-and-bias-4ec82512f7c3. Accessed 13 Nov. 2020
- Kaissis, G.A., Makowski, M.R., Rückert, D., Braren, R.F.: Secure, privacy-preserving and federated machine learning in medical imaging. Nat. Mach. Intell. (2020). https://doi.org/10.1038/s42256-020-0186-1
- Chapman, A., Missier, P., Simonelli, G., Torlone, R.: Capturing and querying fine-grained provenance of preprocessing pipelines in data science. Proc. VLDB Endow. 14(4), 507-520 (2020). https://doi.org/10.14778/3436905.3436911
- Information Commissioner's Office: Guidance on the AI auditing framework: draft guidance for consultation, p. 105 (2020)
- Mayring, P.: Qualitative content analysis: demarcation, varieties, developments. Forum Qual. Sozialforschung / Forum Qual. Soc. Res. (2019). https://doi.org/10.17169/fqs-20.3.3343
- Carrier, R., Brown, S.: Taxonomy: AI Audit, Assurance, and Assessment. For Humanity (2021). https://forhumanity.center/blog/taxonomy-ai-audit-assurance-and-assessment. Accessed 26
- Ada Lovelace Institute and DataKind UK: Examining the black box: tools for assessing algorithmic systems (2020). https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/. Accessed 23 Feb. 2021
- Patton, M.Q.: Qualitative Research and Evaluation Methods: Integrating Theory and Practice. SAGE Publications (2014)
- Krippendorff, K.: Content Analysis: An Introduction to Its Meth- odology. SAGE (2013)
- Lee, M.S.A., Singh, J.: The landscape and gaps in open source fairness toolkits. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, pp. 1-13 (2021). https://doi.org/10.1145/3411764.3445261
- Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. 54(6), 115:1-115:35 (2021). https://doi.org/10.1145/3457607
- Vakkuri, V., Kemell, K.-K., Kultanen, J., Siponen, M., Abrahamsson, P.: Ethically aligned design of autonomous systems: industry viewpoint and an empirical study, p. 18 (2019)
- Mulgan, G.: AI ethics and the limits of code(s). In: nesta (2019). https://www.nesta.org.uk/blog/ai-ethics-and-limits-codes/. Accessed 16 Sep. 2019
- Floridi, L.: Why information matters. In: The New Atlantis (2017). http://www.thenewatlantis.com/publications/why-information-matters. Accessed 14 Oct. 2020
- Kitchin, R.: The ethics of smart cities (2019). Available https://www.rte.ie/brainstorm/2019/0425/1045602-the-ethics-of-smart-cities/. Accessed 07 May 2019. (Online)
- Bietti, E.: From ethics washing to ethics bashing: a view on tech ethics from within moral philosophy. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, pp. 210-219 (2020). https://doi.org/10.1145/3351095.3372860
- Metcalf, J., Moss, E., Watkins, E.A., Singh, R., Elish, M.C.: Algorithmic impact assessments and accountability: the co-construction of impacts, p. 19 (2021)
- European Commission: Proposal for a Regulation laying down harmonised rules on artificial intelligence | Shaping Europe's digital future. In: European Commission, Brussels, Proposal (2021). Available https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence. Accessed 21 May 2021. (Online)
- Webster, G.: Translation: Personal Information Protection Law of the People's Republic of China (Draft) (Second Review Draft) | DigiChina. In: Stanford DigiChina Cyber Policy Unit (2021). https://digichina.stanford.edu/news/translation-personal-information-protection-law-peoples-republic-china-draft-second-review. Accessed 21 May 2021
- Lee, A., Sacks, S., Creemers, R., Shi, M., Webster, G.: China's draft privacy law adds platform self-governance, solidifies CAC's role | DigiChina. In: Stanford DigiChina Cyber Policy Unit (2021). https://digichina.stanford.edu/news/chinas-draft-privacy-law-adds-platform-self-governance-solidifies-cacs-role. Accessed 21 May 2021
- Jillson, E.: Aiming for truth, fairness, and equity in your company's use of AI. In: Federal Trade Commission (2021). https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai. Accessed 20 Apr. 2021
- Bryson, J.J.: The artificial intelligence of the ethics of artificial intelligence: an introductory overview for law and regulation. In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford Handbook of Ethics of AI, pp. 1-25. Oxford University Press (2020)
- CDEI: Types of assurance in AI and the role of standards. In: Centre for Data Ethics and Innovation Blog (2021). https://cdei.blog.gov.uk/2021/04/17/134/. Accessed 26 May 2021
- European Parliament: The adequate protection of personal data by the United Kingdom (2021). https://www.europarl.europa.eu/doceo/document/TA-9-2021-0262_EN.html. Accessed 26 May 2021
- Simonsen, J., Robertson, T.: Routledge International Handbook of Participatory Design. Routledge, London (2012)
- Beck, E.: P for Political: participation is not enough. Scand. J. Inf. Syst. 14(1) (2002). Available https://aisel.aisnet.org/sjis/vol14/iss1/1. (Online)
- Thuermer, G., Walker, J., Simperl, E., Carr, L.: When data meets citizens: an investigation of citizen engagement in data-driven innovation programmes. In: Presented at the 2nd Data Justice Conference, Cardiff University Online (2021)
- Sloane, M., Moss, E., Awomolo, O., Forlano, L.: Participation is not a design fix for machine learning (2020). Available http://arxiv.org/abs/2007.02423. Accessed 26 May 2021. (Online)