Towards Transparency by Design for Artificial Intelligence
2020, Science and Engineering Ethics
https://doi.org/10.1007/s11948-020-00276-4

Abstract
In this article, we develop the concept of Transparency by Design, which serves as practical guidance for promoting the beneficial functions of transparency while mitigating its challenges in automated decision-making (ADM) environments. With the rise of artificial intelligence (AI) and the ability of AI systems to make automated and self-learned decisions, calls for transparency about how such systems reach decisions have echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds promises that are difficult to realize in concrete applications. Indeed, the complexity of transparency for ADM reveals a tension between transparency as a normative ideal and its translation into practical application. To address this tension, we first review the concept of transparency, analyzing its challenges and limitations with respect to automated decision-making practices. We then draw on lessons learned from the development of Privacy by Design as a basis for developing the Transparency by Design principles. Finally, we propose a set of nine principles that cover relevant contextual, technical, informational, and stakeholder-sensitive considerations. Transparency by Design is a model that helps organizations design transparent AI systems by integrating these principles step by step and as an ex-ante value, not as an afterthought.
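The abstract names four kinds of considerations (contextual, technical, informational, stakeholder-sensitive) and stresses integrating the nine principles ex ante, during design rather than after deployment. As a minimal sketch of what that stance could look like in an engineering workflow — the nine principles are not enumerated in this excerpt, so every principle name, class, and method below is a hypothetical illustration, not the paper's actual framework — one might encode the review as a design-phase checklist that gates launch:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class PrincipleCategory(Enum):
    """The four consideration types named in the abstract."""
    CONTEXTUAL = "contextual"
    TECHNICAL = "technical"
    INFORMATIONAL = "informational"
    STAKEHOLDER_SENSITIVE = "stakeholder-sensitive"


@dataclass
class PrincipleCheck:
    """One design-phase principle and the evidence that it was addressed."""
    name: str                   # hypothetical label, not taken from the paper
    category: PrincipleCategory
    satisfied: bool = False
    evidence: str = ""          # e.g., a pointer to design documentation


@dataclass
class TransparencyByDesignReview:
    """Ex-ante review gate: every principle must be addressed before launch."""
    checks: List[PrincipleCheck] = field(default_factory=list)

    def record(self, name: str, category: PrincipleCategory,
               evidence: str) -> None:
        """Register a principle as addressed, with supporting evidence."""
        self.checks.append(
            PrincipleCheck(name, category, satisfied=True, evidence=evidence)
        )

    def outstanding(self) -> List[PrincipleCheck]:
        """Principles registered but not yet satisfied."""
        return [c for c in self.checks if not c.satisfied]

    def ready_to_ship(self) -> bool:
        """True only if at least one principle is registered and none remain open."""
        return bool(self.checks) and not self.outstanding()


# Usage sketch: principles are registered during design, not retrofitted.
review = TransparencyByDesignReview()
review.checks.append(PrincipleCheck("audience-appropriate disclosure",
                                    PrincipleCategory.STAKEHOLDER_SENSITIVE))
review.record("decision logging", PrincipleCategory.TECHNICAL,
              evidence="docs/adm-logging-design.md")
print(review.ready_to_ship())  # False: one principle is still outstanding
```

The point of the sketch is the gate: `ready_to_ship()` fails until every registered principle has been addressed, mirroring the model's insistence that transparency be built in from the start rather than bolted on afterwards.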