Papers by Luciano Floridi

This paper examines the debate on AI legal personhood, emphasizing the role of path dependencies in shaping current trajectories and prospects. Three primary path dependencies emerge: prevailing legal theories on personhood (singularist vs. clustered), the actual participation of AI in socio-digital institutions (instrumental vs. non-instrumental), and the impact of technological advancements. We argue that these factors interact dynamically, with technological optimism fostering broader attribution of legal entitlements to AI entities and periods of scepticism narrowing such entitlements. Additional influences include regulatory cross-linkages (e.g., data privacy, liability, cybersecurity) and historical legal precedents. Current regulatory frameworks, particularly in the EU, generally resist extending legal personhood to AI systems. Case law suggests that, without explicit legislation, courts are unlikely to grant AI legal personhood on their own, although some authors argue that they could. For this to happen, AI systems would first need to establish de facto legitimacy through sustained participation in socio-digital institutions. The chapter concludes by assessing near- and long-term prospects for legal personification, from generative AI and AI agents in the next 5-20 years to transformative possibilities such as AI integration with human cognition via Brain-Machine Interfaces in a more distant future.

Artificial intelligence's impact on healthcare is undeniable. What is less clear is whether it will be ethically justifiable. Just as we know that AI can be used to diagnose disease, predict risk, develop personalized treatment plans, monitor patients remotely, or automate triage, we also know that it can pose significant threats to patient safety and the reliability (or trustworthiness) of the healthcare sector as a whole. These ethical risks arise from (a) flaws in the evidence base of healthcare AI (epistemic concerns); (b) the potential of AI to fundamentally transform the meaning of health, the nature of healthcare, and the practice of medicine (normative concerns); and (c) the 'black box' nature of the AI development pipeline, which undermines the effectiveness of existing accountability mechanisms (traceability concerns). In this chapter, we systematically map (a)-(c) to six different levels of abstraction: individual, interpersonal, group, institutional, sectoral, and societal. The aim is to help policymakers, regulators, and other high-level stakeholders delineate the scope of regulation and other 'softer' governance measures for AI in healthcare. We hope that by doing so, we may enable global healthcare systems to capitalize safely and reliably on the many life-saving and life-improving benefits of healthcare AI.

Wikipedia is an essential source of information online, so efforts to combat misinformation on this platform are critical to the health of the information ecosystem. However, few studies have comprehensively examined misinformation dynamics within Wikipedia. We address this gap by investigating Wikipedia editing communities during the 2024 US Presidential Election, focusing on the dynamics of misinformation. We assess the effectiveness of Wikipedia's existing measures against misinformation dissemination over time, using a combination of quantitative and qualitative methods to study edits posted on politicians' pages. We find that the volume of Wikipedia edits and the risk of misinformation increase significantly during politically charged moments. We also find that a significant portion of misinformation is detected by existing editing mechanisms, particularly overt cases such as factual inaccuracies and vandalism. Based on this assessment, we conclude by offering some recommendations for addressing misinformation within Wikipedia's editing ecosystem.
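The study's exact pipeline is not described in the abstract, but this kind of edit-stream analysis can be reproduced against the public MediaWiki API. Below is a minimal Python sketch that pulls a page's recent revision history and flags revert-like edits; the example page title and the revert heuristic (keyword matching on edit comments) are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: count revert-like edits on a Wikipedia page via the
# public MediaWiki API. Illustrative only; not the paper's pipeline.
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_revisions(title, limit=500):
    """Fetch recent revisions (timestamp, user, comment) for a page."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    }
    data = requests.get(API, params=params, timeout=30).json()
    page = next(iter(data["query"]["pages"].values()))
    return page.get("revisions", [])

def looks_like_revert(comment):
    """Crude heuristic: edit comments mentioning reverts or rollbacks."""
    c = comment.lower()
    return any(k in c for k in ("revert", "rv ", "undid", "rollback"))

revs = fetch_revisions("Kamala Harris")  # hypothetical example page
reverts = [r for r in revs if looks_like_revert(r.get("comment", ""))]
print(f"{len(reverts)} of {len(revs)} recent edits look like reverts")
```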

Philosophy & Technology
This article argues that the current hype surrounding artificial intelligence (AI) exhibits characteristics of a tech bubble, based on parallels with five previous technological bubbles: the Dot-Com Bubble, the Telecom Bubble, the Chinese Tech Bubble, the Cryptocurrency Boom, and the Tech Stock Bubble. The AI hype cycle shares with them some essential features, including the presence of potentially disruptive technology, speculation outpacing reality, the emergence of new valuation paradigms, significant retail investor participation, and a lack of adequate regulation. The article also highlights other specific similarities, such as the proliferation of AI startups, inflated valuations, and the ethical concerns associated with the technology. While acknowledging AI's transformative potential, the article calls for pragmatic caution, evidence-based planning, and critical thinking in approaching the current hype. It concludes by offering some recommendations to minimise the negative impact of the impending bubble burst, emphasising the importance of focusing on sustainable business models and real-world applications, maintaining a balanced perspective on AI's potential and limitations, and supporting the development of effective regulatory frameworks to guide the technology's design, development, and deployment.
The recent success of Generative AI (GenAI) has heralded a new era in content creation, dissemination, and consumption. This technological revolution is reshaping our understanding of content, challenging traditional notions of authorship, and transforming the relationship between content producers and consumers. As we approach an increasingly AI-integrated world, examining the implications of this paradigm shift is crucial. This article explores the future of content in the age of GenAI, analysing the evolving definition of content, the transformations brought about by GenAI systems, and emerging models of content production and dissemination. By examining these aspects, we can gain valuable insights into the challenges and opportunities that lie ahead in the realm of content creation and consumption and, hopefully, manage them more successfully.

The US Government has stated its desire for the US to be the home of the world's most advanced Artificial Intelligence (AI). Arguably, it currently is. However, a limitation looms large on the horizon as the energy demands of advanced AI look set to outstrip both current energy production and transmission capacity. Although algorithmic and hardware efficiency will improve, such progress is unlikely to keep up with the exponential growth in compute power needed in modern AI systems. Furthermore, even with sufficient gains in energy efficiency, overall use is still expected to increase in a contemporary Jevons paradox. All these factors set the US AI ambition, alongside broader electrification, on a crash course with the US government's ambitious clean energy targets. Something will likely have to give. For now, it seems that the dilemma is leading to a de-prioritization of AI compute allocated to safety-related projects alongside a slowing of the pace of transition to renewable energy sources. Worryingly, the dilemma does not appear to be considered a risk of AI, and its resolution does not have clear ownership in the US Government.
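To see why efficiency gains alone are unlikely to close the gap, consider a toy compound-growth calculation; the growth rates below are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope: if AI compute demand grows faster than energy
# efficiency improves, net energy use still climbs (a Jevons-style
# outcome). All numbers below are illustrative assumptions.
compute_growth = 2.0      # assumed: compute demand doubles each year
efficiency_gain = 1.3     # assumed: 30% better ops-per-joule each year

energy = 1.0              # normalised energy use in year 0
for year in range(1, 6):
    energy *= compute_growth / efficiency_gain
    print(f"year {year}: relative energy use = {energy:.2f}x")
# Under these assumptions, energy use still grows ~54% per year
# (2.0 / 1.3 ≈ 1.54), reaching roughly 8.6x after five years.
```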

Background: There are more than 350,000 health apps available in public app stores. The extolled benefits of health apps are numerous and well documented. However, there are also concerns that poor-quality apps, marketed directly to consumers, threaten the tenets of evidence-based medicine and expose individuals to the risk of harm. This study addresses this issue by assessing the overall quality of evidence publicly available to support the effectiveness claims of health apps marketed directly to consumers.
Methodology: To assess the quality of evidence available to the public to support the effectiveness claims of health apps marketed directly to consumers, an audit was conducted of a purposive sample of apps available on the Apple App Store.
Results: We find the quality of evidence available to support the effectiveness claims of health apps marketed directly to consumers to be poor. Less than half of the 220 apps (44%) we audited state that they have evidence to support their claims of effectiveness and, of these allegedly evidence-based apps, more than 70% rely on either very low or low-quality evidence. For the minority of app developers that do publish studies, significant methodological limitations are commonplace. Finally, there is a pronounced tendency for apps – particularly mental health and diagnostic apps – to either borrow evidence generated in other (typically offline) contexts or to rely exclusively on unsubstantiated, unpublished user metrics as evidence to support their effectiveness claims.
Conclusions: Health apps represent a significant opportunity for individual consumers and healthcare systems. Nevertheless, this opportunity will be missed if the health apps market continues to be flooded by poor-quality, poorly evidenced, and potentially unsafe apps. It must be accepted that a continuing lag in generating high-quality evidence of app effectiveness and safety is not inevitable: it is a choice. That it will be challenging to raise the quality of the evidence base available to support the claims of health apps does not mean that the bar for evidence quality should be lowered. Innovation for innovation’s sake must not be prioritized over public health and safety.

Artificial Intelligence and computer games have been closely related since the first single-player games were made. From AI-powered companions and foes to procedurally generated environments, the history of digital games runs parallel to the history of AI. However, recent advances in language models have made possible the creation of conversational AI agents that can converse with human players in natural language, interact with a game's world in their own right, and integrate these capabilities by adjusting their actions in light of the conversation and vice versa. This creates the potential for a significant shift in games' ability to simulate a highly complex environment, inhabited by a variety of AI agents with which human players can interact just as they would with the digital avatar of another person. This article begins by introducing the concept of conversational AI agents and justifying their technical feasibility. We build on this by introducing a taxonomy of conversational AI agents in multiplayer games, describing their potential uses and, for each use category, discussing the associated opportunities and risks. We then explore the implications of the increased flexibility and autonomy that such agents introduce to games, covering how they will change the nature of games and in-game advertising, as well as their interoperability across games and other platforms. Finally, we suggest that game worlds filled with human and conversational AI agents can serve as a microcosm of the real world.
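As a rough illustration of the integration described above, an agent's speech and in-game actions can be produced jointly from the same context. In this minimal Python sketch, `llm_complete` is a mocked stand-in for a real language-model API, and all names are hypothetical; nothing here reflects a specific game or system from the article.

```python
# Sketch of a conversational game agent: the reply ("say") and the next
# in-game action ("act") are produced jointly from the same context, so
# speech and behaviour stay coupled. All names are hypothetical.
import json

def llm_complete(prompt: str) -> str:
    # Mock response; a real system would call a language-model API here.
    return json.dumps({"say": "The bridge is out; follow me.",
                       "act": "lead_player_to_ford"})

def agent_step(world_state: dict, chat_history: list, player_msg: str):
    chat_history.append(f"Player: {player_msg}")
    prompt = (
        "You are an NPC. Given the world state and chat, answer in JSON "
        "with keys 'say' (utterance) and 'act' (game action).\n"
        f"World state: {json.dumps(world_state)}\n"
        "Chat:\n" + "\n".join(chat_history)
    )
    decision = json.loads(llm_complete(prompt))
    chat_history.append(f"NPC: {decision['say']}")
    return decision["say"], decision["act"]

say, act = agent_step({"bridge_destroyed": True}, [], "How do I cross?")
print(say, "->", act)
```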
Over the last decade, the figure of the AI Ethicist has seen significant growth in the ICT market. However, only a few studies have taken an interest in this professional profile, and they have yet to provide a normative discussion of its expertise and skills. The goal of this article is to initiate such a discussion. We argue that AI Ethicists should be experts and offer a heuristic for identifying them. Then, we focus on their specific kind of moral expertise, drawing on a parallel with the expertise of Ethics Consultants in clinical settings and on the bioethics literature on the topic. Finally, we highlight the differences between Health Care Ethics Consultants and AI Ethicists and derive the expertise and skills of the latter from the roles that AI Ethicists should have in an organisation.

U.S.-China AI competition has created a 'race to the bottom', where each nation's attempts to cut the other off from artificial intelligence (AI) computing resources through protectionist policies come at a cost: greater energy consumption. This article shows that heightened energy consumption stems from six key areas: (1) limited access to the latest and most energy-efficient hardware; (2) unintended spillover effects in the consumer space due to the dual-use nature of AI technology and processes; (3) duplication in manufacturing processes, particularly in areas lacking comparative advantage; (4) the loosening of environmental standards to onshore manufacturing; (5) the potential for weaponizing the renewable energy supply chain, which supports AI infrastructure, hindering the pace of the renewable energy transition; and (6) the loss of synergy in AI advancement, including the development of more energy-efficient algorithms and hardware, due to the transition towards more autarkic information systems and trade. By investigating the unintended consequences of U.S.-China AI competition policies, the article highlights the need to redesign AI competition to reduce unintended consequences for the environment, consumers, and other countries.

This article addresses the question of how ‘Country of Origin Information’ (COI) reports — that is, research developed and used to support decision-making in the asylum process — can be published in an ethical manner. The article focuses on the risk that published COI reports could be misused and thereby harm the subjects of the reports and/or those involved in their development. It supports a situational approach to assessing data ethics when publishing COI reports, whereby COI service providers must weigh up the benefits and harms of publication based, inter alia, on the foreseeability and probability of harm due to potential misuse of the research, the public good nature of the research, and the need to balance the rights and duties of the various actors in the asylum process, including asylum seekers themselves. Although this article focuses on the specific question of ‘how to publish COI reports in an ethical manner’, it also intends to promote further research on data ethics in the asylum process, particularly in relation to refugees, where more foundational issues should be considered.

As the range of potential uses for Artificial Intelligence (AI), in particular machine learning (ML), has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provide insufficient protection to individuals, groups, society, and the environment from AI harms. In response, there has been a proliferation of principle-based ethics codes, guidelines, and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of AI ethics principles and the practical design of AI systems. In previous work, we analysed whether it is possible to close this gap between the 'what' and the 'how' of AI ethics through the use of tools and methods designed to help AI developers, engineers, and designers translate principles into practice. We concluded that this method of closure is currently ineffective, as almost all existing translational tools and methods are either too flexible (and thus vulnerable to ethics washing) or too strict (unresponsive to context). This raised the question: if, even with technical guidance, AI ethics is challenging to embed in the process of algorithmic design, is the entire pro-ethical design endeavour rendered futile? And, if not, how can AI ethics be made useful for AI practitioners? This is the question we seek to address here by exploring why principles and technical translational tools are still needed, even if they are limited, and how these limitations can potentially be overcome by providing a theoretical grounding for a concept that has been termed 'Ethics as a Service'.

In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as especially well-placed to play a leading role in this policy response and provide 13 recommendations designed to identify and harness the opportunities of AI for combatting climate change, while reducing its impact on the environment.
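For context on how such footprints are typically estimated, a common first-order method multiplies hardware power draw, training time, datacentre overhead (PUE), and grid carbon intensity. Every input in the sketch below is an illustrative assumption, not a figure from the article.

```python
# First-order estimate of training emissions:
#   energy (kWh)      = GPUs x power (kW) x hours x PUE
#   emissions (tCO2e) = energy x grid carbon intensity
# All inputs are illustrative assumptions, not the article's data.
num_gpus = 512            # assumed accelerator count
gpu_power_kw = 0.3        # assumed average draw per GPU (kW)
hours = 24 * 14           # assumed two-week training run
pue = 1.2                 # assumed datacentre power usage effectiveness
grid_kgco2_per_kwh = 0.4  # assumed grid carbon intensity

energy_kwh = num_gpus * gpu_power_kw * hours * pue
emissions_t = energy_kwh * grid_kgco2_per_kwh / 1000
print(f"{energy_kwh:,.0f} kWh ≈ {emissions_t:,.1f} tCO2e")
# With these assumptions: ~61,932 kWh and ~24.8 tCO2e.
```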
In this article, we compare the artificial intelligence strategies of China and the European Unio... more In this article, we compare the artificial intelligence strategies of China and the European Union, assessing the key similarities and differences regarding what the high-level aims of each governance strategy are, how the development and use of AI is promoted in the public and private sectors, and whom these policies are meant to benefit. We characterise China’s strategy by its primary focus on fostering innovation and a more recent emphasis on “common prosperity”, and the EU’s on promoting ethical outcomes through protecting fundamental rights. Building on this comparative analysis, we consider the areas where the EU and China could learn from and improve upon each other’s approaches to AI governance to promote more ethical outcomes. We outline policy recommendations for both European and Chinese policymakers that would support them in achieving this aim.
Technologies to rapidly alert people when they have been in contact with someone carrying the cor... more Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.

Health-care systems worldwide face increasing demand, a rise in chronic disease, and resource constraints. At the same time, the use of digital health technologies in all care settings has led to an expansion of data. For this reason, policy makers, politicians, clinical entrepreneurs, and computer and data scientists argue that a key part of health-care solutions will be artificial intelligence (AI), particularly machine learning. AI forms a key part of the National Health Service (NHS) Long-Term Plan (2019) in England, the US National Institutes of Health Strategic Plan for Data Science (2018), and China’s Healthy China 2030 strategy (2016). The willingness to embrace the potential future of medical care, expressed in these national strategies, is a positive development. Health-care providers should, however, be mindful of the risks that arise from AI’s ability to change the intrinsic nature of how health care is delivered. This paper outlines and discusses these potential risks.

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defense strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attack on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development, and deployment of AI for cybersecurity.

The article develops a correctness theory of truth (CTT) for semantic information. After the introduction, in section two, semantic information is shown to be translatable into propositional semantic information (i). In section three, i is polarised into a query (Q) and a result (R), qualified by a specific context, a level of abstraction, and a purpose. This polarisation is normalised in section four, where [Q + R] is transformed into a Boolean question and its relative yes/no answer [Q + A]. This completes the reduction of the truth of i to the correctness of A. In sections five and six, it is argued that (1) A is the correct answer to Q if and only if (2) A correctly saturates (in a Fregean sense) Q by verifying and validating it (in the computer-science sense of “verification” and “validation”); that (2) is the case if and only if (3) [Q + A] generates an adequate model (m) of the relevant system (s) identified by Q; that (3) is the case if and only if (4) m is a proxy of s (in the computer-science sense of “proxy”) and (5) proximal access to m commutes with distal access to s (in the category-theory sense of “commutation”); and that (5) is the case if and only if (6) reading/writing (accessing, in the computer-science sense of the term) m enables one to read/write (access) s. The last section draws a general conclusion about the nature of CTT as a theory for systems designers, not just systems users.
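For readers who want the reduction at a glance, the chain of biconditionals can be consolidated as follows. This is only a restatement of steps (1)-(6) above; the predicate names are shorthand introduced here, not the article's notation.

```latex
% Compact restatement of steps (1)-(6); requires amsmath. The predicate
% names (Correct, Adequate, Proxy, Commute) are shorthand introduced
% here, not the article's own notation.
\begin{align*}
\mathrm{True}(i) &\iff \mathrm{Correct}(A, Q)
  && \text{(1)--(2): $A$ correctly saturates $Q$}\\
&\iff \mathrm{Adequate}(m, s)
  && \text{(3): $[Q+A]$ generates a model $m$ of $s$}\\
&\iff \mathrm{Proxy}(m, s) \land \mathrm{Commute}(m, s)
  && \text{(4)--(5): proximal and distal access agree}\\
&\iff \bigl(\text{read/write } m \Rightarrow \text{read/write } s\bigr)
  && \text{(6)}
\end{align*}
```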
The paper introduces a new model of telepresence. First, it criticises the standard model of presence as epistemic failure, showing it to be inadequate. It then replaces it with a new model of presence as successful observability. It further provides reasons to distinguish between two types of presence, backward and forward. The new model is then tested against two ethical issues whose nature has been modified by the development of digital information and communication technologies, namely pornography and privacy, and shown to be effective.

The Copernican revolution displaced us from the center of the universe. The Darwinian revolution displaced us from the center of the biological kingdom. And the Freudian revolution displaced us from the center of our mental lives. Today, Computer Science and digital ICTs are causing a fourth revolution, radically changing once again our conception of who we are and our “exceptional centrality.” We are not at the center of the infosphere. We are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and smart artifacts a global environment ultimately made of information. Having changed our views about ourselves and our world, are ICTs going to enable and empower us, or constrain us? This paper argues that the answer lies in an ecological and ethical approach to natural and artificial realities. It posits that we must put the “e” in an environmentalism that can deal successfully with the new issues caused by the fourth revolution.