AI GOVERNANCE AND CONTROL
Abstract
This dialogue explores the fundamental distinction between governance and control in complex adaptive systems, using Norbert Wiener's cybernetic framework as a conceptual foundation. Beginning with an examination of Wiener's perspectives on the machine age, the conversation evolves to investigate how governance—characterized by establishing frameworks, principles, and boundaries that guide behavior without dictating every action—offers a more nuanced approach than direct control for managing sophisticated AI systems. The discussion then deepens through biological analogies, particularly examining DNA not as a deterministic blueprint but as a multi-dimensional governance framework that enables both stability and adaptation. This biological lens reveals how genetic regulatory networks balance constraint with flexibility, suggesting that effective AI governance similarly requires multi-level regulatory mechanisms that can accommodate emergence and adaptation while maintaining alignment with human values. The resulting theoretical framework transcends traditional control paradigms, proposing instead that AI development should incorporate principles of evolutionary adaptation within carefully designed governance boundaries, enabling beneficial innovation while mitigating existential risks.
Related papers
AI-based systems are "black boxes," resulting in massive information asymmetries between the developers of such systems and consumers and policymakers. In order to bridge this information gap, this article proposes a conceptual framework for thinking about governance for AI. Many sectors of society rapidly adopt digital technologies and big data, resulting in the quiet and often seamless integration of AI, autonomous systems, and algorithmic decision-making into billions of human lives. 1,2 AI and algorithmic systems already guide a vast array of decisions in both private and public sectors. For example, private global platforms, such as Google and Facebook, use AI-based filtering algorithms to control access to information. AI algorithms that control self-driving cars must decide how to weigh the safety of passengers and pedestrians. 3 Various applications, including security and safety decision-making systems, rely heavily on AI-based face recognition algorithms. And a recent study from Stanford University describes an AI algorithm that can deduce the sexuality of people on a dating site with up to 91 percent accuracy. 4 The capabilities evidenced in this study have raised alarm, and as AI technologies move toward broader adoption, some voices in society have expressed concern about the unintended consequences and potential downsides of widespread use of these technologies. To ensure transparency, accountability, and explainability for the AI ecosystem, governments, civil society, the private sector, and academia must be at the table to discuss governance mechanisms that minimize the risks and possible downsides of AI and autonomous systems while harnessing the full potential of this technology. 5 Yet the process of designing a governance ecosystem for AI, autonomous systems, and algorithms is complex for several reasons.
As researchers at the University of Oxford point out, 3 devising separate regulatory solutions for decision-making algorithms, AI, and robotics could misinterpret their legal and ethical challenges as unrelated, which no longer reflects today's systems: algorithms, hardware, software, and data are always part of AI and autonomous systems. Regulating ahead of time is difficult for any industry, and although AI technologies are evolving rapidly, they are still in the development stages. A global AI governance system must be flexible enough to accommodate cultural differences and bridge gaps across different national legal systems. While there are many approaches we can take to design a governance structure for AI, one option is to take inspiration from the development and evolution of governance structures that act on the Internet environment. Thus, here we discuss different issues associated with governance of AI systems, and introduce a conceptual framework for thinking about governance for AI, autonomous systems, and algorithmic decision-making processes.
The Nature of AI
Although AI-based applications are increasingly adopted in hospitals, courtrooms, schools, at home, and on the road to support (and in some instances, even guide) human decision-making,
Proceedings of the 2018 Conference on Artificial Life (ALIFE 2018), 2018
The influence of Artificial Intelligence (AI) and Artificial Life (ALife) technologies upon society, and their potential to fundamentally shape the future evolution of humankind, are topics very much at the forefront of current scientific, governmental and public debate. While these might seem like very modern concerns, they have a long history that is often disregarded in contemporary discourse. Insofar as current debates do acknowledge the history of these ideas, they rarely look back further than the origin of the modern digital computer age in the 1940s–50s. In this paper we explore the earlier history of these concepts. We focus in particular on the idea of self-reproducing and evolving machines, and potential implications for our own species. We show that discussion of these topics arose in the 1860s, within a decade of the publication of Darwin's The Origin of Species, and attracted increasing interest from scientists, novelists and the general public in the early 1900s. After introducing the relevant work from this period, we categorise the various visions presented by these authors of the future implications of evolving machines for humanity. We suggest that current debates on the co-evolution of society and technology can be enriched by a proper appreciation of the long history of the ideas involved.
Jerome De Cooman, "From the Regulation of Artificial Intelligence by Society to the Regulation of Society by Artificial Intelligence: All Along the Watchtower", in H. Jacquemin (ed.), Time to Reshape the Digital Society, Brussels, Larcier, 2021
This paper (author preprint) discusses the bidimensional definition of regulation by design, i.e., any alteration of human or technological behaviour through algorithmic code or data (p. 448). It argues that cyberspace made possible the advent of the digital society: if code is the new regulation, coders are the new regulators. Questions nevertheless remain as to how law can ensure appropriate regulation-by-design mechanisms in the AI Age. This conceptual paper identifies these questions and proposes two solutions, one procedural and one substantive, that would render by-design regulation through AI more fundamental-rights-proof.
OSF (Open Science Framework) Preprints
This article explores the hypothesis that intelligence, rather than humanity, drives evolution. As artificial intelligence systems become increasingly sophisticated, we must reconsider traditional evolutionary frameworks that have historically been centered on human-centric perspectives. Drawing parallels between genetic evolution and the development of artificial intelligence, we examine how intelligence has evolved through biological processes and is now manifesting in artificial systems. This evolution raises profound questions about humanity's role in guiding this new form of intelligence and the ethical implications of its development.
Systems Research and Behavioral Science, 2004
This contribution is based on the Ludwig von Bertalanffy Lecture delivered by the author at the 47th Conference of the International Society for the System Sciences (ISSS) in Crete, 7 July 2003. The conference was organized around the issue ‘Conscious Evolution of Humanity: Using Systems Thinking to Construct Agoras of the Global Village’. This article explores the potential and actual contributions of cybernetics to organizational and societal evolution. The focus is on the models and conceptual tools of managerial cybernetics. When properly used these can become powerful pivots of an ‘evolution by design’, as opposed to an evolution at the mercy of mere chance. Copyright © 2004 John Wiley & Sons, Ltd.
Humanities and Social Sciences Communications, 2024
The rapid advancement and deployment of Artificial Intelligence (AI) poses significant regulatory challenges for societies. While it has the potential to bring many benefits, the risks of commercial exploitation or unknown technological dangers have led many jurisdictions to seek a legal response before measurable harm occurs. However, the lack of technical capabilities to regulate this sector, despite the urgency to do so, has resulted in regulatory inertia. Given the borderless nature of this issue, an internationally coordinated response is necessary. This article focuses on the theoretical framework being established in relation to the development of international law applicable to AI and the regulatory authority to create and monitor enforcement of said law. The authors argue that, despite current attempts to that end, the road ahead remains full of obstacles that must be tackled before the above-mentioned elements see the light of day.
Journal of Ecohumanism, 2024
Research on AI governance is important for constraining misuse, reducing new risks, and addressing the economic and political disruptions that advanced AI systems could bring about, and that emerging norms, focal points, and governance institutions aim to prevent. The potential public benefits of the policy community re-using AI research are enormous, including reduced economic instability. A fundamental challenge in AI governance is one of cognitive framing: governing AI research requires understanding the new kinds of safety risks, performance goals, and intended applications that advanced AI systems will make possible. Specifically, the letter focuses on how AI research could mitigate issues such as the possibility of AI capabilities becoming concentrated within a small and hard-to-regulate group of actors, and ultimately recommends prioritizing open research and collaboration, bringing long-term social and economic concerns to the forefront of coalitions as AI becomes an increasingly important aspect of the future economy and society.
Social Science Research Network, 2019
2020
This presentation discusses a notion encountered across disciplines, and in different facets of human activity: autonomous activity. We engage it in an interdisciplinary way. We start by considering the reactions and behaviors of biological entities to biotechnological intervention. An attempt is made to characterize the degree of freedom of embryos & clones, which show openness to different outcomes when the epigenetic developmental landscape is factored in. We then consider the claim made in programming and artificial intelligence that automata could show self-directed behavior as to the determination of their step-wise decisions on courses of action. This question remains largely open and calls for some important qualifications. We try to make sense of the presence of claims of freedom in agency, first in common sense, then by ascribing developmental plasticity in biology and biotechnology, and in the mapping of programmed systems in the presence of environmental cues and self-re...
Policy and Society, 2025
The rapid and widespread diffusion of generative artificial intelligence (AI) has unlocked new capabilities and changed how content and services are created, shared, and consumed. This special issue builds on the 2021 Policy and Society special issue on the governance of AI by focusing on the legal, organizational, political, regulatory, and social challenges of governing generative AI. This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation. The article then highlights a comprehensive framework to govern generative AI, emphasizing the need for adaptive, participatory, and proactive approaches. The articles in this special issue stress the urgency of developing innovative and inclusive approaches to ensure that generative AI development is aligned with societal values. They explore the need for adaptation of data governance and intellectual property laws, propose a complexity-based approach for responsible governance, analyze how the dominance of Big Tech is exacerbated by generative AI developments and how this affects policy processes, highlight the shortcomings of technocratic governance and the need for broader stakeholder participation, propose new regulatory frameworks informed by AI safety research and learning from other industries, and highlight the societal impacts of generative AI.