Ethical and Regulatory Developments in AI Governance
2023
Abstract
As artificial intelligence (AI) advances, so do concerns about bias, misuse, and larger societal impacts. Governments, international bodies, and the private sector are increasingly recognizing the urgency to regulate AI, leading to proposed frameworks such as the European Union's AI Act [1] and the U.S. Blueprint for an AI Bill of Rights [2]. This paper explores three critical angles in AI governance: (1) the ethics and legality of using large-scale training data without explicit permission, (2) the debate around industry self-regulation versus external oversight, and (3) inherent biases and fairness issues in AI data and systems. By examining these dimensions, we highlight both opportunities and pitfalls in the emerging regulatory landscape.
Related papers
International Scientific Journal for Research, 2023
Data ethics in AI plays a crucial role in ensuring responsible and trustworthy applications of machine learning (ML) and data governance. With the rise of AI technologies, ethical concerns such as bias, fairness, privacy, transparency, and accountability are becoming more significant. This paper explores the ethical challenges in machine learning and data governance, focusing on how they impact the development and deployment of AI systems. It examines the importance of establishing robust frameworks for responsible data science, emphasizing the need for comprehensive policies to manage ethical risks, data protection, and the socio-technical implications of AI. Additionally, it discusses potential solutions and best practices to address these challenges while fostering innovation and societal benefits through AI.
Law, Innovation and Technology (forthcoming)
In response to recent regulatory initiatives at the EU level, this article shows that training data for AI not only play a key role in the development of AI applications but are also currently inadequately captured by EU law. I focus on three central risks of AI training data: risks concerning data quality, discrimination, and innovation. Existing EU law, with the new copyright exception for text and data mining, adequately addresses only part of this risk profile. Therefore, the article develops the foundations for a discrimination-sensitive quality regime for data sets and AI training, which emancipates itself from the controversial question of the applicability of data protection law to AI training data. Furthermore, it spells out concrete guidelines for the re-use of personal data for AI training purposes under the GDPR. Ultimately, the legislative and interpretive task rests in striking an appropriate balance between individual protection and the promotion of innovation. The article finishes with an assessment of the proposal for an Artificial Intelligence Act in this respect.
Expert Systems, 2023
A growing number of articles are raising awareness of the different uses of artificial intelligence (AI) technologies for customers and businesses. Many authors discuss their benefits and possible challenges. However, for the time being, there is still limited research focused on AI principles and regulatory guidelines for the developers of expert systems like machine learning (ML) and/or deep learning (DL) technologies. This research addresses this knowledge gap in the academic literature. The objectives of this contribution are threefold: (i) it describes AI governance frameworks that were put forward by technology conglomerates, policy makers and intergovernmental organizations; (ii) it sheds light on the extant literature on "AI governance" as well as on the intersection of "AI" and "corporate social responsibility" (CSR); (iii) it identifies key dimensions of AI governance and elaborates on the promotion of accountability and transparency; explainability, interpretability and reproducibility; fairness and inclusiveness; privacy and safety of end users; as well as the prevention of risks and cybersecurity issues arising from AI systems. This research implies that all those who are involved in the research, development and maintenance of AI systems have social and ethical responsibilities to bear toward their consumers as well as toward other stakeholders in society.
Technology and Regulation, 2019
This article offers a brief overview of some of the ethical challenges raised by artificial intelligence (AI), in particular machine learning and data science, and summarizes and discusses a number of challenges for near-future regulation in this area. This includes the difficulties of moving from principles to more concrete measures and problems with implementing ethics by design and responsible innovation.
IJRASET, 2021
In this article we discuss ways AI can be fruitful and harmful at the same time and consider hurdles in implementing ethics and governance of AI. We conclude by presenting solutions to overcome these issues. Artificial intelligence (AI) is a technology that allows a computer system to mimic the human mind. AI, like humans, is capable of learning and developing itself by performing tasks such as planning, organizing, and executing numerous activities. However, as we develop and expand our understanding of AI, there are advantages and downsides that should be addressed. Privacy and security are vital, but they conflict with the advancement of AI technology, since computers and AI require a large quantity of data to comprehend and anticipate outcomes. With the advancement of technology, we should be able to maximize security and eliminate the current drawbacks.
The proliferation of Artificial Intelligence (AI) technologies in various sectors has raised profound ethical concerns regarding bias, privacy, and accountability. This paper examines these ethical implications, explores current regulatory and ethical frameworks, and proposes strategies to foster responsible AI development. Through a comprehensive literature review and analysis of case studies, the study identifies critical ethical challenges and emphasizes the importance of proactive ethical guidelines to mitigate risks. The findings underscore the need for interdisciplinary collaboration and regulatory oversight to ensure AI innovations are ethically sound and beneficial to society.
International Journal of Innovative Research in Computer Science and Technology (IJIRCST), 2025
The emergence of new technologies such as Artificial Intelligence (AI) has the potential to disrupt sectors, industries, governance, and day-to-day life activities. Beyond easing societal progress, the advancement of AI brings new opportunities, challenges, and efficiencies to be harnessed. This systematic research is dedicated to the profound AI ethical dilemmas that touch human values across society. This paper systematically reviews possible ethical dilemmas caused during the development and/or deployment of AI with respect to bias and fairness, transparency and explainability, accountability and responsibility, privacy and surveillance, and human autonomy. Using algorithm-based systems comes with the danger of "black box" opaque decision making. These gaps are particularly problematic in autonomous systems, where traditional concepts of legal and moral liability become incredibly difficult to assign, including in critical domains like healthcare and transportation. This paper analyzes the OECD (Organisation for Economic Co-operation and Development) AI Principles, the EU (European Union) Ethics Guidelines for Trustworthy AI, and the GDPR's right to explanation as it relates to AI ethics governance models. It critically analyzes the strengths and limitations of these approaches, identifying significant gaps in implementation, enforcement, and global alignment. The research highlights the tension between innovation and regulation, demonstrating how current self-regulatory measures frequently fall short of addressing systemic risks.
Since 2016, more than 80 AI ethics documents – including codes, principles, frameworks, and policy strategies – have been produced by corporations, governments, and NGOs. In this paper, we examine three topics of importance related to our ongoing empirical study of ethics and policy issues in these emerging documents. First, we review possible challenges associated with the relative homogeneity of the documents’ creators. Second, we provide a novel typology of motivations to characterize both obvious and less obvious goals of the documents. Third, we discuss the varied impacts these documents may have on the AI governance landscape, including what factors are relevant to assessing whether a given document is likely to be successful in achieving its goals.
Ethics: Scientific Research, Ethical Issues, Artificial Intelligence, 2023
Artificial Intelligence (AI) equips machines with the capacity to learn. AI frameworks employing machine learning can discern patterns within vast data sets and construct intricate, interconnected systems that yield results that enhance the effectiveness of decision-making processes. AI, in particular machine learning, has been positioned as an important element in contributing to as well as providing decisions in a multitude of industries. The use of machine learning in delivering decisions is based on the data that is used to train the machine learning algorithms. It is imperative that when machine learning applications are being considered that the data being used to train the machine learning algorithms are without bias, and the data is ethically used. This chapter focuses on the ethical use of data in developing machine learning algorithms. Specifically, this chapter will include the examination of AI bias and ethical use of AI, data ethics principles, selecting ethical data for AI applications, AI and data governance, and putting ethical AI applications into practice.

References (6)
- European Commission. (2022). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (AI Act). EU Publications. [Updated through 2022].
- The White House Office of Science and Technology Policy. (2022). Blueprint for an AI Bill of Rights. Retrieved from https://www.whitehouse.gov/ostp/.
- OpenAI. (2023, March). GPT-4 Technical Report. Retrieved from https://openai.com/research/gpt-4.
- Roose, K. (2023, March 15). OpenAI's GPT-4: What We Know So Far. The New York Times.
- Metz, C. (2023, April 11). Inside ChatGPT's Secret Data. The New York Times.
- Smith, J. (2023, May 2). Data Scraping Concerns Rise with AI Boom. The Wall Street Journal.