
Aldo Pisano
I hold an M.A. in Philosophy from the University of Calabria (2018), where I wrote a thesis on the ethics and anthropology of technology, co-supervised by Carlo Rovelli at Aix-Marseille Université. I am currently a Ph.D. student in Learning Sciences and Digital Technologies, focusing on moral philosophy. As part of my doctoral research, I am a visiting researcher at the UNESCO Chair RELIA (University of Nantes) and at the City University of New York (CUNY). I work as a tenured secondary-school teacher and collaborate with courses in Bioethics, Digital Ethics, and Philosophical Anthropology at the University of Calabria. I serve on the executive committee of the Italian Society for AI Ethics (SIpEIA), and I am a member of the Italian Philosophical Society, the Italian Society of Moral Philosophy, and the Bioethics Council. Since 2021, I have been part of the national board of Inventio (Filò – University of Bologna). I also contribute to the editorial teams of MagIA (University of Turin) and Ritiri Filosofici. My research interests include the ethics of artificial intelligence, narrative ethics, moral education, and the teaching of philosophy.
Supervisor: Professor Ines Crispini
Papers by Aldo Pisano
argue for the centrality of critical reflection and free argumentation in fostering democratic processes. Anchored in Arendt's view of individuals as social actors entering the world through action and discourse, the study underscores the importance of educational environments centred on dialogue and the co-construction of knowledge. These environments enable the development of skills complementary to computational thinking, with divergent thinking supporting pluralism and democracy. In the context of AI, the increasing reliance on mathematical models risks prioritising algorithmic cognitive processes, rooted in an erroneous presumption that AI is inherently accurate and infallible. This paper critiques such assumptions and advocates strengthening debate, critical thinking, and constructive reasoning as tools to safeguard democratic ideals. By rehabilitating dialectics as a mode of engagement, it reaffirms the role of pluralism in problem-solving and social deliberation. The study calls for education that promotes active thinking, dialogue, and free debate, countering the "tyranny of truth". This approach resists delegating responsibility to abstract systems, encourages frame analysis, and supports democratic processes. Ultimately, it positions debate as an ethical and political
films, focusing on a Narrative Ethics approach. It begins by examining historical representations in Miyazaki's fictional worlds, which reflect both personal and collective trauma from WWII on multiple levels, highlighting the landscapes altered by war. The analysis then shifts to the timeless and utopian structures in Miyazaki's works, particularly in relation to ecology and the Anthropocene, seen through the interplay of reality and imagination, adulthood and childhood. Recurrent themes include the relationship between nature and technology, which marks the boundaries between historical and timeless contexts, as well as between adulthood and childhood. Furthermore, Miyazaki's films emphasize meta-empathy and the ethics of care, encouraging viewers to connect with characters who care for others, for nature, and even for inanimate entities. Through these portrayals, Miyazaki's characters take on a performative role, fostering a sensitivity towards nature as a living system within the Anthropocene era.
key issues: responsibility and potential limits to human autonomy. The design of autonomous weapons should prioritize transparency and preserve human autonomy in decision-making, especially in morally challenging situations. Automation bias highlights the risk of treating AI as infallible because of its mathematical programming; this bias undermines human deliberation and violates ethical theories at both the metaethical and normative levels. Starting from the HCI model, an ethics-by-design approach for AI is necessary, one that provides support while allowing users to retain responsibility. Trusting autonomous weapons requires ensuring that machine autonomy does not compromise human decision-making, preserving the complexity of ethical choices and avoiding the reduction of ethics to mathematical generalizations. Users should have the freedom to disregard AI advice and act according to the situation, thereby assuming responsibility.