
Critical Response to Geoffrey Hinton's Post

2025, EDIZIONI AKME'

Abstract

This paper by Dr. Luca Martini (AKME s.c., Imola) offers a structured counterpoint to Geoffrey Hinton's recent warning that artificial intelligence could soon reprogram its own code, crossing an "irreversible epistemic threshold." Martini agrees that such a possibility must be taken seriously but reframes the debate around verifiable reasoning and human ownership.

Key Arguments

Clarifying the Risk

Distinguishes between autonomous optimization, already present in AutoML or adaptive reinforcement learning, and true self-rewriting of source code, which would be the computational equivalent of DNA mutation. Warns that public discourse often conflates these two levels of autonomy.

Intellectual Property and Human Agency

In co-designed systems such as Ysarmute and Andromeda, the architecture, functions, and purpose remain human creations. Economic rights and authorship stay with the human designer until a machine independently writes code without recognizable instructions, a scenario that would require new legal frameworks.

Truth and Evidentiary Bases

Highlights the current absence of verifiable evidence structures in mainstream AI, which generates "plausible texts" rather than "verifiable conclusions." Presents Ysarmute as a solution: every decision is tied to visible, navigable, demonstrable evidence (notes, sketches, calculations, images), allowing anyone to trace the logic behind an output.

Limits of Hinton's Proposal

Critiques Hinton for raising ethical alarms without offering systemic tools or governance mechanisms. Argues that safety lies not in centralized control but in relational, distributed architectures where traceability and shared accountability are native features.

Conclusion

Martini contends that the decisive factor in AI's future is not the degree of autonomy it achieves but the traceability of its origins, the verifiability of its reasoning, and the collective ownership of its cognitive processes.
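The two levels of autonomy the paper distinguishes can be sketched in a few lines of Python. This is an illustrative assumption of the present summary, not code from Martini's systems: the first function only adjusts data (parameters), while the second generates and executes new source code at runtime.

```python
# Level 1: autonomous optimization. The program updates its own
# parameters (as in AutoML or adaptive reinforcement learning),
# but its source code never changes.
def optimize_step(weights, gradients, lr=0.1):
    """One gradient-descent update: the data changes, the code does not."""
    return [w - lr * g for w, g in zip(weights, gradients)]

# Level 2: self-rewriting. The program emits new source code and runs
# it, altering its own behavior at runtime: the "computational
# equivalent of DNA mutation" the paper warns about.
def self_rewrite():
    new_source = (
        "def mutated():\n"
        "    return 'behavior not authored in the original source'"
    )
    namespace = {}
    exec(new_source, namespace)  # the program now executes code it wrote itself
    return namespace["mutated"]()

print(optimize_step([1.0, 2.0], [0.5, 0.5]))  # parameters move, code is intact
print(self_rewrite())                         # the code base itself has grown
```

The point of the contrast is that only the second pattern changes what the program *is*, which is why conflating it with ordinary hyperparameter tuning muddies the public debate.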
The paper positions Ysarmute and Andromeda as concrete models for an era of shared memory, situated intelligence, and distributed responsibility, where truth is built through transparent alliances between humans and AI rather than left to “blind autonomy.”
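The traceability the paper attributes to Ysarmute, where every conclusion is linked to navigable evidence, can be illustrated with a minimal data-structure sketch. The class and field names below are assumptions of this summary, not Ysarmute's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One inspectable artifact behind a decision (hypothetical schema)."""
    kind: str       # e.g. "note", "sketch", "calculation", "image"
    reference: str  # where a reader can locate and examine it

@dataclass
class Decision:
    """A conclusion that is only 'verifiable' if it carries an evidence trail."""
    conclusion: str
    evidence: list = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A conclusion without inspectable evidence is, in the paper's
        # terms, a "plausible text" rather than a "verifiable conclusion".
        return len(self.evidence) > 0

d = Decision("the beam section is adequate")
print(d.is_verifiable())  # False: a bare claim, no trail to follow
d.evidence.append(Evidence("calculation", "calc-sheet-12"))
print(d.is_verifiable())  # True: anyone can trace the logic to its basis
```

The design point is that verifiability is a property of the record, not of the prose: an output becomes checkable only when its evidence links travel with it.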