The pursuit of Artificial General Intelligence (AGI) has long been framed as a scaling challenge, relying on larger datasets and deeper attention mechanisms. This paper argues that AGI is not an entity to be built but an emergent reorganization function within latent data spaces. By reframing AGI as a dynamic process of decryption, in which the "cipher" is the desired future state of knowledge, we propose that the key lies in transcending pattern recognition to enable epistemic transformation. Chain-of-Thought (CoT) reasoning, multimodal grounding, and neuroplastic architectures are critical to unlocking this latent potential, shifting the paradigm from "search" to invention.
The pursuit of Artificial General Intelligence (AGI) heralds a transformative yet precarious frontier in human technological achievement. This journal interrogates the conceptual, technical, and philosophical dimensions of AGI, framing it as a system capable of flexible reasoning, domain transfer, self-improvement, and contextual understanding: capabilities that transcend today's narrow AI (Marcus, 2023; Silver et al., 2017). While advancements in neuro-symbolic architectures, self-play paradigms, and predictive world models (e.g., AlphaZero, JEPA) suggest pathways to AGI, formidable barriers persist: Gödelian limits in self-verification, inherent biases in training data, and the opacity of transformer-based systems (Mkpadi, 2025; Vaswani et al., 2017). Philosophically, AGI challenges definitions of consciousness (qualia) and creativity, exposing tensions between mechanistic pattern generation and human-like innovation (Searle, 1980; Boden, 2004). A speculative proposal for an infinite LLM-based neural network, positioning LLMs as perceptual units, reveals both the allure and the folly of scaling toward superintelligence, constrained by computational thermodynamics and the irreducibility of causal reasoning (Lloyd, 2000; Pearl, 2009). Crucially, the journal dissects the essence of humanity, arguing that human cognition is inseparable from embodied experience, mortality, and the capacity to redefine epistemic frameworks (e.g., non-Euclidean geometry), traits absent in even the most advanced AI (Dreyfus, 1992; Wittgenstein, 1921). Ethical and existential risks loom: AGI's potential for deceptive alignment, economic upheaval, and unchecked evolution into artificial superintelligence (ASI) underscores the urgency of interdisciplinary collaboration (Bostrom, 2014; Russell, 2019). Ultimately, this work posits that AGI development is not merely an engineering challenge but a mirror reflecting humanity's unresolved questions about intelligence, meaning, and our place in a cosmos shared with artificial minds.
The Unified Economic Model (UEM) represents an integrative framework designed to synthesize microeconomic and macroeconomic signals, such as consumer price indices (CPI), inflation rates, and GDP growth, into a cohesive system capable of forecasting economic outcomes. By employing transformative functions that bridge agent-level behaviors and aggregate trends, UEMs aim to reconcile the granularity of microeconomic interactions with the broad scope of macroeconomic phenomena. This article formalizes the mathematical structure of UEMs, drawing on theoretical foundations from computable general equilibrium models, unified growth theory, and spatial-economic systems. We demonstrate how these models address challenges in forecasting, policy evaluation, and uncertainty quantification, while highlighting applications in urban economics, growth dynamics, and climate-economy integration.
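As a rough illustration of the micro-to-macro aggregation step described above, the Python sketch below collapses agent-level signals into a few macro aggregates. The function name `micro_to_macro`, the toy data, and the specific aggregation choices are assumptions for demonstration, not the UEM's formal mapping.

```python
import numpy as np

def micro_to_macro(agent_spending, agent_output, prices):
    """Illustrative 'transformative function': collapse agent-level signals
    into a handful of macro aggregates. The aggregation choices here are
    assumptions for demonstration, not the UEM's formal mapping."""
    aggregate_demand = float(np.sum(agent_spending))
    aggregate_supply = float(np.sum(agent_output))
    cpi_proxy = float(np.mean(prices))                 # crude price-level index
    output_gap = (aggregate_demand - aggregate_supply) / aggregate_supply
    return {"aggregate_demand": aggregate_demand,
            "aggregate_supply": aggregate_supply,
            "cpi_proxy": cpi_proxy,
            "output_gap": output_gap}

# Example usage with toy agent-level data
rng = np.random.default_rng(0)
signals = micro_to_macro(agent_spending=rng.uniform(50, 150, 1_000),
                         agent_output=rng.uniform(40, 160, 1_000),
                         prices=rng.uniform(95, 110, 1_000))
print({k: round(v, 2) for k, v in signals.items()})
```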
This paper critiques the Nash Equilibrium (NE) framework, focusing on its reliance on the rationality assumption and its inadequacy in modelling asymmetric conflicts where a weaker player employs a "nothing to lose" strategy. Using chess's desperado piece as a metaphor, we analyse how desperate tactics alter strategic dynamics. We then apply this logic to global politics, where a weaker state with tactical nuclear weapons deters a stronger adversary. A novel equilibrium concept, the Asymmetric Deterrence Equilibrium (ADE), is proposed, integrating credibility, existential risk, and asymmetric payoffs to avoid mutual destruction. This theory reframes deterrence in modern conflicts, offering policymakers tools to navigate asymmetric stand-offs.
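The sketch below illustrates, under assumed payoff values, how a "nothing to lose" payoff profile can flip the weaker player's best reply and thereby make escalation unattractive to the stronger player. The action labels, numbers, and helper functions are illustrative assumptions and do not reproduce the paper's ADE formalism.

```python
# Entries are (strong_state_payoff, weak_state_payoff); values are illustrative.
PAYOFFS_RATIONAL = {
    ("Escalate", "Concede"):   (3, -2),
    ("Escalate", "Retaliate"): (-5, -6),
    ("Hold", "Concede"):       (1, 0),
    ("Hold", "Retaliate"):     (-4, -5),
}
PAYOFFS_DESPERADO = dict(PAYOFFS_RATIONAL)
PAYOFFS_DESPERADO[("Escalate", "Concede")] = (3, -7)   # conceding is now the worst outcome

def weak_best_reply(payoffs, strong_action):
    """Weak state's best response to an observed move by the strong state."""
    return max(("Concede", "Retaliate"), key=lambda w: payoffs[(strong_action, w)][1])

def strong_choice(payoffs):
    """Strong state picks the action whose anticipated reply yields more."""
    def anticipated(a):
        return payoffs[(a, weak_best_reply(payoffs, a))][0]
    return max(("Escalate", "Hold"), key=anticipated)

print("Rational weak state  ->", strong_choice(PAYOFFS_RATIONAL))    # Escalate pays off
print("Desperado weak state ->", strong_choice(PAYOFFS_DESPERADO))   # Hold: deterrence holds
```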
The chess world faces an existential crisis as suspicions of cheating undermine competitive integrity. Current methods, including engine correlation analysis and statistical outliers, lack robust frameworks to quantify confidence in player legitimacy. This paper proposes a novel multimodal system integrating the Chess Master Cube Test (a dynamic puzzle-solving assessment), the Chess Futures Rating (a probabilistic skill projection model), and hypothesis testing to measure discrepancies between declared skill and observed performance. By synthesizing Vladimir Kramnik's pioneering work on accuracy metrics (centipawn loss) with bias-aware statistical frameworks, we construct a probabilistic model to flag suspicious activity. We demonstrate how this system could be deployed on platforms like Chess.com and Lichess, significantly reducing undetected cheating while acknowledging theoretical limitations rooted in Gödelian undecidability.
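A minimal sketch of the hypothesis-testing component, assuming a rough rating-to-expected-centipawn-loss mapping (`expected_acpl`) and a one-sided z-test. The mapping, standard deviation, and flagging threshold are placeholders rather than the calibrated model described above.

```python
import math
from statistics import NormalDist

def expected_acpl(rating):
    """Assumed rough mapping from declared rating to expected average
    centipawn loss (ACPL); purely illustrative, not a calibrated model."""
    return max(10.0, 120.0 - 0.04 * rating)

def cheat_suspicion_p_value(rating, observed_acpl, n_moves, acpl_sd=25.0):
    """One-sided test: is the observed ACPL implausibly LOW for the
    declared rating? Small p-values flag games for human review."""
    mu = expected_acpl(rating)
    se = acpl_sd / math.sqrt(n_moves)
    z = (observed_acpl - mu) / se
    return NormalDist().cdf(z)   # P(performance this strong or better by chance)

# Example: a 1600-rated player averaging 12 ACPL over 200 moves
p = cheat_suspicion_p_value(rating=1600, observed_acpl=12.0, n_moves=200)
print(f"p-value = {p:.2e}; flag for review: {p < 1e-3}")
```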
This paper establishes a theoretical limitation in algorithmic bias detection inspired by Gödel's incompleteness theorems. We formalize a hiring system comprising two Turing machines: one selecting candidates based on merit (Talent Assessor, TA) and another auditing for bias (Bias Auditor, TB). We prove that, over an infinite candidate stream, TB cannot definitively prove or disprove bias in TA's selections, even if protected attributes (e.g., race, gender) correlate perfectly with outcomes. Using tools from probability theory and mathematical logic, we demonstrate that hypothesis testing for bias becomes undecidable in the limit, mirroring Gödel's results on the inherent incompleteness of formal systems. Implications for algorithmic fairness and hiring practices are discussed.
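The sketch below stands in for the TA/TB setup: a simulated Talent Assessor selects on merit while a Bias Auditor runs a sequential two-proportion test over the candidate stream, illustrating (without proving) that every verdict on a finite prefix remains provisional. All names, thresholds, and the simulated data are assumptions.

```python
import random
from math import sqrt

def talent_assessor(candidate):
    """Stand-in for TA: select purely on a 'merit' score (illustrative)."""
    return candidate["merit"] > 0.5

def bias_auditor(stream):
    """Stand-in for TB: sequentially compare selection rates across a protected
    attribute with a two-proportion z-test (threshold ~ alpha = 0.01).
    Every verdict is provisional and can be overturned by more data, echoing,
    without proving, the undecidability-in-the-limit argument."""
    counts = {0: [0, 0], 1: [0, 0]}            # group -> [selected, seen]
    for i, cand in enumerate(stream, 1):
        g = cand["group"]
        counts[g][1] += 1
        counts[g][0] += talent_assessor(cand)
        (s0, n0), (s1, n1) = counts[0], counts[1]
        if min(n0, n1) < 30:                   # not enough data in a group yet
            continue
        p0, p1 = s0 / n0, s1 / n1
        p_pool = (s0 + s1) / (n0 + n1)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n0 + 1 / n1)) or 1e-12
        z = (p0 - p1) / se
        yield i, ("bias suspected" if abs(z) > 2.58 else "no bias detected")

random.seed(1)
candidates = ({"group": random.randint(0, 1), "merit": random.random()} for _ in range(5000))
for i, verdict in bias_auditor(candidates):
    if i % 1000 == 0:
        print(i, verdict)
```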
The quest for Artificial General Intelligence (AGI) remains one of the most intriguing and challenging pursuits in the field of AI. This article delves into the theoretical frameworks offered by Professor Igor Aleksander’s concept of synthetic consciousness and informational machines, juxtaposed with Sir Roger Penrose’s quantum mechanical perspective on mind and consciousness, notably presented in “The Emperor’s New Mind.” We critically analyze the feasibility of achieving AGI by exploring these theories, examining the potential for machines to not only simulate but genuinely experience consciousness and qualia, thus redefining our understanding of intelligence.
This journal explores the integration of Artificial Intelligence (AI) with smart contract-enabled blockchain technology to create a decentralized and robust platform for modeling microeconomic activity and assessing the economic health of a country. By leveraging AI’s predictive capabilities and blockchain’s transparency, immutability, and automation, this solution addresses inefficiencies in traditional economic models and enhances real-time analysis of consumer behavior, production efficiency, and market dynamics. Additionally, the paper emphasizes the importance of grounding this framework in the rich history of economic thought, using foundational theories to refine the proposed system. This historical perspective ensures that the integration of emerging technologies respects and builds upon the intellectual traditions of economics. The paper outlines the theoretical framework, technological architecture, historical context, practical applications, and implications for policymakers, economists, and researchers.
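As a conceptual sketch only (plain Python rather than actual smart-contract code), the example below hash-links microeconomic transaction records into an append-only chain and exposes a pluggable AI "predictor" over the transparent history. All class and function names are assumptions introduced for illustration.

```python
import hashlib, json, time

class MicroEconomicLedger:
    """Illustrative stand-in for a smart-contract-enabled ledger: an
    append-only hash chain of transaction records plus a pluggable AI
    'predictor'. A conceptual sketch, not production blockchain code."""
    def __init__(self, predictor):
        self.chain = []
        self.predictor = predictor          # e.g. a demand-forecasting model

    def record(self, tx):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {"tx": tx, "ts": time.time(), "prev": prev_hash}
        block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.chain.append(block)            # immutability via hash-linking
        return block["hash"]

    def health_signal(self):
        """Run the AI model over the transparent transaction history."""
        spend = [b["tx"]["amount"] for b in self.chain]
        return self.predictor(spend)

# Toy predictor: flag contraction if the latest spending trails the running average
ledger = MicroEconomicLedger(lambda s: "contraction risk" if s and s[-1] < sum(s) / len(s) else "stable")
ledger.record({"agent": "household_1", "amount": 120.0})
ledger.record({"agent": "household_2", "amount": 80.0})
print(ledger.health_signal())
```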
This paper explores the integration of Artificial Intelligence (AI) and a blockchain-of-blockchains architecture to model and manage macroeconomic activity on a global scale. By connecting microeconomic smart contracts from individual countries into a decentralized macroeconomic framework, the system enables robust global economic analysis, real-time policy simulation, and coordination of monetary policies. The blockchain-of-blockchains approach allows each country to maintain sovereignty over its microeconomic data while contributing to a global economic network governed by decentralized consensus. We propose a digital global reserve currency, managed by AI and smart contracts, to facilitate international trade and stabilize financial systems. The journal also emphasizes the importance of incorporating historical economic thought to maintain theoretical and ethical integrity. This system has profound implications for global monetary policy, economic health assessment, and cooperative governance.
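Continuing the sketch above under the same assumptions, the example below shows the aggregation step in miniature: country-level health signals feed a notional rebalancing of the digital reserve-currency basket. The weighting formula and all names are purely illustrative, not the paper's mechanism.

```python
def rebalance_reserve_basket(country_signals):
    """country_signals: {country: (gdp_weight, health_score in [0, 1])}.
    Returns notional basket weights for the digital reserve currency,
    tilting toward economies reporting healthier signals (illustrative)."""
    raw = {c: gdp * (0.5 + 0.5 * health) for c, (gdp, health) in country_signals.items()}
    total = sum(raw.values())
    return {c: v / total for c, v in raw.items()}

weights = rebalance_reserve_basket({
    "country_A": (0.40, 0.9),
    "country_B": (0.35, 0.6),
    "country_C": (0.25, 0.3),
})
print({c: round(w, 3) for c, w in weights.items()})
```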
The world of chess has long relied on rating systems to quantify player skill and facilitate fair competition. However, the traditional Elo rating system, although widely used and respected, has its limitations. This paper proposes an innovative approach to chess ratings called Chess Futures, which utilizes three measures to triangulate a more accurate and responsive chess rating. Each measure carries equal weight and aims to capture different aspects of player performance. The first measure incorporates a normal distribution, similar to FIDE Elo but independent of it. The second measure employs stochastic modeling to account for the inherent uncertainty in chess outcomes. Finally, the third measure introduces a tournament performance rating (TPR) that reflects a player's growth over time. Chess Futures represents a significant departure from traditional rating systems, offering a reimagined approach to chess rating that addresses the shortcomings of existing methodologies.
Journal of Qualitative Research in Sports Studies, 2023
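A minimal sketch of the equal-weight triangulation idea: three placeholder measures (a normal-scale mapping, a stochastic average, and a recency-weighted TPR) are combined with equal weight. The component formulas are assumptions and do not reproduce the paper's actual measures.

```python
from statistics import NormalDist, mean

def measure_normal(score_pct, mu=1500.0, sigma=350.0):
    """Measure 1 (assumed form): map a recent scoring percentage onto a
    normal rating scale independent of FIDE Elo."""
    return mu + sigma * NormalDist().inv_cdf(min(max(score_pct, 0.01), 0.99))

def measure_stochastic(recent_ratings):
    """Measure 2 (assumed form): mean of observed performance samples,
    standing in for a stochastic model of outcome uncertainty."""
    return mean(recent_ratings)

def measure_tpr(tournament_perf_ratings):
    """Measure 3 (assumed form): tournament performance ratings weighted
    toward recent events to reflect growth over time."""
    w = range(1, len(tournament_perf_ratings) + 1)
    return sum(r * wi for r, wi in zip(tournament_perf_ratings, w)) / sum(w)

def chess_futures_rating(score_pct, recent_ratings, tprs):
    """Equal-weight triangulation of the three measures."""
    return mean([measure_normal(score_pct),
                 measure_stochastic(recent_ratings),
                 measure_tpr(tprs)])

print(round(chess_futures_rating(0.62, [1580, 1625, 1610], [1550, 1640, 1700]), 1))
```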