TR-2010007: Robust Knowledge of Rationality
2010
Abstract
Stalnaker provided an example of a perfect information game in which common knowledge of rationality does not yield backward induction. However, in his example, knowledge is treated as defeasible: players forfeit their knowledge of rationality at some vertices. This is not how 'knowledge' is understood in epistemology, where, unlike belief, it is not subject to revision. In this respect, Stalnaker's example better fits 'rationality and common belief of rationality' than 'common knowledge of rationality.' In order to represent knowledge in the belief revision setting, we introduce the notion of 'robust knowledge,' which is maintained whenever possible during belief revision. We show that robust knowledge of Stalnaker rationality in games of perfect information yields backward induction.
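For concreteness, here is a minimal sketch of the backward induction procedure on a finite perfect-information game tree (our illustration, not the paper's formalism; the `Leaf`/`Node` encoding and the example payoffs are assumptions made for the example):

```python
# A minimal sketch of backward induction on a finite perfect-information
# game tree: solve every subgame from the leaves up, letting the player
# who moves at each vertex pick the child whose solved outcome is best
# for her.

from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class Leaf:
    payoffs: Tuple[float, ...]      # one payoff per player

@dataclass
class Node:
    player: int                     # index of the player to move here
    children: List["Tree"]

Tree = Union[Leaf, Node]

def backward_induction(t: Tree) -> Tuple[float, ...]:
    """Payoff profile reached when every player best-responds at every
    vertex, solving the tree from the leaves upward."""
    if isinstance(t, Leaf):
        return t.payoffs
    outcomes = [backward_induction(c) for c in t.children]  # solve subgames
    return max(outcomes, key=lambda p: p[t.player])         # mover's best

# A tiny two-move example: player 0 moves first, then player 1.
game = Node(0, [Leaf((1, 0)),
                Node(1, [Leaf((0, 2)), Leaf((3, 1))])])
print(backward_induction(game))     # -> (1, 0): player 0 opts out at once
```

The epistemic debate discussed in this paper and the related papers below concerns what assumptions about knowledge and belief justify playing this way, not the procedure itself.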
Related papers
Notre Dame Journal of Formal Logic, 2008
We develop a logical system that captures two different interpretations of what extensive games model, and we apply this to a long-standing debate in game theory between those who defend the claim that common knowledge of rationality leads to backward induction or subgame perfect (Nash) equilibria and those who reject this claim. We show that a defense of the claim à la Aumann (1995) rests on a conception of extensive game playing as a one-shot event in combination with a principle of rationality that is incompatible with it, while a rejection of the claim à la Reny (1988) assumes a temporally extended, many-moment interpretation of extensive games in combination with implausible belief revision policies. In addition, the logical system provides an original inductive and implicit axiomatization of rationality in extensive games based on relations of dominance rather than the usual direct axiomatization of rationality as maximization of expected utility.
gtcenter.org
We propose a logical system in which a notion of the structure of a game is formally defined and the meaning of sequential rationality is formulated. We provide a set of decision criteria which, given a sufficiently high order of mutual belief of the game structure and of every player following these criteria, entails Backward Induction decisions in generic perfect information games. We say that a player is rational if the player follows these criteria in his/her decisions. The set of mutual beliefs is also necessary, in the sense that no mutual belief of lower order can entail the Backward Induction decisions. These conditions are determined by the length of the game structure, and they never involve common belief. Moreover, we give a set of epistemic conditions for subgame perfect equilibria for any perfect information game, which requires that every player follow these decision criteria and that there be mutual belief of the equilibrium strategy and of the game structure.
Archive for Mathematical Logic, 2021
In strategic situations, agents base actions on knowledge and beliefs. This includes knowledge about others' strategies and preferences over strategy profiles, but also about other external factors. Bernheim and Pearce in 1984 independently defined the game-theoretic solution concept of rationalizability, which is built on the premise that rational agents will only take actions that are a best response to some situation they consider possible. This accounts for other agents' rationality as well, limiting the strategies to which a particular agent must respond and enabling further elimination until the strategies stabilize. We seek to generalize rationalizability to account not only for actions but for knowledge of the world as well. This will enable us to examine the interplay between action-based and knowledge-based rationality. We give an account of what it means for an action to be rational relative to a particular state of affairs, and in turn relative to a state of knowledge. We present a class of games, Epistemic Messaging Games (EMG), with a communication stage that clarifies the epistemic state among the players prior to the players' actions. We use a history-based model, which frames individual knowledge in terms of local projections of a global history. With this framework, we give an account of rationalizability for subclasses of EMG.
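As a rough sketch of the elimination idea behind rationalizability (ours, not this paper's EMG machinery), the following restricts beliefs to point beliefs over the opponent's surviving pure strategies; full rationalizability also allows probabilistic beliefs, so this is a simplified variant. The payoff matrices in the example are made up:

```python
# Iterated elimination of never-best responses in a finite two-player
# game, with beliefs restricted to point beliefs over pure strategies.

def best_responses(payoff, own, opp):
    """Strategies in `own` that are a best response to SOME opponent
    strategy in `opp`, where payoff[i][j] is the payoff of own i vs opp j."""
    keep = set()
    for j in opp:
        best = max(payoff[i][j] for i in own)
        keep |= {i for i in own if payoff[i][j] == best}
    return keep

def rationalizable(payoff1, payoff2):
    """Eliminate for both players simultaneously until a fixed point."""
    rows = set(range(len(payoff1)))
    cols = set(range(len(payoff1[0])))
    while True:
        new_rows = best_responses(payoff1, rows, cols)
        new_cols = best_responses(payoff2, cols, rows)  # payoff2[col][row]
        if new_rows == rows and new_cols == cols:
            return rows, cols
        rows, cols = new_rows, new_cols

# Example 2x2 game: (row 0, col 0) is the unique surviving pair.
p1 = [[3, 0], [2, 1]]            # row player's payoffs, indexed [row][col]
p2 = [[3, 2], [0, 1]]            # column player's payoffs, indexed [col][row]
print(rationalizable(p1, p2))    # -> ({0}, {0})
```

Each round uses the opponent's *surviving* strategies only, which is how accounting for the other agent's rationality enables further elimination.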
Research in Economics, 1999
We use a universal, extensive form interactive beliefs system to provide an epistemic characterization of a weak and a strong notion of rationalizability with independent beliefs. The weak solution concept is equivalent to backward induction in generic perfect information games where no player moves more than once in any play. The strong solution concept is related to explicability and is outcome equivalent to backward induction in generic games of perfect information.
Synthese, 2009
Aumann has proved that common knowledge of substantive rationality implies the backward induction solution in games of perfect information. Stalnaker has proved that it does not. The jury is still out concerning the epistemic conditions for backward induction, the "oldest idea in game theory" (Aumann, 1995, p. 635). Aumann and Stalnaker take conflicting positions in the debate: the former claims that common "knowledge" of "rationality" in a game of perfect information entails the backward-induction solution; the latter that it does not. Of course there is nothing wrong with either of their relevant formal proofs; rather, as pointed out by Halpern, there are differences between their interpretations of the notions of knowledge, belief, strategy and rationality. Moreover, as pointed out by Binmore, Bonanno, Bicchieri, Reny, Brandenburger and others, the reasoning underlying the backward induction method seems to give rise to a fundamental paradox: in order even to start the reasoning, a player assumes that (common knowledge of, or some form of common belief in) "rationality" holds at all the last decision nodes (and so the obviously irrational leaves are eliminated); but then, in the next reasoning step (going backward along the tree), some of these (last) decision nodes are eliminated as being incompatible with (common belief in) "rationality"! Hence the assumption behind the previous reasoning step is now undermined: the reasoning player can now see that, if those decision nodes now declared "irrational" were ever reached, the only way this could happen is if (common belief in) "rationality" failed. Hence she was wrong to assume (common belief in) "rationality" when she was reasoning about the choices made at those last decision nodes. This whole line of argument seems to undermine itself!
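To make the paradox concrete, here is a toy three-legged centipede (our illustration; the payoff numbers are made up): backward induction analyses the later nodes under an assumption of rationality, and then concludes that rational play never reaches them.

```python
# Payoffs (p1, p2) if the mover stops at stage k; if everyone passes,
# the game ends with (2, 3). Player 1 moves at stages 0 and 2,
# player 2 at stage 1.
stop_payoffs = [(1, 0), (0, 2), (3, 1)]
pass_payoff = (2, 3)

def bi_choice(stage):
    """('stop' or 'pass', resulting payoff) for the mover at `stage`,
    assuming backward-induction play at all later stages."""
    mover = stage % 2
    later = (bi_choice(stage + 1)[1] if stage + 1 < len(stop_payoffs)
             else pass_payoff)
    if stop_payoffs[stage][mover] >= later[mover]:
        return "stop", stop_payoffs[stage]
    return "pass", later

print(bi_choice(0))   # -> ('stop', (1, 0))
# BI play stops immediately, so stages 1 and 2 -- the very nodes the
# backward reasoning analysed under assumed rationality -- are never
# reached by rational play; actually reaching them would call the
# rationality assumption into question.
```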
International Game Theory Review, 2007
Game-theoretic solution concepts describe sets of strategy profiles that are optimal for all players in some plausible sense. Such sets are often found by recursive algorithms like iterated removal of strictly dominated strategies in strategic games, or backward induction in extensive games. Standard logical analyses of solution sets use assumptions about players in fixed epistemic models for a given game, such as mutual knowledge of rationality. In this paper, we propose a different perspective, analyzing solution algorithms as processes of learning which change game models. Thus, strategic equilibrium gets linked to fixed-points of operations of repeated announcement of suitable epistemic statements. This dynamic stance provides a new look at the current interface of games, logic, and computation.
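As a toy rendering of this dynamic stance (ours, not the paper's logical machinery), one can model repeated announcement as restricting a set of states to those satisfying a statement evaluated in the *current* model, iterated to a fixed point. With "none of my strategies in the current model is strictly dominated" as the announced statement, the fixed point coincides with iterated elimination of strictly dominated strategies. The prisoner's-dilemma payoff table below is an assumed example:

```python
# Repeated announcement as model restriction, iterated to a fixed point.

def announce_until_fixed_point(states, statement):
    """statement(s, states) -> bool, evaluated against the CURRENT model;
    restrict repeatedly until announcing changes nothing."""
    while True:
        survivors = {s for s in states if statement(s, states)}
        if survivors == states:
            return survivors
        states = survivors

# States are strategy profiles of a 2x2 prisoner's dilemma:
# (row, col) -> (u1, u2); strategy 1 is "defect".
payoff = {
    (0, 0): (2, 2), (0, 1): (0, 3),
    (1, 0): (3, 0), (1, 1): (1, 1),
}

def undominated(profile, states):
    """True if neither coordinate of `profile` is strictly dominated
    within the strategies still present in the current model."""
    rows = {r for r, _ in states}
    cols = {c for _, c in states}
    r, c = profile
    row_dom = any(all(payoff[(r2, c2)][0] > payoff[(r, c2)][0] for c2 in cols)
                  for r2 in rows if r2 != r)
    col_dom = any(all(payoff[(r2, c2)][1] > payoff[(r2, c)][1] for r2 in rows)
                  for c2 in cols if c2 != c)
    return not (row_dom or col_dom)

print(announce_until_fixed_point(set(payoff), undominated))   # -> {(1, 1)}
```

The point of the dynamic view is that the same fixed-point scheme covers other solution algorithms by swapping in a different announced statement.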
The jury is still out concerning the epistemic conditions for backward induction, the "oldest idea in game theory" ([2, p. 635]). Aumann [2] and Stalnaker [31] take contradictory positions in the debate: Aumann claims that common 'knowledge' of 'rationality' in a game of perfect information entails the backward-induction solution; Stalnaker that it does not. (Others agree with Stalnaker in disagreeing with Aumann: for example, Samet and Reny [26] also put forward arguments against Aumann's epistemic characterisation of subgame-perfect equilibrium. Section 5 is devoted to a discussion of related literature.) Of course there is nothing wrong with any of their relevant formal proofs; rather, as pointed out by Halpern [22], there are differences between their interpretations of the notions of knowledge, belief, strategy and rationality. Moreover, as pointed out by Binmore, Bonanno [17], Bicchieri [13], Reny [26], Brandenburger [18] and others, the reasoning underlying the backward induction method seems to give rise to a fundamental paradox (the so-called "BI paradox"): in order even to start the reasoning, a player assumes that (common knowledge of, or some form of common belief in) Rationality holds at all the last decision nodes (and so the obviously irrational leaves are eliminated); but then, in the next reasoning step (going backward along the tree), some of these (last) decision nodes are eliminated as being incompatible with (common belief in) Rationality! Hence the assumption behind the previous reasoning step is now undermined: the reasoning player can now see that, if those decision nodes now declared "irrational" were ever reached, the only way this could happen is if (common belief in) Rationality failed. Hence she was wrong to assume (common belief in) Rationality when she was reasoning about the choices made at those last decision nodes. This whole line of arguing seems to undermine itself!

In this paper we use as a foundation the relatively standard and well-understood setting of Conditional Doxastic Logic (CDL) and its "dynamic" version, the logic PAL-CDL introduced by Johan van Benthem, obtained by adding to CDL operators for truthful public announcements [!ϕ]ψ. In fact, we consider a slight extension of this last setting, namely the logic APAL-CDL, obtained by further adding dynamic operators for arbitrary announcements [!]ψ (as in [3]). We use this formalism to capture a novel notion of "dynamic rationality" and to investigate its role in decision problems and games. As usual in these discussions, we take a deterministic stance, assuming that the initial state of the world at the beginning of the game already fully determines the future play, and thus the unique outcome, irrespective of the players' (lack of) knowledge of future moves. We do not, however, require that the state of the world determine what would happen if that state were not the actual state. That is, we do not need to postulate the existence of any "objective counterfactuals"; instead, we only need "subjective counterfactuals": in the initial state, not only is the future of the play specified, but also the players' beliefs about each other, as well as their conditional beliefs, pre-encoding their possible revisions of belief. The players' conditional beliefs express what one may call their "propensities", or "dispositions", to revise their beliefs in particular ways if given particular pieces of new information.
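A toy model may help fix ideas (our illustration, not the CDL formalism): conditional beliefs can be read off a plausibility order over states, and revising by evidence E means believing whatever holds at the most plausible states inside E. The state names and the "rational" proposition below are assumptions made for the example:

```python
# Conditional beliefs pre-encoding revision via a plausibility order.

plausibility = {"w1": 0, "w2": 1, "w3": 2}      # lower rank = more plausible

def believes(prop, condition):
    """B(prop | condition): prop holds at all most-plausible states in
    `condition` (both given as sets of state names)."""
    relevant = condition & set(plausibility)
    if not relevant:
        return True                              # belief is vacuously held
    best_rank = min(plausibility[w] for w in relevant)
    best = {w for w in relevant if plausibility[w] == best_rank}
    return best <= prop

rational = {"w1", "w3"}                          # states where play is rational
print(believes(rational, {"w1", "w2", "w3"}))    # True: believed outright
print(believes(rational, {"w2", "w3"}))          # False: given the surprise
                                                 # that w1 is ruled out, the
                                                 # belief is revised away
```

The second query is the "subjective counterfactual" at work: the disposition to abandon the belief in rationality upon surprising evidence is fixed in advance by the plausibility order, with no objective counterfactuals needed.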
2021
The dominant theories of rational decision making assume what we will call logical omniscience. That is, they assume that when facing a decision problem, an agent can perform all relevant computations and determine the truth value of all relevant logical/mathematical claims. This assumption is unrealistic when, for example, we offer bets on remote digits of π or Goldbach’s conjecture; or when an agent faces a computationally intractable planning problem. Furthermore, the assumption of logical omniscience creates contradictions in cases where the environment can contain descriptions of the agent itself. Importantly, strategic interactions as studied in game theory are decision problems in which a rational agent is predicted by its environment (the other players). In this paper, we develop a theory of rational decision making that does not assume logical omniscience. We consider agents who repeatedly face decision problems (including ones like betting on Goldbach’s conjecture or games...

References
- R. Aumann. Backward Induction and Common Knowledge of Rationality. Games and Economic Behavior, 8:6-19, 1995.
- P. Battigalli, A. Friedenberg. Context-Dependent Forward Induction Reasoning. Working Paper No. 351, IGIER, Università Bocconi, Milan, Italy, August 2009.
- A. Brandenburger, A. Friedenberg. Self-admissible sets. Journal of Economic Theory, 145(2):785-811, 2010.
- J. Halpern. Substantive Rationality and Backward Induction. Games and Economic Behavior, 37:425-435, 2001.
- L. Newman. Descartes' Epistemology. Stanford Encyclopedia of Philosophy, 2005.
- L. Newman, A. Nelson. Circumventing Cartesian Circles. Noûs, 33:370-404, 1999.
- R. Nozick. Philosophical Explanations. Cambridge: Harvard University Press, 1981.
- R. Stalnaker. Belief revision in games: forward and backward induction. Mathematical Social Sciences, 36:31-56, 1998.
- M. Steup. Epistemology. Stanford Encyclopedia of Philosophy, 2005.