Notes to Epistemic Foundations of Game Theory
1. We are bracketing cases where the players can flip a coin or, more generally, randomize between a number of strategies.
2. Not all choice rules presuppose these representations of preferences and beliefs. Minmax, for instance, makes recommendations or predictions in cases where decision makers have no probabilistic beliefs about the states of the environment.
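To illustrate, on one standard formulation of such a rule (the notation \(A\) for the available actions, \(S\) for the states, and \(u\) for the utility function is generic, not notation from this entry), the recommendation is to choose an action that maximizes the worst-case payoff:
\[a^{*}\in \operatorname{argmax}_{a\in A}\ \min_{s\in S}\, u(a,s).\]
This requires only the utility function and the set of states, not a probability distribution over \(S\).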
3. This does not mean that the player will know exactly what the other players will do in the game. There may be more than one “rational choice” or the other players may randomize.
4. A variant of this problem is the well-known sleeping beauty problem, which has been extensively discussed in the philosophy literature. Of course, much of that discussion is relevant here; however, the issues surrounding the sleeping beauty problem are typically framed differently than they are in this section. See Titelbaum (2013) for a survey and pointers to the relevant literature.
5. Recall that I am restricting attention to finite strategic games.
6. A strategy profile is a sequence of actions, one for each player.
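For example, in a hypothetical two-player game (used only for illustration) in which player 1 chooses between \(u\) and \(d\) and player 2 chooses between \(l\) and \(r\), the strategy profiles are \((u,l)\), \((u,r)\), \((d,l)\), and \((d,r)\).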
7. A partition of \(W\) is a pairwise disjoint collection of subsets of \(W\) whose union is all of \(W\). Elements of a partition \(\Pi\) on \(W\) are called cells, and for \(w\in W\), let \(\Pi(w)\) denote the cell of \(\Pi\) containing \(w\).
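For instance, in the toy case \(W=\{w_1,w_2,w_3\}\), the collection \(\Pi=\{\{w_1,w_2\},\{w_3\}\}\) is a partition of \(W\), with cells \(\Pi(w_1)=\Pi(w_2)=\{w_1,w_2\}\) and \(\Pi(w_3)=\{w_3\}\).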
8. Given an equivalence relation \(\sim_i\) on \(W\), the collection
\[\Pi_i=\{[w]_i\mid w\in W\}\]
is a partition, where \([w]_i=\{v\in W\mid w\sim_i v\}\) is the equivalence class of \(w\). Furthermore, given any partition \(\Pi_i\) on \(W\),
\[\sim_i=\{(w,v)\mid v\in \Pi_i(w)\}\]
is an equivalence relation with \([w]_i=\Pi_i(w)\).
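Continuing the toy case above: if \(\sim_i\) is the smallest equivalence relation on \(W=\{w_1,w_2,w_3\}\) with \(w_1\sim_i w_2\), then \(\Pi_i=\{\{w_1,w_2\},\{w_3\}\}\), and applying the converse construction to this partition recovers \(\sim_i\).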
9. Well-foundedness is needed only to ensure that \(Min_{\preceq_i}(X)\) is nonempty for any nonempty set \(X\). This is important only when \(W\) is infinite.
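For example (a toy case), if \(W=\{w_1,w_2,w_3,\dots\}\) is infinite and \(w_{n+1}\prec_i w_n\) for every \(n\), then \(W\) has no \(\preceq_i\)-minimal element, so \(Min_{\preceq_i}(W)=\emptyset\); well-foundedness rules out such infinite descending chains.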
10. This is only one of many possible choices here, but it is the most natural in this setting (cf. Liu 2011).
11. Some care needs to be taken when \(W\) is infinite, but these technical issues are not important for us at this point, so we restrict attention to finite sets of states.
12. The weighted component of maximization of expected utility makes this choice rule difficult to capture in relational structures or plausibility models. In this entry, maximization of expected utility always refers to type spaces or epistemic-probability models.
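For concreteness, on one common formulation (the notation here is generic and not taken from this entry), player \(i\)'s expected utility of a strategy \(s_i\) is
\[EU_i(s_i)=\sum_{s_{-i}\in S_{-i}} p_i(s_{-i})\, u_i(s_i,s_{-i}),\]
where \(p_i\) is \(i\)'s probability over the opponents' strategy profiles; it is the weights \(p_i(s_{-i})\) that relational and plausibility models do not directly represent.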
13. The uniqueness of the payoffs at each outcome is only needed to ensure that there is a unique backward induction solution.
14. The models used by Samuelson differ from the ones presented in Section 2. In his model, each state is assigned a set of actions for each agent (rather than a single action). This formal detail is important for Samuelson’s main results, but is not crucial for the main point we are making here.
15. Recall the well-known distinction between “picking” and “choosing” from the seminal paper by Edna Ullmann-Margalit and Sidney Morgenbesser (1977).
16. Wlodek Rabinowicz (1992) takes this idea even further and argues that, by the principle of indifference, players must assign equal probability to all choice-worthy options.
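For instance, if exactly three options are choice-worthy, this line of argument would have probability \(1/3\) assigned to each of them and probability \(0\) to all other options.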
17. This same analysis applies to the other models discussed in Section 2.
18. The reasoning is that if there had been a human intruder, then the dog would have barked, and if a dog had intruded, then the cat would have howled.
19. Of course, one could move to different classes of models where monotonicity does not hold, for instance, neighborhood models.
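Here, monotonicity is the standard closure property (stated as a reminder, not a quotation from the entry): if a player believes (or knows) an event \(E\) and \(E\subseteq F\), then the player believes (or knows) \(F\). In neighborhood models, this closure under supersets need not hold.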