Formulate a problem as a game
Describe and compare the basic concepts in game theory
Normal-form game, extensive-form game
Zero-sum game, general-sum game
Pure strategy, mixed strategy, support, best response, dominance
Dominant strategy equilibrium, Nash equilibrium, Stackelberg equilibrium
Describe iterative removal algorithm
Compute equilibria for bimatrix games
Pure strategy Nash equilibrium
Mixed strategy Nash equilibrium
Stackelberg equilibrium
Understand the voting model
Find the winner under the following voting rules:
Plurality
Borda count
Plurality with runoff
Single Transferable Vote
Describe the following concepts, axioms, and properties of voting rules
Pairwise election, Condorcet winner
Majority consistency, Condorcet consistency, Strategy-proof
Dictatorial, constant, onto
Understand the possibility of satisfying multiple properties
Describe the greedy algorithm for voting rule manipulation
A game is any set of circumstances whose outcome depends on the actions of two or more decision-makers (players). There are multiple types of games, as outlined below:
Normal Form: A normal form game consists of the following elements
A set of players
A set of possible actions for each player
Payoff/Utility functions which give the utility for a player given the actions chosen by all players
A Bimatrix Game is a special case with 2 players and finite action sets
In normal form games, players move simultaneously and the game ends immediately after players move.
Extensive Form: In this game type, players can move sequentially, allowing for a game tree structure to represent it. This type of game also allows for representation of incomplete information.
Zero Sum: In this game type, the players' utilities always sum to 0 (or, more generally, to some constant \(c\)), no matter what actions are chosen.
General Sum: The sum of player utilities is not constant and depends on the actions of each player.
Just as games can be characterized, we also want ways to describe the strategy of a player, or what that player would do in any given situation.
Pure Strategy: choose action deterministically
Mixed Strategy: choose an action according to some probability distribution. In this case, a player's expected utility is found by summing over all action profiles, weighting each profile's utility by the probability that it is played (see the formula after this list).
Support: set of actions chosen with non-zero probability
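Concretely, if each player \(j\) independently plays mixed strategy \(s_j\), where \(s_j(a_j)\) is the probability assigned to action \(a_j\), the expected utility of player \(i\) under the profile \(s = (s_1, \dots, s_n)\) is
\[
u_i(s) = \sum_{a \in A} \Big(\prod_{j} s_j(a_j)\Big) u_i(a),
\]
where \(A\) is the set of all action profiles.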
Suppose player \(i\) has two strategies, \(s_i\) and \(s_i'\). We say that \(s_i\) strictly/very weakly/weakly dominates \(s_i'\) if \(s_i\) compares favorably to \(s_i'\) in the corresponding sense below, no matter what strategy the other player plays; \(s_i\) is a strictly/very weakly/weakly dominant strategy if it strictly/very weakly/weakly dominates every other strategy \(s_i'\).
\(s_i\) strictly dominates \(s_i'\) (\(s_i\) is always better than \(s_i'\)): \(u_i(s_i, s_j) > u_i(s_i', s_j) \;\forall s_j\)
\(s_i\) very weakly dominates \(s_i'\) (\(s_i\) is never worse than \(s_i'\)): \(u_i(s_i, s_j) \geq u_i(s_i', s_j) \;\forall s_j\)
\(s_i\) weakly dominates \(s_i'\) (\(s_i\) is never worse, and sometimes better, than \(s_i'\)): \(u_i(s_i, s_j) \geq u_i(s_i', s_j) \;\forall s_j\), and \(\exists s_j,\ u_i(s_i, s_j) > u_i(s_i', s_j)\)
\(S_i\) denotes the set of all possible strategies for player \(i\). Note that if \(s_i\) strictly dominates \(s_i'\) for all \(s_i' \in S_i \setminus \{s_i\}\) (i.e., it strictly dominates every other strategy), then \(s_i\) is a best response to \(s_{-i}\) for every \(s_{-i}\). In other words, \(s_i\) always yields the highest utility, so it is a best response.
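As a quick illustration, here is a minimal sketch (using a hypothetical payoff matrix) that checks strict and weak dominance between two pure strategies of the row player in a bimatrix game:

```python
import numpy as np

def strictly_dominates(U, s, s_prime):
    """True if row strategy s strictly dominates s_prime, given the row
    player's payoff matrix U (rows = own actions, columns = opponent actions)."""
    return bool(np.all(U[s] > U[s_prime]))

def weakly_dominates(U, s, s_prime):
    """True if s is never worse than s_prime and sometimes strictly better."""
    return bool(np.all(U[s] >= U[s_prime]) and np.any(U[s] > U[s_prime]))

# Hypothetical payoff matrix: 3 actions for the row player, 2 for the opponent.
U1 = np.array([[3, 1],
               [2, 0],
               [1, 0]])
print(strictly_dominates(U1, 0, 2))  # True: row 0 beats row 2 against every column
print(weakly_dominates(U1, 1, 2))    # True: row 1 is never worse, sometimes better
```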
A solution concept is a formal rule for predicting how a game will be played: it tells us how a single player should play and what outcome we should expect from all players together. We will discuss the following solution concepts:
Dominant Strategy Equilibrium
Nash Equilibrium
Minimax Strategy
Maximin Strategy
Stackelberg Equilibrium
A dominant strategy equilibrium is reached when every player plays a dominant strategy. For example, in the prisoner's dilemma covered in lecture, the payoff matrix shows that defecting is a dominant strategy for each player, so both players defecting is the dominant strategy equilibrium.
In a Nash Equilibrium, every player’s strategy is a best response to the other players’ strategy profile (their combination of strategies). In other words, one cannot gain by unilateral deviation.
In this class, we will focus on
Pure Strategy Nash Equilibrium (PSNE): \(a_i \in BR(\mathbf{a}_{-i}), \forall i\)
Mixed Strategy Nash Equilibrium (MSNE): At least one player uses a randomized strategy; \(s_i \in BR(\mathbf{s}_{-i}), \forall i\)
A Nash Equilibrium (possibly in mixed strategies) always exists in finite games.
Enumerate all action profiles. For each action profile, check whether it is a NE
For each player, check the player's other available actions to see whether that player would want to deviate (a brute-force sketch follows below)
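A minimal brute-force sketch of this enumeration for a bimatrix game (the payoff matrices below are standard prisoner's dilemma values, used only as an illustration):

```python
import itertools
import numpy as np

def pure_nash_equilibria(U1, U2):
    """Enumerate all action profiles and keep those where neither player can
    gain by a unilateral deviation. U1, U2: payoff matrices of players 1 and 2."""
    n_rows, n_cols = U1.shape
    equilibria = []
    for i, j in itertools.product(range(n_rows), range(n_cols)):
        # Player 1 checks all other rows; player 2 checks all other columns.
        if U1[i, j] >= U1[:, j].max() and U2[i, j] >= U2[i, :].max():
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: action 0 = cooperate, 1 = defect.
U1 = np.array([[-1, -3],
               [ 0, -2]])
U2 = U1.T
print(pure_nash_equilibria(U1, U2))  # [(1, 1)] -> both defect
```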
Recall that a strictly dominated strategy is one that is always worse than some other strategy.
Strictly dominated strategies cannot be part of a Nash Equilibrium. This is the motivating idea behind finding a PSNE through Iterative Removal.
Remove strictly dominated actions (pure strategies) and then find PSNE in the remaining game.
Note that there can be new strictly dominated actions in the remaining game
Repeat the process until no actions can be removed
If the remaining game has only one action for each player, then that is the unique NE of the game and the game is called dominance solvable
It may not necessarily be a dominant strategy equilibrium
Else, if the remaining game has more than one action for some players, find PSNE in the remaining game
Note that the order of removal does not matter
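A sketch of iterative removal for a bimatrix game, considering only domination by pure strategies (the example payoffs are hypothetical, not the matrix from lecture):

```python
import numpy as np

def iterated_removal(U1, U2):
    """Repeatedly remove pure strategies that are strictly dominated by another
    pure strategy. Returns the surviving row and column indices.
    Sketch only: domination by mixed strategies is not considered."""
    rows = list(range(U1.shape[0]))
    cols = list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:   # player 1's strictly dominated rows
            if any(all(U1[r2, c] > U1[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:   # player 2's strictly dominated columns
            if any(all(U2[r, c2] > U2[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# Hypothetical 3x3 game that is dominance solvable.
U1 = np.array([[3, 0, 1], [4, 1, 2], [1, 0, 0]])
U2 = np.array([[2, 3, 1], [1, 2, 0], [0, 1, 0]])
print(iterated_removal(U1, U2))  # ([1], [1]): the unique PSNE is (row 1, column 1)
```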
Example
Suppose we have the following payoff matrix, where player 1 chooses a row from \(\{U, M, D\}\) and player 2 chooses a column from \(\{L, C, R\}\):
Now, we can find a PSNE using iterative removal. We can begin by asking ourselves: Which are the strictly dominated strategies for player 1? How about player 2?
For player two, R is strictly dominated by C. Thus, we can remove the column for R.
For player one, D is strictly dominated by M. Thus, we can remove the row for D.
For player two, L is strictly dominated by C. Thus, we can remove the column for L.
For player one, U is strictly dominated by M. Thus, we can remove the row for U.
Finally, we have found a PSNE, where player 1 always chooses action M and player 2 always chooses action C.
Iterative removal can also be applied when looking for an MSNE, since the removed strategies cannot be part of any NE.
Example
Suppose that, after applying iterative removal, we are left with the following game between Alex and Berry, each choosing Football (F) or Concert (C), with payoffs (Alex, Berry): \((F, F) = (2, 1)\), \((F, C) = (0, 0)\), \((C, F) = (0, 0)\), \((C, C) = (1, 2)\).
Let \(s_A = (p, 1-p)\) and \(s_B = (q, 1-q)\) with \(0 < p, q < 1\). In other words, suppose Alex chooses Football with probability \(p\) and Concert with probability \(1-p\), and Berry chooses Football with probability \(q\) and Concert with probability \(1-q\).
\(s_A\) and \(s_B\) are a NE when
\(u_A(s_A, s_B) \geq u_A((\epsilon, 1 - \epsilon), s_B) ~\forall \epsilon \in [0, 1]\)
\(u_B(s_A, s_B) \geq u_B(s_A, (\epsilon, 1 - \epsilon)) ~\forall \epsilon \in [0, 1]\)
If Alex and Berry have no incentive to deviate (i.e. \(u_A\) does not increase, \(u_B\) does not increase), then we have found an MSNE.
To find our MSNE, we will find \(p\) and \(q\) with \(0 < p, q < 1\).
By definition of Nash equilibrium, if \(s_A \in BR(s_B)\), then all actions in \(s_A\) with non-zero probability are in \(BR(s_B)\). Thus, \(F \in BR(s_B)\) and \(C \in BR(s_B)\). This gives us: \[\begin{align*}
u_A(F, s_B) &= u_A(C, s_B) \\
2 \cdot q + 0 \cdot (1-q) &= 0 \cdot q + 1 \cdot (1-q)
\end{align*}\] So \(2q = 1 - q\), and we get \(q = \frac{1}{3}\).
By definition of Nash equilibrium, if \(s_B \in BR(s_A)\), then all actions in \(s_B\) with non-zero probability are in \(BR(s_A)\). Thus, \(F \in BR(s_A)\) and \(C \in BR(s_A)\). This gives us: \[\begin{align*}
u_B(s_A, F) &= u_B(s_A, C) \\
1 \cdot p + 0 \cdot (1-p) &= 0 \cdot p + 2 \cdot (1-p)
\end{align*}\] So \(p = 2(1-p)\), and we get \(p = \frac{2}{3}\).
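The same indifference conditions can be solved in closed form for any 2x2 game with a fully mixed equilibrium; here is a small sketch (assuming such an interior equilibrium exists) applied to the Alex/Berry payoffs above:

```python
import numpy as np

def mixed_equilibrium_2x2(UA, UB):
    """Solve the indifference conditions of a 2x2 bimatrix game.
    Returns (p, q): p = row player's probability on action 0,
    q = column player's probability on action 0.
    Sketch only: assumes a fully mixed (interior) equilibrium exists."""
    # q makes the row player indifferent between her two rows:
    # UA[0,0]q + UA[0,1](1-q) = UA[1,0]q + UA[1,1](1-q)
    q = (UA[1, 1] - UA[0, 1]) / (UA[0, 0] - UA[0, 1] - UA[1, 0] + UA[1, 1])
    # p makes the column player indifferent between her two columns:
    p = (UB[1, 1] - UB[1, 0]) / (UB[0, 0] - UB[1, 0] - UB[0, 1] + UB[1, 1])
    return p, q

# Alex/Berry game: rows and columns ordered (Football, Concert).
UA = np.array([[2, 0],
               [0, 1]])   # Alex's payoffs
UB = np.array([[1, 0],
               [0, 2]])   # Berry's payoffs
print(mixed_equilibrium_2x2(UA, UB))  # (0.666..., 0.333...) -> p = 2/3, q = 1/3
```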
Both Minimax and Maximin strategies focus on a single player’s strategy.
Minimax Strategy: minimize the opponent's best-case (maximum) expected utility (the goal is to harm the opponent)
Maximin Strategy: maximize your own worst-case (minimum) expected utility
Minimax Theorem: Minimax = Maximin = NE in 2-player zero-sum games (all NEs lead to the same utility profile in a 2-player zero-sum game)
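For a concrete way to compute a maximin (equivalently, minimax / Nash) strategy in a 2-player zero-sum game, here is a linear-programming sketch using scipy.optimize.linprog (the matching-pennies matrix at the end is just an illustrative example):

```python
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(A):
    """Maximin strategy for the row player of a 2-player zero-sum game with
    row-player payoff matrix A, computed via linear programming.
    Returns (mixed strategy x, game value v). Sketch only."""
    m, n = A.shape
    # Variables: z = (x_1, ..., x_m, v). Maximize v  <=>  minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every opponent column j:  v - sum_i x_i * A[i, j] <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to 1.
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: value 0, uniform mixing.
A = np.array([[1, -1],
              [-1, 1]])
print(maximin_strategy(A))  # approximately ([0.5, 0.5], 0.0)
```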
In a Stackelberg game, the leader commits to a strategy first, and the follower responds after observing the leader's strategy.
In a Stackelberg Equilibrium
the follower best responds to the leader's strategy
the leader commits to the strategy that maximizes the leader's utility, assuming the follower best responds
in a Strong Stackelberg Equilibrium (SSE), the follower's ties are broken in favor of the leader
if the leader can only commit to a pure strategy, or if the leader's equilibrium strategy is known to be pure, then the equilibrium can be found by enumerating the leader's pure strategies (see the sketch below)
in general, the leader can commit to a mixed strategy, and \(u^{SSE} \geq u^{NE}\) (first-mover advantage)
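A sketch of that enumeration for the pure-commitment case, with strong-Stackelberg tie-breaking (the payoff matrices are hypothetical):

```python
import numpy as np

def pure_stackelberg(UL, UF):
    """Stackelberg equilibrium when the leader commits to a pure strategy.
    UL, UF: payoff matrices of the leader (row player) and follower (column player).
    For each leader action the follower best responds; ties are broken in favor
    of the leader (strong Stackelberg tie-breaking). Returns
    (leader action, follower action, leader utility). Sketch only."""
    best = None
    for i in range(UL.shape[0]):
        follower_best = np.flatnonzero(UF[i] == UF[i].max())  # follower's best responses
        j = max(follower_best, key=lambda col: UL[i, col])    # tie-break for the leader
        if best is None or UL[i, j] > best[2]:
            best = (i, j, UL[i, j])
    return best

# Hypothetical game: committing to row 1 gives the leader utility 3.
UL = np.array([[2, 1],
               [3, 0]])
UF = np.array([[1, 0],
               [1, 1]])
print(pure_stackelberg(UL, UF))  # (1, 0, 3)
```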
Social Choice
Social choice is the mathematical theory of aggregating individual preferences. We will talk about social choice primarily in the context of voting.
Voting Model
A voting model consists of:
A set of voters \(\{1, \dots, n\}\)
A set of candidates or alternatives \(A\) where \(|A| = m\)
In this model, each voter has a ranking of the alternatives, and the collection of all voters' rankings is called the preference profile.
Voting Rules
Generally, a voting rule is a function that maps a preference profile to an alternative, which specifies the winner of the election.
Plurality
Plurality is a common and intuitive voting rule in which each voter gives one point to their top alternative, and the alternative with the most points wins the election. One potential issue is that there can be ties between alternatives.
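A minimal sketch of the plurality rule (the profile shown is hypothetical, and ties are not handled):

```python
from collections import Counter

def plurality_winner(profile):
    """Plurality: each voter's top choice gets one point; the most points wins.
    profile: list of rankings, each ordered from most to least preferred.
    Sketch only: ties are not broken."""
    scores = Counter(ranking[0] for ranking in profile)
    return max(scores, key=scores.get)

# Hypothetical profile with 3 voters over alternatives a, b, c.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(plurality_winner(profile))  # "a", with 2 first-place votes
```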
Borda Count
Recall that there are \(m\) alternatives. In this voting scheme, each voter assigns \(m - k\) points to the \(k\)th alternative in their ranking, and the alternative with the most points wins the election.
Example
In the example below, alternative B would win (with 5 points) using Borda Count while alternative A would win with plurality vote:
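A sketch of Borda count (the example profile is hypothetical, chosen so that plurality and Borda disagree; it is not necessarily the profile pictured above):

```python
from collections import defaultdict

def borda_winner(profile):
    """Borda count: with m alternatives, a voter's k-th ranked alternative
    receives m - k points; the highest total wins. Sketch only, no tie-breaking."""
    m = len(profile[0])
    scores = defaultdict(int)
    for ranking in profile:
        for k, alt in enumerate(ranking, start=1):
            scores[alt] += m - k
    return max(scores, key=scores.get)

# Hypothetical 4-voter profile where plurality and Borda disagree.
profile = [["a", "b", "c"], ["a", "b", "c"], ["b", "c", "a"], ["c", "b", "a"]]
print(borda_winner(profile))  # "b" wins Borda with 5 points, while plurality elects "a"
```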
Plurality with Runoff
In this voting rule, the two top alternatives are selected via plurality count. Then, the winner is chosen by a pairwise election between the two top alternatives.
Candidate \(x\) wins over \(y\) in a pairwise election if a majority of voters prefer \(x\) to \(y\). For example, in the profile above, voters 1, 3, and 4 prefer alternative b over c, so b would win that pairwise election.
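A sketch of the pairwise election and plurality-with-runoff rules (the profile layout matches the earlier sketches, and ties when selecting the top two are ignored):

```python
from collections import Counter

def pairwise_winner(profile, x, y):
    """Head-to-head election: x beats y if a majority of voters rank x above y.
    Returns the winner, or None on a tie. Sketch only."""
    x_votes = sum(1 for ranking in profile if ranking.index(x) < ranking.index(y))
    y_votes = len(profile) - x_votes
    if x_votes == y_votes:
        return None
    return x if x_votes > y_votes else y

def plurality_with_runoff(profile):
    """Select the top two alternatives by plurality score, then hold a
    pairwise election between them. Sketch only: ties are not handled."""
    scores = Counter(ranking[0] for ranking in profile)
    top_two = [alt for alt, _ in scores.most_common(2)]
    return pairwise_winner(profile, top_two[0], top_two[1])
```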
Single Transferable Vote (STV)
This is a voting method actually in use in some countries (such as Australia and New Zealand), and even in some US municipalities (such as San Francisco)!
In this voting scheme, there are \(m-1\) rounds. In each round, the alternative with the lowest plurality score is eliminated, and the candidate remaining at the end wins.
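A sketch of STV over ranked profiles (no tie-breaking rule is specified when several alternatives share the lowest score):

```python
from collections import Counter

def stv_winner(profile):
    """Single Transferable Vote: over m - 1 rounds, eliminate the alternative
    with the lowest plurality score; each voter's vote transfers to their
    highest-ranked surviving alternative. Sketch only, no tie-breaking."""
    remaining = set(profile[0])
    while len(remaining) > 1:
        # Plurality score restricted to the surviving alternatives.
        scores = Counter(next(a for a in ranking if a in remaining) for ranking in profile)
        for alt in remaining:
            scores.setdefault(alt, 0)  # alternatives with no first-place votes
        loser = min(remaining, key=lambda alt: scores[alt])
        remaining.remove(loser)
    return remaining.pop()
```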
Social Choice Axioms
These are axioms describing desirable properties in voting rules.
Majority Consistency
A voting rule is majority consistent if, whenever an alternative is ranked first by more than 50% of the voters, that alternative wins. Notably, the Borda Count voting rule is not majority consistent.
Condorcet Consistency
To talk about this property, we will first define Condorcet Winner.
A Condorcet Winner is an alternative that beats every other alternative in a pairwise election. Note that a Condorcet winner does not always exist; in particular, it does not exist when there is a cycle of preferences (e.g., the electorate prefers alternative a to b, b to c, and c to a).
A voting rule is Condorcet Consistent if it picks the Condorcet winner, whenever one exists, as the final winner. Plurality and Borda Count are not Condorcet Consistent.
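A sketch that checks for a Condorcet winner (the cyclic profile below is a hypothetical example with no Condorcet winner):

```python
def condorcet_winner(profile):
    """Return the alternative that beats every other alternative in a pairwise
    election, or None if no Condorcet winner exists. Sketch only."""
    alternatives = profile[0]
    def beats(x, y):
        # x beats y if a strict majority of voters rank x above y.
        wins = sum(1 for ranking in profile if ranking.index(x) < ranking.index(y))
        return wins > len(profile) / 2
    for x in alternatives:
        if all(beats(x, y) for y in alternatives if y != x):
            return x
    return None

# Hypothetical Condorcet cycle: a beats b, b beats c, c beats a.
cycle = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(condorcet_winner(cycle))  # None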
Strategy-Proofness
Under some voting rules, voters can manipulate the outcome in their favor by misreporting their preferences.
Example
For example, in the profile pictured below, voter 3 can change their reported preferences to ensure that their top choice wins.
A voting rule is said to be strategy proof if a voter can never benefit from lying about her preferences, no matter what the other voters do.
Borda count and plurality with \(m \geq 3\) alternatives are not strategy proof. The following is an algorithm that lets the last voter manipulate the result of an election.
Greedy Algorithm for f-Manipulation
Given a voting rule \(f\) and the preference profiles of \(n-1\) voters, the last voter reports a preference order such that a specific alternative \(y\) uniquely wins (no tie-breaking); a sketch of one greedy construction follows below.
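The notes above do not spell out the greedy construction itself, so the following is only a sketch of one standard version for positional scoring rules such as Borda (it may differ in details from the algorithm presented in lecture): rank \(y\) first, then fill each remaining position with any alternative whose resulting score stays strictly below \(y\)'s, and report failure if no such alternative exists.

```python
def greedy_manipulation(score_vector, others_profile, y, alternatives):
    """Greedy manipulation of a positional scoring rule by the last voter.
    score_vector: points per position, e.g. Borda with m = 3 is [2, 1, 0].
    others_profile: the fixed rankings of the other n - 1 voters.
    Tries to build a ranking under which y uniquely wins; returns it, or None.
    A sketch of one common greedy scheme, not necessarily the lecture's exact algorithm."""
    # Scores contributed by the other voters.
    base = {a: 0 for a in alternatives}
    for ranking in others_profile:
        for pos, a in enumerate(ranking):
            base[a] += score_vector[pos]

    ranking = [y]
    y_total = base[y] + score_vector[0]          # the manipulator ranks y first
    remaining = [a for a in alternatives if a != y]
    for pos in range(1, len(alternatives)):      # fill positions 2, ..., m
        placeable = [a for a in remaining if base[a] + score_vector[pos] < y_total]
        if not placeable:
            return None                          # this greedy attempt fails
        a = placeable[0]
        ranking.append(a)
        remaining.remove(a)
    return ranking

# Hypothetical Borda example: two other voters, the manipulator wants "a" to win.
others = [["a", "b", "c"], ["b", "c", "a"]]
print(greedy_manipulation([2, 1, 0], others, "a", ["a", "b", "c"]))  # ['a', 'c', 'b']
```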
Other Properties
Dictatorial: A voting rule is dictatorial if there is a voter who always gets her most preferred alternative
Constant: A voting rule is constant if the same alternative is always chosen (regardless of the stated preferences)
Onto: A voting rule is onto if any alternative can win for some set of stated preferences
Gibbard-Satterthwaite Theorem
If \(m \geq 3\), any voting rule that is strategy proof and onto is dictatorial.
Equivalently, for \(m \geq 3\), any voting rule that is onto and non-dictatorial is manipulable
It is therefore impossible (for \(m \geq 3\)) to have a voting rule that is strategy proof, onto, and non-dictatorial