## Section 3.4 Mixed Strategies: Expected Value Solution

In this section, we will use the idea of expected value to find the equilibrium mixed strategies for repeated two-person zero-sum games.

One of the significant drawbacks of the graphical solution from the previous sections is that it can only solve \(2 \times 2\) matrix games. If each player has 3 options, we would need to graph in three dimensions. Technically this is possible, but rather complicated. If each player has more than 3 options, we are at a complete loss, since we can't graph in four or more dimensions. So we need an alternate way to solve for the mixed strategies. Although we will begin with \(2 \times 2\) games, this method will easily generalize to larger games.

###### Example 3.4.1. Matching Pennies Game.

Consider the game in which each player can choose HEADS (H) or TAILS (T); if the two players match, Player 1 wins; if the two players differ, Player 2 wins. What strategy should each player play?

###### Exercise 3.4.2. Payoff matrix.

Determine the payoff matrix for the Matching Pennies game.

###### Exercise 3.4.3. Pure strategy equilibria.

Explain why the Matching Pennies game has no pure strategy equilibrium point.

###### Exercise 3.4.4. Conjecture a mixed strategy.

Since we know that there is no pure strategy equilibrium point, we need to look for a mixed strategy equilibrium point. Just by looking at the payoff matrix for Matching Pennies, what do you think an ideal strategy for each player would be? Explain your choice.

###### Exercise 3.4.5. Expected value of conjecture.

Suppose both players play your ideal strategy in the Matching Pennies game, what should the expected value of the game be?

We could use our previous graphical method to determine the expected value of the game (you might quickly try this just to verify your prediction). However, as we have noted, a major drawback of the graphical solution is that if our players have 3 (or more) options, then we would need to graph an equation in 3 (or more!) variables, which, I hope you agree, we don't want to do. Although we will continue to focus on \(2 \times 2\) games, we will develop a new method which can more easily be used to solve larger games.

We will need a little notation. Let \(P_1(H)\) and \(P_1(T)\) be the probabilities that Player 1 plays H and T, respectively, and let \(P_2(H)\) and \(P_2(T)\) be the probabilities that Player 2 plays H and T, respectively.

Also, we will let \(E_1(H)\) be the expected value for Player 1 playing pure strategy H against a given strategy for Player 2. Similarly, \(E_2(H)\) will be Player 2's expected value for playing pure strategy H.

###### Exercise 3.4.6. The \((60, 40)\) strategy for Player 2.

Suppose Player 2 plays H 60% of the time and T 40% of the time.

What are \(P_2(H)\) and \(P_2(T)\text{?}\)

What do you think Player 1 should do? Does this differ from your ideal mixed strategy in Exercise 3.4.4? Explain.

We can use expected value to compute what Player 1 should do in response to Player 2's 60/40 strategy. First, consider a pure strategy for Player 1. Compute the expected value for Player 1 if she only plays H while Player 2 plays H with probability .6 and T with probability .4. This expected value is \(E_1(H)\text{,}\) above.

Similarly, compute the expected value for Player 1 if she plays only T (call it \(E_1(T)\)).

Which pure strategy has a higher expected value for Player 1? If Player 1 plays this pure strategy, will she do better than your predicted value of the game?

###### Exercise 3.4.7. The \((60, 40)\) strategy is not ideal for Player 2.

Hopefully, you concluded in Exercise 3.4.6 that a pure strategy is good for Player 1. Explain why this means the 60/40 strategy is bad for Player 2.

###### Exercise 3.4.8. When to play H.

For any particular mixed (or pure) strategy of Player 2, we could find \(E_1(T)\) and \(E_1(H)\text{.}\) Explain why if \(E_1(H) > E_1(T)\text{,}\) Player 1 should always play H.

###### Exercise 3.4.9. When to play T.

Similarly, explain why if \(E_1(H) \lt E_1(T)\text{,}\) Player 1 should always play T.

###### Exercise 3.4.10. Player 2 is unhappy when Player 1's expected values are unequal.

Explain why the situations in Exercise 3.4.8 and Exercise 3.4.9 are bad for Player 2.

###### Exercise 3.4.11. Equal expected values are better.

Use your answers from Exercise 3.4.8, Exercise 3.4.9, and Exercise 3.4.10 to explain why the situation in which \(E_1(H)=E_1(T)\) is the best for Player 2.

From Exercise 3.4.11 we now know that Player 2 wants \(E_1(H)=E_1(T)\text{.}\) We can use a little algebra to compute the best defensive strategy for Player 2. Remember, we assume that Player 1 will always be able to choose the strategy that is best for her, and thus Player 2 wants to protect himself. Let's find the probabilities with which Player 2 should play H and T.

###### Exercise 3.4.12. Equations for Player 1's expected values.

Let \(P_2(H)\) and \(P_2(T)\) be the probabilities that Player 2 plays H and T respectively. Find equations for \(E_1(H)\) and \(E_1(T)\) in terms of \(P_2(H)\) and \(P_2(T)\) for the game of Matching Pennies. Since we want \(E_1(H)=E_1(T)\text{,}\) set your two equations equal to each other. This gives you one equation in terms of \(P_2(H)\) and \(P_2(T)\text{.}\)

###### Exercise 3.4.13. The sum equation.

Explain why we must also have \(P_2(H)+P_2(T)=1\text{.}\)

So in general, to solve for Player 2's strategy, we want to write an equation for each of Player 1's pure strategy expected values in terms of Player 2's probabilities. For example, \(E_1(H)\) and \(E_1(T)\) in terms of variables \(P_2(H)\) and \(P_2(T)\text{.}\) We then set the expected values equal to each other. We now have an equation just in terms of Player 2's probabilities.

In order to solve for the probabilities, we also need to use the fact that Player 2's probabilities sum to 1. For example, \(P_2(H)+P_2(T)=1\text{.}\) For a \(2 \times 2\) game, you now have 2 equations with 2 unknowns (\(P_2(H)\) and \(P_2(T)\)). Use an algebraic method such as substitution or elimination to solve the system of equations.
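The two-equation recipe above can also be sketched in code. Here is a minimal Python sketch for a hypothetical \(2 \times 2\) payoff matrix (the entries \(a, b, c, d\) and the numbers below are made up for illustration, deliberately not one of the games in this section), using exact fractions to avoid rounding:

```python
from fractions import Fraction

def player2_equilibrium(a, b, c, d):
    """Solve E_1(H) = E_1(T) together with P_2(H) + P_2(T) = 1 for the
    2x2 zero-sum game with payoff matrix (payoffs to Player 1):
                 P2: H   T
        P1 H  [   a,  b  ]
        P1 T  [   c,  d  ]
    where E_1(H) = a*P_2(H) + b*P_2(T) and E_1(T) = c*P_2(H) + d*P_2(T).
    """
    # Setting E_1(H) = E_1(T) and substituting P_2(T) = 1 - P_2(H) gives
    #   (a - c)*P_2(H) + (b - d)*(1 - P_2(H)) = 0,
    # so P_2(H) = (d - b) / ((a - c) - (b - d)).
    p2_H = Fraction(d - b, (a - c) - (b - d))
    return p2_H, 1 - p2_H

# A hypothetical payoff matrix (made-up numbers):
#        H    T
#  H [   2,  -3 ]
#  T [  -1,   4 ]
p_H, p_T = player2_equilibrium(2, -3, -1, 4)
print(p_H, p_T)  # prints: 7/10 3/10
```

With these probabilities you can check the indifference directly: \(E_1(H) = 2(7/10) - 3(3/10) = 1/2\) and \(E_1(T) = -1(7/10) + 4(3/10) = 1/2\text{,}\) as desired.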

###### Exercise 3.4.14. Solve for Player 2's probabilities.

Using the equations from Exercise 3.4.12 and Exercise 3.4.13, solve for \(P_2(H)\) and \(P_2(T)\text{.}\) You now have the equilibrium mixed strategy for Player 2. Does this match the mixed strategy you determined in Exercise 3.4.4?

Now can you use a similar process to find Player 1's strategy? Whose expected values should you use? Whose probabilities?

###### Exercise 3.4.15. Find Player 1's probabilities.

Set up and solve the analogous equations from Exercise 3.4.12 and Exercise 3.4.13 for Player 1. Does this match the mixed strategy from Exercise 3.4.4?

We should have an equation for \(E_2(H)\) and one for \(E_2(T)\text{.}\) Since we are looking for the probabilities of each of Player 1's options, the equations should involve \(P_1(H)\) and \(P_1(T)\text{.}\)

We now have a new method for finding the best mixed strategies for Players 1 and 2, assuming that each player is playing defensively against an ideal player. But how can we find the value of the game? For Player 2, we assumed \(E_1(H)=E_1(T)\text{.}\) In other words, we found the situation in which Player 1's expected value is the same no matter which pure strategy she plays. Thus, Player 1 is *indifferent* to which pure strategy she plays. If she were not indifferent, then she would play the strategy with the higher expected value. But, as we saw, this would be bad for Player 2. So Player 1 can assume that Player 2 will play the equilibrium mixed strategy. Thus, as long as Player 1 plays a mixed strategy, it doesn't matter whether, at any given time, she plays H or T (this is what we mean by saying she is indifferent between them). Hence, the expected value of the game for Player 1 is the same as \(E_1(H)\text{,}\) which is the same as \(E_1(T)\text{.}\) Similarly, the expected value of the game for Player 2 is \(E_2(H)\) (or \(E_2(T)\)).

This is a pretty complicated idea, and you may need to think about it for a while. In the meantime, use the probabilities you found for each player and the equations for \(E_1(H)\) and \(E_2(H)\) to find the value of the game for each player.
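The indifference argument can be checked numerically. Here is a short Python sketch on a hypothetical \(2 \times 2\) payoff matrix (made-up numbers, not a game from this section): once Player 2 plays the mixture that equalizes Player 1's expected values, \(E_1(H)\) and \(E_1(T)\) agree, and that common number is the value of the game.

```python
from fractions import Fraction

# Hypothetical payoffs to Player 1 (made-up numbers):
#            P2: H       T
a, b = 2, -3   # Player 1 plays H
c, d = -1, 4   # Player 1 plays T

# Player 2's equilibrium mixture, from solving E_1(H) = E_1(T)
# together with P_2(H) + P_2(T) = 1:
p2_H = Fraction(d - b, (a - c) - (b - d))
p2_T = 1 - p2_H

# With this mixture, Player 1 is indifferent between her pure strategies:
E1_H = a * p2_H + b * p2_T
E1_T = c * p2_H + d * p2_T
assert E1_H == E1_T      # both pure strategies pay the same

print(E1_H)              # the value of the game for Player 1: 1/2
```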

###### Exercise 3.4.16. Find Player 1's expected value of the game.

Use the probabilities you calculated in Exercise 3.4.14 to find \(E_1(H)\text{,}\) and hence the expected value of the game for Player 1. How does this compare to your prediction for the value of the game that you gave in Exercise 3.4.5?

###### Exercise 3.4.17. Find Player 2's expected value of the game.

Use the probabilities you calculated in Exercise 3.4.15 to find \(E_2(H)\text{,}\) and hence the expected value of the game for Player 2. How does this compare to your prediction for the value of the game that you gave in Exercise 3.4.5?

Keep practicing with the expected value method on some other games!

###### Exercise 3.4.18. Solve a \(2\times 2\) repeated game using expected values.

Apply this method of using expected value to Example 3.1.1 (which we solved using the graphical method in the previous section) to find the equilibrium mixed strategies for each player and the value of the game for each player.

###### Exercise 3.4.19. Expected value solution for Rock-Paper-Scissors.

As we noted in this section, this method can be used to solve bigger games. We just have more equations. Use the expected value method to find the equilibrium mixed strategy for Rock-Paper-Scissors for Player 2.

You will need to set \(E_1(R)=E_1(P)\) and \(E_1(P)=E_1(S)\text{;}\) solve for \(P_2(R), P_2(P), P_2(S)\text{;}\) etc. It should be very similar to what you did for Matching Pennies, but there will be three equations and three unknowns.
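The same bookkeeping works for any \(3 \times 3\) game: two indifference equations plus the sum-to-one equation give three equations in three unknowns. Below is a small Python sketch that solves such a system exactly for a hypothetical \(3 \times 3\) payoff matrix (made-up numbers, deliberately not Rock-Paper-Scissors, so that exercise is left intact); it assumes the system has a unique solution.

```python
from fractions import Fraction

def solve_linear(A, rhs):
    """Gauss-Jordan elimination with exact Fraction arithmetic.
    Assumes the n x n system A x = rhs has a unique solution."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(r)] for row, r in zip(A, rhs)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        pv = M[col][col]
        M[col] = [x / pv for x in M[col]]
        # Clear this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Hypothetical payoffs to Player 1; rows are Player 1's options X, Y, Z,
# columns are Player 2's options X, Y, Z (made-up numbers):
A = [[ 0,  1, -2],
     [-1,  0,  3],
     [ 2, -3,  0]]

# Unknowns: P_2(X), P_2(Y), P_2(Z).
# Equation 1: E_1(X) - E_1(Y) = 0; equation 2: E_1(Y) - E_1(Z) = 0;
# equation 3: the probabilities sum to 1.
eqs = [
    [A[0][j] - A[1][j] for j in range(3)],
    [A[1][j] - A[2][j] for j in range(3)],
    [1, 1, 1],
]
probs = solve_linear(eqs, [0, 0, 1])
print([str(p) for p in probs])  # ['1/2', '1/3', '1/6']
```

You can verify the answer the same way as in the \(2 \times 2\) case: against this mixture, all three of Player 1's pure strategies have the same expected value.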

If you found this last exercise to be algebraically arduous, don't worry, we will eventually use technology to help us solve problems with several variables.