## Section 3.1 Introduction to Repeated Games

Now that we are experts at finding equilibrium pairs, what happens when a game doesn't have any equilibrium pairs? What should our players do?

###### Example 3.1.1. A \(2\times 2\) Repeated Game.

Consider the following zero-sum game.

Does this game have an equilibrium pair? Play this game with an opponent 10 times. Tally your wins and losses. Describe how you chose which strategy to play. Describe how your opponent chose which strategy to play.

When playing the game several times, does it make sense for either player to play the same strategy all the time? Why or why not?

Although we use the term “strategy” to mean which row (or column) a player chooses to play, we will also refer to how a player plays a repeated game as the player's strategy. In order to avoid confusion, in repeated games we will define some specific strategies.

###### Definition 3.1.2.

In a repeated game, if a player always plays the same row (or column), we say that she is playing a pure strategy.

For example, if Player 1 always plays Row A, we say she is playing *pure strategy A*.

###### Definition 3.1.3.

If a player varies which row (or column) he plays, then we say he is playing a mixed strategy.

For example, if a player plays Row A 40% of the time and Row B 60% of the time, we will say he is playing a (.4, .6) strategy, as we generally use the probability rather than the percent. The probabilities of each strategy will be listed in the same order as the strategies in the matrix.
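A mixed strategy like (.4, .6) is easy to simulate on a computer: draw each play at random with the given probabilities. Here is a minimal sketch in Python (the row labels A and B are just the names used above):

```python
import random
from collections import Counter

# Simulate 1000 plays of a (.4, .6) mixed strategy over rows A and B.
plays = random.choices(["A", "B"], weights=[0.4, 0.6], k=1000)

# Count how often each row was actually played; the counts should be
# roughly 400 and 600, though rarely exactly so.
counts = Counter(plays)
print(counts["A"], counts["B"])
```

Notice that the probabilities describe long-run frequencies, not a fixed pattern: each individual play is still unpredictable.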

It is not enough just to determine how often to play a strategy. Suppose Player 1 just alternates rows in Example 3.1.1. Can Player 2 “out-guess” Player 1? What might be a better way for Player 1 to play?

We'd really like to find a way to determine the best mixed strategy for each player in repeated games. Let's start with what we already know: games with equilibrium points. If a game has an equilibrium pair, would a player want to play a mixed strategy? Recall that a strategy pair is an equilibrium pair if neither player gains by switching strategy.

###### Example 3.1.4. Repeating a Game with an Equilibrium.

Consider the following zero-sum game.

This game has an equilibrium pair. Convince yourself that if this game is played repeatedly, each player should choose to play a pure strategy.

Thus, if the game has an equilibrium we know that players will play the pure strategies determined by the equilibrium pairs. So let's get back to thinking about games without equilibrium pairs. If we play such a game once, can we predict the outcome? What if we repeat the game several times: can we predict the outcome then? Think about tossing a coin. If you toss it once, can you predict the outcome? What if you toss it 100 times? Not exactly, but we can say what we *expect*: if we toss a coin 100 times, we expect about half of the tosses to turn up heads and half to turn up tails. This may not be the *actual* outcome, but it is a reasonable prediction. Now is a good time to remind yourself about finding the *expected value*!
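The coin-toss idea can be checked with a quick simulation (a sketch, not one of the text's exercises):

```python
import random

# Toss a fair coin 100 times and count the heads.
tosses = [random.choice(["H", "T"]) for _ in range(100)]
heads = tosses.count("H")

# The expected number of heads is 100 * (1/2) = 50. The actual count
# will usually be close to 50, but not exactly 50.
print(heads)
```

Running this a few times shows the point: the outcome varies, but the expected value is a reasonable prediction of the long-run behavior.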

Recall the familiar game *Rock-Paper-Scissors*: ROCK beats SCISSORS, SCISSORS beats PAPER, and PAPER beats ROCK. Using the payoff matrix and experimentation, we will try to determine the best strategy for this game.

###### Exercise 3.1.5. RPS payoff matrix.

Construct a game matrix for Rock-Paper-Scissors.

###### Exercise 3.1.6. RPS and equilibrium points.

Is Rock-Paper-Scissors a zero-sum game? Does it have an equilibrium point? Explain.

###### Exercise 3.1.7. Play RPS.

We want to look at what happens if we repeat Rock-Paper-Scissors. Play the game ten times with an opponent. Record the results (list strategy pairs and payoffs for each player).

###### Exercise 3.1.8. Conjecture a strategy.

Describe any strategy you used in Exercise 3.1.7.

###### Exercise 3.1.9. Strengths and weaknesses of your strategy.

Reflect on your chosen strategy. Does it guarantee you a “win”? What should it mean to “win” in a repeated game? What are the strengths and weaknesses of your strategy?

###### Exercise 3.1.10. Share your strategy.

Although you may have come up with a good strategy, let's see if we can't decide what the “best” strategy should be for Rock-Paper-Scissors. Let's assume we are playing Rock-Paper-Scissors against the smartest player to ever live. We will call such an opponent the “perfect” player.

###### Exercise 3.1.11. The weakness of a pure strategy.

Explain why it is not a good idea to play a pure strategy; i.e., to play only ROCK, only PAPER, or only SCISSORS.

###### Exercise 3.1.12. An uneven strategy.

Does it make sense to play one option more often than another (for example, ROCK more often than PAPER)? Explain.

###### Exercise 3.1.13. Frequency of R, P, S.

How often should you play each option?

###### Exercise 3.1.14. Playing a pattern.

Do you want to play in a predictable pattern or randomly? What are some advantages and disadvantages of a pattern? What are some advantages and disadvantages of a random strategy?

Hopefully, you concluded that the best strategy against our perfect player would be to play ROCK, PAPER, SCISSORS 1/3 of the time each, and to play randomly. We can say that our strategy is to play each option randomly with a probability of 1/3, and call this the Random(1/3, 1/3, 1/3) strategy.

###### Exercise 3.1.15. Long-term payoff.

Using this “best” strategy, what do you predict the long term payoff will be for Player 1? For Player 2?

###### Exercise 3.1.16. Testing the Random(1/3, 1/3, 1/3) strategy.

Let's check our prediction. Using a die, let 1 and 2 represent ROCK, 3 and 4 represent PAPER, and 5 and 6 represent SCISSORS. Play the game 20 times with someone in class where each player rolls to determine the choice of ROCK, PAPER, or SCISSORS. Keep track of the strategy pairs and payoffs. What was the total payoff for each player? (At this point, if you still feel that you have a better strategy, try your strategy against the random one and see what happens!)
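If no die or opponent is handy, the same experiment can be run on a computer. Here is one way to do it, assuming the usual Rock-Paper-Scissors payoffs of +1 for a win, -1 for a loss, and 0 for a tie:

```python
import random

# Map die faces to options: 1, 2 -> ROCK; 3, 4 -> PAPER; 5, 6 -> SCISSORS.
def roll_choice():
    face = random.randint(1, 6)
    return {1: "R", 2: "R", 3: "P", 4: "P", 5: "S", 6: "S"}[face]

# Payoff to Player 1 for each strategy pair (the game is zero-sum,
# so Player 2's payoff is the negative).
PAYOFF = {("R", "S"): 1, ("S", "P"): 1, ("P", "R"): 1,
          ("S", "R"): -1, ("P", "S"): -1, ("R", "P"): -1,
          ("R", "R"): 0, ("P", "P"): 0, ("S", "S"): 0}

# Play 20 games, both players rolling the die, and tally Player 1's payoff.
total = 0
for _ in range(20):
    pair = (roll_choice(), roll_choice())
    total += PAYOFF[pair]

print(total)  # Player 1's total payoff; Player 2's total is -total
```

Over just 20 games the total can wander noticeably away from 0; try raising the number of games and watch the average payoff per game settle down.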

###### Exercise 3.1.17. Compare to your prediction.

How did the actual outcome compare to your predicted outcome? What do you expect would happen if you play the game 100 times? (Or more?)

Using ideas about probability and expected value, we can answer Exercise 3.1.17 more precisely.

###### Exercise 3.1.18. Probabilities when both players play the random strategy.

Assume both players are using the Random(1/3, 1/3, 1/3) strategy. List all the possible outcomes for a single game (recall the outcome is the strategy pair and the payoff, for example [R, P], \((-1, 1)\)). What is the probability that any particular pair of strategies will be played? Are the strategy pairs equally likely?

###### Exercise 3.1.19. Expected value.

Using the probabilities and payoffs from Exercise 3.1.18 calculate the expected value of the game for each player.
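As a check on your computation, recall that the expected value is the sum, over all nine strategy pairs, of the payoff times the probability of that pair. A sketch, again assuming the standard +1/-1/0 payoffs and the Random(1/3, 1/3, 1/3) strategy for both players:

```python
from fractions import Fraction

# Payoff to Player 1 for each strategy pair; Player 2's payoff is the negative.
PAYOFF = {("R", "S"): 1, ("S", "P"): 1, ("P", "R"): 1,
          ("S", "R"): -1, ("P", "S"): -1, ("R", "P"): -1,
          ("R", "R"): 0, ("P", "P"): 0, ("S", "S"): 0}

# With both players using Random(1/3, 1/3, 1/3), each of the nine
# strategy pairs occurs with probability (1/3)(1/3) = 1/9.
p = Fraction(1, 9)
expected = sum(p * payoff for payoff in PAYOFF.values())

print(expected)  # 0 -- in the long run, neither player expects to gain
```

Using exact fractions instead of floating-point numbers keeps the arithmetic exact, which matters when you want to verify that the expected value is precisely zero.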

###### Exercise 3.1.20. Strategy for the repeated \(2\times 2\) game.

Now consider the matrix from Example 3.1.1 above:

See if you can determine how often Player 1 should play each row, and how often Player 2 should play each column. Try testing your proposed strategy (you may be able to use a variation on the dice method from Exercise 3.1.16). Write up any conjectured strategies and the results from playing the game with your strategy. Do you think you have come up with the best strategy? Explain.
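One way to experiment is to simulate a conjectured mixed strategy for each player, just as we did with the die. Since the matrix of Example 3.1.1 is not reproduced here, the matrix and the (.5, .5) strategies below are placeholders; substitute the actual payoffs and your own conjectured probabilities:

```python
import random

# Hypothetical 2x2 zero-sum payoff matrix (payoffs to Player 1);
# replace these entries with the matrix from Example 3.1.1.
MATRIX = [[1, -1],
          [-2, 2]]

def simulate(p1_probs, p2_probs, n=10000):
    """Estimate Player 1's average payoff per game when both players
    independently play fixed mixed strategies over rows/columns 0 and 1."""
    total = 0
    for _ in range(n):
        row = random.choices([0, 1], weights=p1_probs)[0]
        col = random.choices([0, 1], weights=p2_probs)[0]
        total += MATRIX[row][col]
    return total / n

# Conjectured strategies (placeholders): each player mixes 50-50.
print(simulate([0.5, 0.5], [0.5, 0.5]))
```

Trying several different probability pairs and comparing the estimated average payoffs is exactly the kind of "guess and check" exploration this exercise invites; the rest of the chapter develops a way to find the best strategy directly.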

You may have had an idea about the best way to play Rock-Paper-Scissors before working through this section, but how can we find solutions to other games, such as the one in Exercise 3.1.20? We don't want to rely on a “guess and check” method, especially since there are infinitely many possible mixed strategies to try! The rest of the chapter will develop mathematical methods for solving repeated games with no equilibrium point.