High-Stakes Failures of Backward Induction
In our paper “High-Stakes Failures of Backward Induction”, we examine high-stakes strategic choice using more than 40 (!) years of data from the American TV game show The Price Is Right.
Many economic interactions are sequential in nature. Such situations can be modeled as sequential games of perfect information, for which the subgame perfect Nash equilibria (SPNE) can be found through backward induction.
Tests of equilibrium play typically rely on lab experiments in which the relevant factors are tightly controlled. Experimental work generally finds that people deviate from the optimal strategies, casting doubt on the descriptive validity of backward induction as a solution concept.
The generalizability of experimental findings, however, is subject to debate. Critics argue that it is not surprising that experimental subjects frequently fail to play optimal strategies because they are not well incentivized and have little experience with the task.
In the present paper, we examine the optimality of strategic decisions in the Showcase Showdown (SCSD), a finite sequential game of perfect information that is played twice in every episode of the long-running American television game show The Price Is Right.
In this game, three contestants take turns to spin a wheel that contains all multiples of 5 in the range 5–100. The aim is to get closest to 100 without going over. The contestant who does so wins the game and proceeds to the final, where they can win a set of prizes worth tens of thousands of dollars. In addition, they win one or two monetary bonus prizes if their score is exactly 100.
So where is the strategic component? Immediately after spinning the wheel once, a contestant must decide whether to spin the wheel a second time. Their score is the outcome of their first spin if they spin only once, and the sum of the two spin outcomes if they spin twice.
To make the optimal choice, the contestants thus need to weigh the possibility of obtaining a more competitive score and a shot at the bonus prizes against the chance of self-elimination.
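For concreteness: because the wheel carries the twenty multiples of 5 from 5 to 100, a contestant whose first spin equals s busts on a second spin with probability exactly s/100. A quick check (a sketch of the game's mechanics, not code from the paper):

```python
from fractions import Fraction

WHEEL = [5 * k for k in range(1, 21)]  # the twenty wheel outcomes: 5, 10, ..., 100

def bust_probability(first_spin):
    """Chance that a second spin pushes the total past 100."""
    busting = [v for v in WHEEL if first_spin + v > 100]
    return Fraction(len(busting), len(WHEEL))

# A first spin of 65 leaves 13 of the 20 outcomes over the limit: 13/20 = 65%.
print(bust_probability(65))  # 13/20
```

The bust chance thus rises linearly from 5% to 100% in the first-spin outcome, which is what makes the stop-or-spin tradeoff so stark at middling scores.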
Using backward induction, we derive the unique subgame perfect equilibrium strategies, which are given below (for brevity, we leave out the relatively rare case of ties).
To investigate whether people play according to these optimal strategies, we obtain a large sample of 10,071 renditions of the SCSD from 5,235 episodes that aired between 1979 and 2021 (source: tpireguide.com).
For Contestants 2 and 3, decisions are trivial when their first-spin outcome is lower than the best preceding score (spin again!). For Contestant 3, who spins last, the decision is also trivial when their first-spin outcome is higher than the best preceding score (stop!).
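The non-trivial stop-or-spin decisions hinge on the first backward-induction step: how likely is Contestant 3, playing the trivial last-mover strategy, to beat a given standing score? A minimal sketch (our own illustration, not the paper's code; ties are ignored, as in the text):

```python
from fractions import Fraction

WHEEL = [5 * k for k in range(1, 21)]   # wheel outcomes: 5, 10, ..., 100
P = Fraction(1, len(WHEEL))             # each outcome equally likely

def p_third_beats(target):
    """Chance that Contestant 3, spinning again iff the first spin does not
    beat the target, ends strictly above `target` without going over 100."""
    win = Fraction(0)
    for a in WHEEL:
        if a > target:
            win += P                    # stop with a winning score
        else:
            # forced second spin: wins iff the total lands in (target, 100]
            win += P * sum(P for b in WHEEL if target < a + b <= 100)
    return win

# Against a standing score of 50, the last mover wins with probability 3/4
# (ignoring ties), so a score of 50 is not very safe to stop on.
print(p_third_beats(50))  # 3/4
```

Working such win probabilities backwards through Contestant 2's and then Contestant 1's decisions is, in essence, how the equilibrium thresholds are derived.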
We therefore omit these decisions from our empirical analysis and focus exclusively on the 10,071 decisions of Contestant 1 and the remaining 4,488 decisions of Contestant 2.
So how do they do?
We find that Contestants 1 and 2 frequently make suboptimal decisions. Contestant 1 almost exclusively errs by underspinning: they stop when it is optimal to spin. Contestant 2’s mistakes, by contrast, are more symmetric.
How can we account for these deviations? We first examine whether contestants depart from the equilibrium strategy because they make random errors and expect others to do so as well. To test this explanation, we estimate an agent quantal response equilibrium model.
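For intuition, the logit response rule at the heart of such models can be sketched as follows (illustrative payoff and precision values; in a full agent quantal response equilibrium, the expected values are themselves computed under the assumption that every agent responds with the same noise, and the whole system is solved as a fixed point):

```python
import math

def logit_spin_probability(ev_spin, ev_stop, lam):
    """Logit (quantal) response for the binary spin/stop choice: the chance
    of spinning increases smoothly in the expected-value difference, with
    precision `lam` (lam -> infinity recovers best response; lam = 0 gives
    a 50/50 coin flip)."""
    return 1.0 / (1.0 + math.exp(-lam * (ev_spin - ev_stop)))

print(logit_spin_probability(0.6, 0.4, 0.0))   # 0.5: pure noise
print(logit_spin_probability(0.6, 0.4, 50.0))  # ~0.99995: near-best response
```

Under this rule, mistakes are more likely when the two options are close in expected value, which is exactly the pattern random-error explanations predict.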
When we compare the actual spinning rates (black) with the model’s predictions (grey), we find that the decisions of Contestant 2 are largely consistent with the model. However, the model fails to capture most of the underspinning of Contestant 1.
Next, we consider the possible role of omission bias, the tendency to favor harmful inactions over harmful actions. Tenorio and Cason propose that underspinning in the SCSD can be explained by a preference for elimination after not spinning over elimination after spinning.
We find that allowing for omission bias in the AQRE model improves the fit for Contestant 1, but at the same time introduces large systematic prediction errors for Contestant 2. Hence, omission bias fails to adequately explain the behavior that we observe.
Another possible explanation is that some contestants do not properly backward induct, and instead adopt a simplified representation of the game. Prior research suggests that people have limited foresight and look only one or a few steps ahead in multi-stage strategic situations.
Following this literature, we adjust our baseline AQRE model to allow for the possibility that a fraction of contestants myopically behave as if the next stage of the game is also the last.
Our limited foresight model accurately describes the observed behavior of contestants. According to the estimate, approximately 38 percent of the contestants simplify the game by looking only one step ahead.
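One way to picture the mixture structure (a schematic sketch, not the paper's estimation code; `mu` plays the role of the estimated myopic fraction, and all numeric values below are illustrative):

```python
import math

def logit(dv, lam):
    """Logit choice probability given an expected-value difference dv."""
    return 1.0 / (1.0 + math.exp(-lam * dv))

def predicted_spin_rate(dv_full, dv_myopic, lam, mu):
    """Mixture prediction: a fraction `mu` of contestants evaluates the
    spin/stop choice myopically (as if the next stage were the last),
    the rest under full foresight; both choose with logit noise."""
    return mu * logit(dv_myopic, lam) + (1.0 - mu) * logit(dv_full, lam)

# Illustration: full foresight favors spinning (dv > 0) while the myopic
# evaluation favors stopping (dv < 0), so a sizable myopic fraction
# depresses the predicted spin rate -- i.e., produces underspinning.
rate = predicted_spin_rate(dv_full=0.1, dv_myopic=-0.2, lam=5.0, mu=0.38)
```

Fitting `mu` alongside the precision parameter is what lets the model attribute part of the observed underspinning to limited foresight rather than to noise alone.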
Limited foresight thus better explains contestants’ spinning choices than omission bias. Furthermore, when we estimate a model that includes both these elements, we find that it does not significantly outperform the model that only includes limited foresight.
The overall conclusion, therefore, is that the high-stakes failures of backward induction in this game can be well explained by a combination of random evaluation errors and limited foresight, and that the role of omission bias is negligible.
Our results are striking considering the show’s long history. A natural question is whether contestants’ behavior converges towards the SPNE over time.
When we subdivide our sample into four periods, we find that the estimated proportion of contestants with limited foresight decreases from 55 percent in the first period to 24 percent in the last. Despite this strong learning effect, many contestants remain unable to properly backward induct after several decades of The Price Is Right.
The paper also contains several robustness analyses, which include tests of various alternative explanations (such as risk aversion and overconfidence). The results are highly robust.
The paper is joint work with Bouke Klein Teeselink, Martijn van den Assem, and Jason Dana, and is available on SSRN.