Written in a little bit of a hurry, but hopefully the ideas make sense…

In 2003 and 2004 Pepsi ran a summer promotion called “Play for a Billion.” There’s not much on YouTube about the contest, but one of the commercials did show up in a search:

It was a fun promotion that involved many Pepsi drinkers winning trips to Orlando (2003) and Los Angeles (2004), where they had a chance to win $1 billion on a game show broadcast on the WB network (2003) and on ABC (2004). As far as I know it was the last billion-dollar promotion before this year’s bracket contest with Quicken Loans. Tim Chartier interviewed me about those contests here:

http://blogs.siam.org/perfect-billion-dollar-madness/

In the Play for a Billion game show, the contestants picked 1,000 6-digit numbers from 000000 to 999999, and if any of them matched the winning number selected on the show, they could win the billion dollars. But there was one pretty interesting twist on the show that reminds me a little of the direction people have gone with Dan’s Money Duck problem. The conversation about that problem is happening here:

http://blog.mrmeyer.com/2014/confab-money-duck/#featuredcomments

Once the winning number in the Pepsi contest was selected, the 10 contestants with the “closest” numbers were brought on stage. “Closest” was defined as having the most digits correct (with tiebreakers that I don’t remember). These 10 contestants then played the following game:

There will be 10 rounds of the game. During any round you can ring a bell and receive the prize money offered during that round. The catch, though, is that by taking that prize money, you give up your number and have to leave the game. If no one rings in, the person who is “farthest” from the winning number is revealed, and that person is eliminated from the game with no prize. After 9 rounds there will be one person remaining. That person gets $1,000,000 automatically AND, if any of the 10 numbers from the people who were on stage matched the winning number, that person also gets $1,000,000,000.

The prizes for the rounds were something like this (I don’t remember the exact details):

Round 1: $20,000

Round 2: $30,000

Round 3: $40,000

Round 4: $50,000

Round 5: $60,000

Round 6: $70,000

Round 7: $80,000

Round 8: $90,000

Round 9: $100,000

So, the relationship to the Money Duck problem is through the concept of expected value. In the first round of the game described above, each of the 10 people has a chance at winning a guaranteed $1,000,000 prize. Since the contestants don’t know the winning number, they have no reason to believe they are any more or less likely to win than anyone else, and hence the expected value for each contestant at the start is $100,000. Why would you give that up for $20,000?
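Under that symmetry assumption (each remaining contestant is equally likely to be the last one standing, and ignoring the billion-dollar jackpot), the expected value of staying in is just the $1,000,000 split evenly over however many contestants remain. A rough sketch, using the approximate prize ladder above:

```python
# Expected value of staying in, for a contestant with no information:
# the $1,000,000 is split (in expectation) evenly among whoever remains.
grand_prize = 1_000_000
round_prizes = [20_000, 30_000, 40_000, 50_000,
                60_000, 70_000, 80_000, 90_000, 100_000]

for round_no, prize in enumerate(round_prizes, start=1):
    remaining = 11 - round_no  # 10 players in round 1, 9 in round 2, ...
    ev_staying = grand_prize / remaining
    print(f"Round {round_no}: ring in for ${prize:,} "
          f"vs. roughly ${ev_staying:,.0f} expected for staying")
```

By round 9 the sure-thing offer is $100,000 while the naive expected value of staying is $500,000, so on pure expected value you would never ring the bell.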

Well… the answer is that the decision isn’t just abstract math / probability / expected value. I should point out that at least one of the commenters on Dan’s site noted that running Dan’s activity with an economics teacher might be a good idea. I completely agree with that sentiment.

So, the first year the rounds played out like this (again, I forget the exact details, but this is close to what happened):

Round 1 – no one rings in and one person is eliminated.

Round 2 – no one rings in and one person is eliminated.

Round 3 – someone rings in and walks away with $40,000.

Round 4 – someone rings in and walks away with $50,000.

Round 5 – someone rings in and walks away with $60,000.

Round 6 – no one rings in and someone is eliminated.

Round 7 – someone rings in and walks away with $80,000.

Round 8 – someone rings in and walks away with $90,000.

Round 9 – no one rings in. One person wins one million, and “2nd place” leaves with nothing.

Watching this live was one of the most amazing experiences I ever had. Rarely, if ever, do you see this kind of tension.

The decisions of the contestants came down to two ways of thinking about the problem they were facing. One was expected value, and if you thought about it that way you’d never ring in. The other was more along the lines of: I unscrewed a bottle cap, won a trip to Florida, and now someone is offering me $50,000. Why wouldn’t I take it? Why not indeed. I find it hard to believe that the behavior of the contestants in this game was irrational, and I’ve debated the point with a lot of folks who understand probability far better than I do (for example, one of the companies involved in the promotion is run by a multiple-time world bridge champion).

However, I’m in the minority on the rational / irrational behavior point, and just about every article about the show talked about how silly the contestants were to take the smaller prizes.

So, what happened in year 2? No one rang in to take the prize money in any round. How very rational!

The “irrational” contestants who rang in early in the first year walked away with $320,000 in “small” prizes, and the “rational” ones who never rang in the second year walked away with nothing.

In part because of my experience with these promotions, I do not like the idea of approaching these problems purely on the basis of expected value and probability. At minimum the ideas of economic utility come into play, but many other things do as well.

## Comments

Suppose you have a game where you have a choice between either

$499,999 automatically

or

a 50-50 chance at winning $1,000,000

The expected value of the second game is $500,000, but does that extra $1 really mean the second game is more rational?

What about when the first game is $x? How low would x have to be before you’d prefer the second game?

Funnily enough, in one of the conversations I had about the two Pepsi shows, the guy I was talking to used essentially the same example. He told me that there was no circumstance in which he’d take the guaranteed money if it was less than half of the 50-50 prize amount.

My response: suppose you have no money in your pocket and have to pay someone $1,000 right now, or else something very bad is going to happen to you. You are then offered a 50/50 chance at $1 million or $1,000 on the spot. What now?
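One standard way to formalize that wealth dependence is expected utility with a concave utility function: ask what sure amount (the “certainty equivalent”) leaves you as happy as the gamble. A minimal sketch assuming log utility of final wealth; the log function and the wealth figures are my illustrative choices, not anything from the show:

```python
import math

def certainty_equivalent(wealth, p, prize):
    """Sure amount c such that log(wealth + c) equals the
    gamble's expected log utility (win prize with prob. p, else nothing)."""
    expected_utility = p * math.log(wealth + prize) + (1 - p) * math.log(wealth)
    return math.exp(expected_utility) - wealth

# A wealthy decision-maker values a 50-50 shot at $1,000,000
# at close to its $500,000 expected value...
print(certainty_equivalent(10_000_000, 0.5, 1_000_000))

# ...but someone with only $1,000 to their name values the
# same gamble at far less, so taking a small sure payout is rational.
print(certainty_equivalent(1_000, 0.5, 1_000_000))
```

Under these assumptions the second call comes out well under $100,000, which is one way to make the “why wouldn’t I take it?” reasoning precise.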

FIND THE VIDEO, MIKE!

I think part of the paradox (and this shows up in other philosophy/probability problems) is that the opportunity is a once-in-a-lifetime thing. Expected value makes more sense in an iterated game.

In gambling, there are times where the expected value goes to the player rather than the house, but the player has a limited bankroll so in practice it doesn’t work that way. (A good example is the always-double-your-bet-when-you-lose strategy.) The game show problem can be tackled the same way. One can think of $499,999 as $0 change, $0 as $-499,999 change, and $1,000,000 as $500,001 change. Because of the limited bankroll and single iteration problem, the player may not be able to afford the loss of $499,999.
