Wednesday, September 26, 2012

The Reese's Rainbow: Validating a Child's Delusional Compulsions



I think it's only fair to say that everyone has a few weird compulsions that are fairly benign.  One of mine is the way I eat certain types of candy.  Today we're going to talk about Reese's Pieces.

Ever since I was a child - certainly as long as I can remember - I've eaten Reese's Pieces in the same way.  Pour out a handful, then from that handful eat them in sets of three, one of each color: one orange, one yellow, and one brown.  When one color runs out, do the same with pairs of the remaining two colors.  When reduced to one color, eat those in pairs or just all at once, depending on how many are left.

Odd, to be certain.  Disruptive or detrimental?  Hardly.

The result, however, is that as long as I can remember I have been accidentally carrying out sampling studies on Reese's Candy.  Over time I realized that I'd been subconsciously internalizing some of the patterns.  In particular, I had become pretty confident that orange was always going to be the color left over at the end.  Brown and yellow seemed comparable, though my inclination was that yellow was the rarer piece, so to speak.

Well, it's a testable hypothesis.  So, let's test it.

I picked up a few bags of Reese's Pieces on sale a few days ago, and sat down to count through them.  These were the big bags, and for the record I guess the big bags have somewhere around 600 individual pieces in them.

The first bag had 565 pieces, 303 of them orange.  The second bag had a few more, with 629, and only 281 of them orange.  Both bags seemed to fit with my initial suspicion, though I was wary of making any claims given not only the variability of color but also the variability of bag size.

With that in mind I picked up a few more of the smaller boxes of Reese's Pieces and set to counting them.  They're quite a bit smaller, with only around a quarter as many pieces as the larger bags.  Each box again had orange showing up as the clear favorite.  The graph below highlights the relative proportions in each set (1 and 2 are the larger bags, 3-5 are boxes):


The trend seems to be there - orange is the most common color in each of the different samples.  A test of the proportions confirms that this is in fact significant, even with the small sample.  The mean proportion of orange pieces in packages of Reese's Pieces is significantly higher than the proportion of either yellow or brown.  
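If you'd like to run that sort of check yourself, here's a minimal sketch in Python using scipy (the counts are the two big bags above).  It tests each bag against the one-third orange you'd expect from an even three-color mix - a simpler stand-in for the proportion comparison described above:

from scipy.stats import binomtest

# Orange counts from the two large bags described above.
bags = [(303, 565), (281, 629)]  # (orange pieces, total pieces)

for orange, total in bags:
    # Null hypothesis: orange makes up 1/3 of pieces (an even mix).
    result = binomtest(orange, total, p=1/3, alternative='greater')
    print(f"{orange}/{total} orange: p = {result.pvalue:.2g}")

Both p-values come out vanishingly small, which squares with the eyeball test.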

That wraps up the first part, and it does seem that what I suspected from childhood is true - orange seems to be over-represented in bags of Reese's Pieces.  This statistically significant effect is large enough and consistent enough that I'm confident in saying that with only 5 samples.  I'm betting that the orange machine at the Reese's plant is dialed in somewhere at or just below 50% - the mean would suggest 48.96%, close enough to 50% that a larger investigation might reveal that they picked an easy half as something to shoot for. 

What is more difficult is parsing out the difference between yellow and brown.  While the suspicion from my years of sampling (that yellow is rarer than brown) seems to be somewhat reflected in the trend so far, there simply aren't enough data points yet to be very confident in what appears to be at best a small effect.  There are even situations (like the first box) where yellow showed up at a higher percentage. 

This is where you come in, reader.  I already have a whole lot of Reese's Pieces queued up to be eaten slowly, hopefully over the next few months or more.  I don't need to keep adding to that pile.  What I'm asking is for all of you to go out and pick up a bag, or a box, or a larger bag, and do some counts.  Teach a statistics class?  Have them do it. 

Post the numbers in a comment and we can revisit this at some point and run the numbers on a larger sample.  I'll even sweeten the deal (har har): if you post numbers, I'll mail you an (unopened) bag of Reese's Pieces for your trouble, provided I 1) know who you are, and 2) know where you live.  Consider it a buy one get one free.  Obviously, don't just post your address on here, but we can figure it out.  Terms and conditions apply, promotion may be canceled at any time without warning.

If there's a message in here anywhere, perhaps it's that science should be all around us.  Observe your world, form hypotheses, then test them.  Teach the lesson to kids - if it involves activities like counting and then eating a bunch of candy, they'll probably listen.

Wednesday, September 19, 2012

Horse Racing and You: The Basics & How to Win Every Race (Bet on All the Horses)

Horse racing.  There's a lot of numbers and statistics involved, but some of the best advice is probably the simplest: don't do it unless you know more than everyone else in the betting pool.

I'm not looking to become a gambling statistician, but a friend requested a post on the topic of horse racing and I figured it would be a fun topic to take a look at.  I'm not planning on looking for anything that will make me rich, just taking a look at what some of the numbers can tell us.  I'm also betting (haha) that I'll only cover the surface-level basics today, look at one quick example, and maybe revisit the topic in another post some day.

One of the ways that you can classify gambling games is by who you're betting against.  In games like slot machines or roulette, you're playing against the house.  The amount of money that can be won is capped only by the table limits.  If you're playing roulette and you bet $10,000 on red 12, you collect $360,000 from the house (a 35:1 payout plus your original stake).  There doesn't have to be anyone else playing, and the amount of other money on the table doesn't change your payout.  The house has to carry enough money to cover all bets - for more on this see the plot of the Ocean's movies.  As Captain Janeway once said, "Never bet against the house."

In games like horse racing, casino poker, and the lottery, all bets are pooled first so that the house can take a cut (or rake).  The remaining money is then what players have the opportunity to win.  You're probably most familiar with this from news stories on big-ticket lottery games.  A jackpot may be 200 million dollars one week - if no one wins and people continue buying tickets it may be 220 million the next.  The jackpot is increasing because the pooled money is increasing.  When a winner hits they might have the sole winning ticket, or they might have to split that jackpot with others who also hold that winning ticket (or 'close' tickets, like 5 of 6 numbers).

This is a good way to get a conceptual grip on horse racing.  When you play the lottery or poker, the house doesn't determine what you can win.  When you buy a lottery ticket you're making a bet that your number will be drawn.  Unlike roulette, there is no fixed amount you will win per dollar bet in the lottery, and betting more on the same number doesn't always increase your payout - if you buy $10,000 worth of the same ticket you're not going to win 10,000 times as much money.  If you're the sole winner you're actually going to win the same amount (accounting for the fact that you also increased the size of the jackpot by buying more tickets).  If you win alongside others you will hold more shares of the whole, but the total amount will still be capped.

Horse racing is incredibly similar to the lottery in this way.  You're not betting against the house, the house is simply acting as bookkeeper (and taking a cut of somewhere around 17% for that work).  You're betting against all the other people who are also betting.

Think of a horse race with 10 horses, and a 1-digit lottery pick-em ticket.  In each you have 10 choices for betting - 10 horses in the race and 10 single digit numbers in the lottery.

Now, this produces a weird lottery, as roughly 10% of those who enter will win.  If 100 people each buy a $1 ticket randomly, we'd expect around 10 to buy any given ticket.  That means we'd also expect about 10 people to win their share of the $100, after the house cut (let's call it 17%, or $17).  There's now $83 to be divided among 10 people, so each person wins $8.30 from their $1 bet.
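That arithmetic is easy to wrap in a function.  A minimal sketch, using the made-up pool and take from this example:

def parimutuel_payout(pool, take, winners):
    """Per-winner payout once the house removes its cut from the pool."""
    return pool * (1 - take) / winners

print(parimutuel_payout(100, 0.17, 10))  # 8.30 - the case above
print(parimutuel_payout(100, 0.17, 99))  # ~0.84 - the crowded favorite below
print(parimutuel_payout(100, 0.17, 1))   # 83.00 - the lone underdog below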

After all tickets were bought, and the house closed betting, they could issue payout tables of what share of the payout would be given to any outcome based on what tickets were bought and how many people were splitting the pool for each result.

That is horse racing.

Well, the difference is that in our lottery there's an equal chance of any number being drawn, and so people purchase their tickets without any sort of overall pattern.  In horse racing, however, some horses are known to be better than others, so more people will bet on them.  Imagine the above lottery example if news was leaked that the number 5 was the one that was going to be drawn.  Say that this caused almost everyone (99 people) to bet on number 5.  If 5 did in fact come up, 99 people would be dividing the post-take pool ($83), and earning about $0.84 - actually losing $0.16.

Say that one person likes betting on the underdog and bets on the number 8, and against predictions it is 8 that is drawn.  That person wins the whole post-take pool and walks away with $83 for their $1 bet.

This is what you hope for when betting on horses - that you make a winning bet that no one else has made.  Usually this occurs because you bet on, well, a horse that shouldn't have won.

The other big difference is that horse racing has a whole assortment of bets that you can make.  What we've discussed so far are straight win bets, where you have to pick the winner of the race.  In terms of simple bets there are also those where you pick a horse and win if it comes in first or second (place bets), or first, second, or third (show bets).  This increase in the allowed error of your prediction comes at a cost - a drastically reduced share of the winnings.  The rest of what I'll talk about today assumes that you only ever make WIN bets, and not place or show bets.

Beyond this, there are also what are known as exotic bets.  How would you feel learning calculus before having a good grip on algebra?  How about this: what's one of the main things that makes solid state drives better than traditional hard disk drives?  Fewer moving parts.  With fewer moving parts you have fewer parts that can go wrong.  Exotic bets in a horse race are full of moving parts.

Looking at this as a statistician I've already given my best advice: don't bet on horses unless you know something about the odds that others don't (like a secret injury, or secret horse-rocket surgery).  If you're going to bet, though, my advice would be to stick to simple bets for a while, and especially straight win bets, none of the place or show stuff.

That gets us to the stats for the week.  What I wanted to take a look at is actually a quote from the character Tom Haverford on the NBC show "Parks and Recreation":

"When I bet on horses, I never lose. Why? I bet on all the horses."
 - Episode 4.12, "Campaign Ad"

It's a good joke, and it made me laugh while watching the show, but it also made me think about whether that strategy could possibly hold any truth.  In a game like roulette the math is pretty simple.  There are 37 or 38 positions on the wheel, and a winning straight-up bet returns 36 times your stake (a 35:1 payout plus the original bet).  If you make a bet on every space on the board you'll lose money every time.  It's mathematically impossible for you to make money in that way.  This is because the odds are set by the house to make sure that - on average - the house wins.  This house motivation isn't present in pooled bets - as long as bets are being made the house will always get its cut.
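A quick sketch of that guaranteed loss, assuming an American wheel (38 spaces) and a straight-up bet returning 36 times its dollar:

cost = 38 * 1         # $1 on every space
payout = 36 * 1       # exactly one space wins, returning 36x its $1 bet
print(payout - cost)  # -2: a guaranteed $2 loss every spin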

The odds that you see on a board at the track are the odds as they would work out if no more bets were taken.  This is why when looking at historic odds you'll find 'Morning Line Odds' as well as 'Actual Odds'.  Back when this all had to be done by hand the odds weren't calculated continuously, and the house only had a few chances to sit down and do all the math.  One of those was the night before the race, which produced the morning line odds, used as an approximation.  The other was after final bets, and produced the actual payout odds.

There are a lot of horse races every year, so to keep this as digestible as possible I'm only going to focus on one of the biggest - the Kentucky Derby.  While this also drastically increases my ability to find historic records, horse racing records of this sort don't seem to be archived nearly as well as NFL records.  I've spent some time trying to find historic odds and payouts, and unfortunately only found full data on all contenders back to 2007.  I was able to find actual odds for winning horses back to 1985.  I'd like a larger data set, but hopefully this gives us something to think about.

To reiterate, what I'm curious about is what happens if you simply bet on every horse in the race.  You're always going to win, but what does it take to make up for the fact that you're also holding a bunch of losing tickets?

Somewhere around 20 horses take part in the Kentucky Derby, so you're going to have to buy somewhere around 20 tickets.  What will those 20 tickets get you?  Well, here's a graph of the odds you'd have faced in the last 6 Kentucky Derbys:


Some years are clearly better than others, and in three of the six years the favorite or near favorite won the race.  These years (2007, 2008, and 2010) are bad for this strategy, as what you want is something like 2009 where an unexpected horse wins.  We still have to factor in what you paid on losing tickets, though, which we can do by shifting the values of the y-axis:


Same lines, but now we can clearly see that the worst case for this system (where the favorite wins) always loses money, and the best case always makes money (though it's admittedly rare).  Only two of the six years would have turned an actual profit, and one of them would have been pretty nice ($30 profit for every block of 20 $1 tickets).

Now, this is only if you can find a place to get $1 tickets.  From what I've been able to figure out, the normal minimum bet is $2 for a WPS (win/place/show) ticket.  This is where it starts to get costly.  I was able to find odds to the dollar back to 2007 (hence the above graphs), but historical data is much easier to find on the win payout for a $2 ticket, so for the full analysis I'll be assuming you're buying $2 tickets on all horses.
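The bookkeeping for the strategy is then just the winner's payout minus the cost of the full set of tickets.  A minimal sketch, with 2009's roughly $103 win payout and 19-horse field plugged in purely as an illustration:

def all_horses_profit(win_payout, field_size, ticket_price=2):
    """Profit from holding a win ticket on every horse in one race."""
    return win_payout - ticket_price * field_size

# Illustrative numbers only: the 2009 winner paid roughly $103.20 on a
# $2 ticket, in a field of about 19 starters.
print(all_horses_profit(103.20, 19))  # ~65.20 profit on ~$38 staked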

I was able to piece together the Win ticket payout on a $2 bet for the winning horse all the way back to 1985.  Here's what the second graph looks like if we extend the yellow line back to 1985 under those conditions:

In a majority of these years (1985, 87, 88, 89, 90, 91, 93, 94, 96, 97, 98, 2000, 01, 03, 04, 06, 07, 08, 10, & 12) you'd take a bit of a hit due to a favorite or near favorite winning.  In 1992 you'd basically break even.  In some other years (1986, 95, 2002, and 11) you'd win a small amount - less than $20 profit for every set of tickets you bought (a set costing roughly $25-40).  That means those four years would net you less than a 2:1 return on your investment.

Now, in one last set of years (1999, 2005, and 09) you'd make back more than $20 profit for every set of tickets, giving you better than a 1.5:1 return on your investment.  In the best years (2005 and 2009), you'd make over $60 profit for every $40 invested, or a bit better than a 2.5:1 return.

That's a little less than impressive - remember that in roulette we were talking about 36x returns.  My biggest takeaway from this graph is that the cumulative value of the peaks doesn't seem to make up for the cumulative value of the valleys.  Well, we can graph that too:


So, that's hardly surprising.  In most gambling the odds are clearly stacked against you, and the best strategy is taking advantage of random noise in data trends to win some money and quickly get out of the game.  The longer you play roulette the more money you are guaranteed to lose.  The same appears to hold here.  You're looking to get in on this curve on an upswing - like 1995, 1999, 2005, or 2011, and then quit while you're ahead.

If you'd been making a $2 bet on every horse in every Kentucky Derby since 1985, at this point you'd have lost just around $175.

That might seem like the punchline, and it sort of is.  But there's one more thing that has come to mind while doing this. 

To bet $2 on every horse in the Kentucky Derby from 1985 to 2012 (28 races), you'd need $982.  Now, that might seem like a lot, but that's the most money you could possibly lose if you never won a single race.  We've already figured out that you're never going to lose a race, you're simply going to win less than what's needed to cover all your bets.

Without numbers on the full actual odds for Derbys before 2007 I can't run the math on what you're actually capable of losing.  All we know at this point is that it's less than $982.  If a far and away favorite - the worst case in my data is a 2:1 favorite in 2008 - won every year for the last 28 years (at which point they'd probably cancel or rig the Derby), you'd still only have lost $870 ($982 bet minus $112 recovered on winning bets).

For the absolute worst case risk of less than $900 ($31.07 a year), you've just won 28 straight Kentucky Derbys.  You've just held up the winning ticket for Mine That Bird and Giacomo.  You're basically biding time waiting for huge upsets that could drive crazy profits, knowing full well that they might simply never hit.  Every time you hear someone saying "I wish I could go back to the beginning of the season, put some money on the Cubbies!", you've just cashed in, and you should get out (but you won't).  You win every upset because you've bet on every upset.  There's no prediction and no guesswork - you're simply taking advantage of a betting structure where you're not betting against the house, but betting into a system with an ambivalent house and an interesting consumer-driven odds structure.

Are there safer bets?  Certainly!  I've already told you that you shouldn't gamble.  I've shown you that you're going to lose money in the long run.  But how about putting your money in the bank in 1985?  Compound interest over the course of 28 years should really grow that $870, right?
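A quick sketch of that counterfactual, with a made-up 5% annual rate:

principal, rate, years = 870, 0.05, 28  # 5% is an arbitrary assumption
print(principal * (1 + rate) ** years)  # ~3410 - roughly 4x the stake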

Well, it would, but there's something to be said about the excitement of it all.  There's certainly something to be said about holding the winning ticket to every Kentucky Derby for 28 years.  Maybe it's just a weird 'impress your friends' trick, but there are certainly worse places to drop nine hundred dollars (and again, that's the worst case - if you'd been doing it from 1985 to present you've actually only lost $175, or $6.25 a year).

I wasn't expecting the Tom Haverford betting strategy to hold any water whatsoever, but it curiously doesn't really seem to be as bad as I might have first thought.  You're not going to get rich, but you're not going to get poor either.  If you do it long enough you're at the very least going to have a cool story to tell your friends at the low cost of only $6.25 a year.  

You've certainly heard the quote (whose original source I can't determine) "Gambling is a tax on those who are bad at math".  Perhaps this method fits more into the Pasteur saying "Chance favors the prepared mind."

Maybe in the end, Tom Haverford actually said it best:

"When I bet on horses, I never lose. Why? I bet on all the horses."

I'll let you know how it goes next May.  

Wednesday, September 12, 2012

NFL 2012: Point-pocalypse?

In the last few days I've been reading a lot of analysis of this year's first week of NFL football, and have noticed a lot of people talking about the point production in a historic sense.  Most notably - and certainly most repeated - is Michael Signora's tweet pointing out that:

"791 points scored in Week 1 are most ever on Kickoff Weekend & 2nd-most for any week in NFL history (837, Week 12, 2008)" [recovered from Twitter; https://twitter.com/NFLfootballinfo/status/245488836669501441]

Certainly interesting, and those numbers are correct.  It is a fact that there were 791 points scored this week, and it is a fact that 791 is the greatest number of points scored in an opening week.  While these are indisputable facts, I come away from them with two major questions:

1) A league with more teams means more games per week and (all other things equal) more points per week.  The number of teams has changed over time, and the NFL is about as large as it has ever been.  Weeks later in the season also have teams on bye week, which means fewer games played per week.  How would things look if we controlled for the number of teams in the league and number of games played by instead looking at the average points per game?

2)  Is this week's increased scoring a high - but still normal - data point, attributable to normal fluctuations in score, or is this week's increased scoring far enough from the norm that we can justifiably speculate on differences in the league that might be driving that higher scoring?

Thinking about these things drove me to Google to try to find some data on historic scoring, and I luckily found a site where I could pull down some data.  That site is Pro Football Reference:

http://www.pro-football-reference.com/

And it is AMAZING.  It has pro football data back to the early days of the sport, and I could spend hours on the site poring over all sorts of statistics.  By the way, did you know that on December 8th, 1940, the Chicago Bears beat the Washington Redskins in that year's final championship with a score of 73-0, the largest blowout AND largest shutout in pro football history?  Today, though, I'm interested in scoring.

The Signora statistic above mentions that this week's 791 point total is the highest first week score in the history of the NFL.  While the NFL does stretch back farther, I'm only going to look at things back to the start of the modern NFL (after the merger with the AFL in 1970).  For now I also only want to look at the regular season, so that we can get all the teams in there.

The first thing we can look at is a trend of average point production per game, by year.  Keeping in mind that we only have one week for 2012 that gives us a graph that looks like this:


Hmm.  Okay, that looks pretty steady.  There are some dips here and there, notably one in the late 70s and one in the early 90s, but it looks like the average production for most games is right around 40 points.  This week's average was just shy of 50 points (49.44).  

Remember, though, that our data point for 2012 is just one week of games, and each of the other data points contains an entire year.  We can use those years that contain a number of weeks (all but 2012) to calculate confidence intervals for those years, allowing us to see if this past week is a week that could reasonably occur in any given year.  Putting those 95% confidence intervals on our above graph gives us another graph:
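For anyone who wants to reproduce this, here's a minimal sketch of the interval calculation, assuming you've already scraped each season's per-game point totals into a list (the data wrangling from Pro Football Reference is left out, and totals_by_year is a placeholder name):

import numpy as np
from scipy import stats

def season_interval(game_totals, confidence=0.95):
    """Mean total points per game for a season, with a t-based CI."""
    scores = np.asarray(game_totals, dtype=float)
    mean = scores.mean()
    sem = stats.sem(scores)  # standard error of the mean
    low, high = stats.t.interval(confidence, len(scores) - 1,
                                 loc=mean, scale=sem)
    return mean, (low, high)

# e.g. season_interval(totals_by_year[2011]) -> (mean, (lower, upper))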


Okay, this is starting to get interesting.  My initial inclination was that this past week wouldn't stand this far outside the norm, but you can see that this past week's average total score of 49.44 rests outside the 95% confidence intervals of every year since the start of the modern NFL.  This would seem to give a bit more solid footing to those speculating on what might be driving this above average scoring. 

Still, we can make this more specific than the above graphs.  We know that this past week was week one of the season.  The above charts compare last week to the average of all weeks in each given year, which inflates the confidence in the annual mean (and thus shrinks the confidence intervals).  What does last week look like when compared annually only with other week one games?  Well, it looks like this:




The confidence intervals are now quite a bit larger because we are now averaging across individual games instead of across individual weeks.  Regardless, now we're getting somewhere.  The big spike there about a decade ago actually is a decade ago - in 2002 the average game score in week one was 49.25 (remember, this year was 49.44).  The total number of points scored that week was 788 - only 3 points away from the scores of this week.  In all there are a number of years that show up on that higher end, fairly close to this last week.  

This raises a flag for me: week one scores in isolation look much more comparable to this past week's scores.  Sure, this past week is on the higher end of those scores, but in general the scores are much more clustered together.  A simple boxplot reveals that not only is the past week not an outlier in terms of weekly score, but also that only one year has been - and it's on the low end:
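(For the curious, a boxplot like that takes only a few lines with matplotlib, which flags outliers using the usual 1.5 x IQR rule.  The values below are placeholders except for the three mentioned in this post:)

import matplotlib.pyplot as plt

# Placeholder data - substitute the real 1970-2012 week-one averages.
week_one_averages = [40.0, 42.5, 38.9, 27.43, 44.1, 49.25, 49.44]

plt.boxplot(week_one_averages)
plt.ylabel("Average total points per game, week 1")
plt.show()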



That point at the bottom is 1977, when the average total game score for week one was only 27.43 points.  Five of the fourteen games were shutouts, and in two of the remaining games the losing team made only a single field goal.  That means the lowest-scoring quarter of the teams that week (half of the teams in half of the games) put up a combined score of 6 points.  Now THAT is an out of the ordinary week.  This past week?

It looks like the bottom line on this one is that while this week is out of the ordinary in terms of all weeks in any given season, it's not that out of the ordinary when compared just to other first week scores.  What would be really telling is if this trend continued.  If we get the same kind of numbers for week two we might be looking at a trend.  If we don't, we're likely to just be looking at the high end of normality.  Hopefully I'll revisit it sometime later in the season.

There's something else that I hadn't thought about which occurred to me after seeing how the data was broken down on Pro Football Reference.  That is: how does the overall trend of average scores per game by year hold up if you look at the points scored by the winning teams and losing teams separately?  It could be that this week had a much larger number of blowouts (higher scores by winning teams), or a lot more close games (higher scores by losing teams), or somewhere in the middle (higher scores by both).

We can skip right to the graph complete with 95% confidence intervals:



Maybe I'll talk about that one a little later in the season as well.  Consider the above a teaser. 

 


Wednesday, September 5, 2012

Games of The Price is Right: Plinko

Just yesterday, The Price is Right celebrated its 40th anniversary.  It's certainly among the most well known game shows of all time, and its placement during daytime TV makes it a wonderful time sink for children.  Through this, I suspect that The Price is Right is many children's first exposure to some fairly deep probability and statistics.

I've been watching The Price is Right since very early in my childhood, so it's always very painful to see people on it who aren't familiar with the strategies of different games, or even just of some of the basic probabilities underlying those games.  In short, The Price is Right is full of examples of and opportunities for the use of practical statistics.

In thinking of where to start with something like this I couldn't pass up perhaps the most well known game on The Price is Right: Plinko.  If you're not familiar, Plinko is a game where contestants can earn a number of little round disks (think hockey pucks, but a little larger in diameter and maybe a touch thinner) which they then drop down a pegboard with different valued gates at the bottom.  It looks like this:


And can be graphically represented as this:




  5    4    3    2    1    2    3    4    5     (drop gates)

    o    o    o    o    o    o    o    o
  o    o    o    o    o    o    o    o    o
    o    o    o    o    o    o    o    o
  o    o    o    o    o    o    o    o    o
    o    o    o    o    o    o    o    o
  o    o    o    o    o    o    o    o    o
    o    o    o    o    o    o    o    o
  o    o    o    o    o    o    o    o    o
    o    o    o    o    o    o    o    o
  o    o    o    o    o    o    o    o    o
    o    o    o    o    o    o    o    o
  o    o    o    o    o    o    o    o    o
    o    o    o    o    o    o    o    o

 100  500  1K    0   10K   0   1K   500  100   (prize slots)

The numbers up top are the different gates that contestants can drop the puck through at the beginning of the game.  I've numbered them starting at the center, and if we assume that the board doesn't have any problems, the gates should produce symmetric results about the center.

The idea of the pegboard is pretty simple, and we can make some assumptions about the way the puck travels down it that should help approximate what's happening.

Now, if we knew a lot more about the board we could get a lot more accurate.  We'd need to know the coefficient of friction between the puck and board, the slope of the board, the elasticity of the collision between the puck and pegs, the relative size of the puck and the pegs, and the starting velocity of the puck (the idea is to drop it from stationary, but a contestant could easily impart some force when letting it go).

The big assumption that I'm going to make here is that when the puck hits a peg it has a 50/50 chance of falling to either the left or the right.  For any single collision that is certainly at least a little off, but over the course of the path through the board I suspect those random differences should more or less cancel out.  I think this is also the spirit of what a pegboard is meant to do.

With this information it's pretty easy to calculate the cascading odds of the puck being in any given position on the board based on the gate in which it starts, and the probability that it will fall in any of the prize gates given that same starting gate.  These odds make for some good graphs:

Note that the y-axis changes across these graphs, and that the graphs only reflect the right side of the board.  The graphs for the left side would be mirror images, skewed in the opposite direction.

The first thing that should be apparent is the normal-like shape of the first graph, and the increasingly skewed shape of the following graphs as the puck starts out closer and closer to the sidewall.  Something that makes sense but that I hadn't thought about is the fact that in later graphs (gate 3 onward) the probability of the $100 slot is always lower than that of the slot reflected across the mode.

We can use the gate 5 graph as an example - as the graph starts to push up against the right sidewall the odds don't pile up at the wall but instead spread out to the left.  This is partly because any time the puck gets to the wall it has no option except kicking back to the left, whereas a puck in the center of the board can fall either right or left.

What I've wondered for a long time watching Plinko is which gate has the largest average payoff.  I've always suspected it to be the dead center (gate 1), but watching The Price is Right you see a lot of people who place the puck in plenty of different places.  They even look to the audience for guidance, as if the entire audience is likely to reach consensus on the issue.

Now that we have the probabilities for each of the prize values for each of the gates, it's a simple matter of calculating the expected value for each gate.  This is simply the sum of the products of each probability and its prize value, and it tells us the average amount of prize money that can be expected over a large number of drops from the same gate.
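Here's a minimal sketch of the whole calculation in Python.  The physical assumption is the 50/50 bounce from above; on top of that I'm modeling the board as 17 half-column positions (the 9 prize slots sit at the even ones) with reflecting side walls and 12 effective left/right bounces between gate and slot.  Under those modeling assumptions, the dynamic program below reproduces the expected values in the table that follows:

PRIZES = [100, 500, 1000, 0, 10000, 0, 1000, 500, 100]  # slots, left to right
ROWS = 12    # effective bounces between gate and slot (a modeling assumption)
WIDTH = 17   # half-column positions 0..16; slots at the even positions

def slot_probabilities(start):
    """Chance of landing in each prize slot from a given half-column."""
    probs = [0.0] * WIDTH
    probs[start] = 1.0
    for _ in range(ROWS):
        nxt = [0.0] * WIDTH
        for pos, mass in enumerate(probs):
            if mass == 0.0:
                continue
            if pos == 0:              # left wall: forced bounce right
                nxt[1] += mass
            elif pos == WIDTH - 1:    # right wall: forced bounce left
                nxt[WIDTH - 2] += mass
            else:                     # interior peg: 50/50 left or right
                nxt[pos - 1] += mass / 2
                nxt[pos + 1] += mass / 2
        probs = nxt
    return [probs[2 * slot] for slot in range(9)]

for gate in range(1, 6):              # gate 1 = center, gate 5 = wall
    start = 8 + 2 * (gate - 1)        # center column is position 8
    ev = sum(p * v for p, v in zip(slot_probabilities(start), PRIZES))
    print(f"Gate {gate}: ${ev:.2f} expected per puck")

For a single puck from the center gate that works out to $2,557.91, matching the first cell of the table below.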

On The Price is Right you start with one drop, but have the opportunity to win four more, giving you the capability of five drops and a total potential prize of $50,000.  Not shockingly, no one has ever come even close to this $50,000 value.  Here are the expected values for each of the gates, given the number of pucks the contestant is able to acquire:


Chances        1         2         3         4         5

Gate 1   2557.91   5115.82   7673.73  10231.64  12789.55
Gate 2   2265.92   4531.84   6797.76   9063.68  11329.60
Gate 3   1605.86   3211.72   4817.58   6423.44   8029.30
Gate 4   1009.08   2018.16   3027.24   4036.32   5045.40
Gate 5    780.37   1560.74   2341.11   3121.48   3901.85

These expected values confirm my long-held suspicion: the statistical advantage goes to the contestant who drops their pucks in the center gate.  For contestants able to get all five pucks and drop all five of them in the center gate, prize money should average $12,789.55.  Interestingly, using the gates just off of center (gate 2) doesn't lose you as much money as I would have thought - $291.99 for a single puck and $1459.95 for all five.  It also doesn't lose as much money as moving to the next gate, as can be seen if we examine the difference between expected values of adjacent gates (think of it as the first derivative):




If you've made the mistake of deciding to move away from the center spot, the marginal cost of that mistake is largest when moving from gate 2 to gate 3.  There's not much more to be gained dwelling on the magnitudes of these mistakes, though, and the result should be pretty clear:

For the biggest payoff in Plinko, use the center-most gate.