Matt Melton does a statistical take on something that should make you slap your head and wonder why nobody else has done the same thing before (and apologies if you’re that someone and I’ve failed to find you):
… I decided to take a look at how crowd size affected home field advantage in 2010. I didn’t look at any particular telling stat (penalties, yards, touchdowns, etc.), I simply looked at whether or not the home team won, and then what the crowd size for that game was. I then, very unscientifically I might add, divided the crowd attendance into ranges of 10,000 and calculated the winning percentage for each range. The results are summarized in the table below.
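The bucketing Matt describes is simple enough to sketch in a few lines. The game records and the 10,000-seat grouping below are illustrative only, not his actual data or code; this is just a minimal sketch of the method, assuming each game is reduced to an attendance figure and a home-win flag:

```python
# Hypothetical sketch of the attendance bucketing described above.
# The sample games are made up; only the method mirrors the description.
from collections import defaultdict

# Each record: (attendance, home_team_won)
games = [
    (105_000, True),
    (98_500, True),
    (62_300, False),
    (41_200, True),
    (38_900, False),
    (17_400, False),
]

# Group games into 10,000-seat attendance ranges.
buckets = defaultdict(list)
for attendance, home_won in games:
    lo = (attendance // 10_000) * 10_000
    buckets[lo].append(home_won)

# Home winning percentage for each range.
for lo in sorted(buckets):
    results = buckets[lo]
    pct = sum(results) / len(results)
    print(f"{lo:>7,}-{lo + 9_999:,}: {pct:.3f} ({len(results)} games)")
```

With a real season's worth of games in `games`, the printed table would be the same shape as the one Matt summarizes.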
Now before you start jumping up and down, Matt acknowledges there’s a certain chicken-and-egg aspect to this. The better programs tend to be bigger draws, so the near-lockstep climb in winning percentage as home crowd size increases may be attributed to either factor. (Sample size is also an issue, as it often is when the statistical base is generated by only 120 participants.)
But here’s where it gets interesting. Matt then filters his results through the point spread and finds that there’s correlation up to a point: “As the crowd size inched north of 50,000 or so, the added fans provided no real boost to the home team’s hopes of covering the spread.” And it’s his last chart that’s worth a look.
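The spread filter changes the question from "did the home team win?" to "did the home team cover?". A minimal sketch, again with made-up games; the sign convention here (the line is quoted against the home team, so negative means the home team is favored, and the home team covers when its margin beats the line) is an assumption, not necessarily the one Matt used:

```python
# Hypothetical sketch of the point-spread filter. Games and spreads are
# invented; the spread convention (negative = home favorite) is assumed.
games = [
    # (attendance, home_score, away_score, home_spread)
    (104_000, 35, 17, -14.5),
    (55_000, 24, 21, -7.0),
    (48_000, 20, 27, +3.5),
    (21_000, 31, 30, +6.5),
]

def covered(home_score, away_score, home_spread):
    # Home team covers when its margin beats the line:
    # margin + spread > 0 under the negative-favorite convention.
    return (home_score - away_score) + home_spread > 0

for attendance, h, a, s in games:
    bucket = (attendance // 10_000) * 10_000
    print(f"{bucket:>7,}+ range: covered={covered(h, a, s)}")
```

Bucketing cover rates instead of raw win rates is what exposes the flattening Matt found above roughly 50,000 fans.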
That’s only one season of data, but if that breakdown holds up as a trend, he’s right. Appreciative gamblers everywhere ought to tip their caps in his direction.