Looking At Team Performance Quarter-By-Quarter

AFL Football - as the cliche goes - is a game of four quarters. The benefit of this arrangement is that AFL scores provide twice as much information about the ebb and flow of each contest as the scores of any other form of football in this country. With the quarter-by-quarter information alone we can perform some interesting analyses for every team.
Read More

Modelling AFL Team Scoring

Today's blog is the first in a series that will look at statistically modelling the scoring behaviour of teams in the AFL. If you're profoundly reductionist about it, you can think about a team's footy score as being the product of the number of scoring shots it creates and the proportion of those scoring shots that it converts into goals.
Read More

Goalkicking Accuracy Across The Seasons

Last weekend's goal-kicking was strikingly poor, as I commented in the previous blog, and this led me to wonder about the trends in kicking accuracy across football history. Just about every sport I can think of has seen significant improvements in the techniques of those playing and this has generally led to improved performance. If that applies to football then we could reasonably expect to see higher levels of accuracy across time.
Read More

Scoring Shots: Not Just Another Statistic

For a while now I've harboured a suspicion that teams that trail at a quarter's end but that have had more scoring shots than their opponent have a better chance of winning than teams that trail by a similar amount but that have had fewer scoring shots than their opponent. Suspicions that are amenable to trial by data have a Constitutional right to their day in court, so let me take you through the evidence.
Read More

Why April's Conceivably Better Than March

It's an unlikely scenario I know, but if the players on the AFL Seniors lists ever got to talking about shared birthdays I'd wager they'd find themselves perplexed. As chestnuts go, the Birthday problem is about as hoary as they come. It's about the probability that two randomly selected people share a birthday and its longevity is due to the amazement most people express on discovering that you need just 23 randomly selected people to make it more likely than not that two or more of them will share a birthday. I'll venture that few if any of the 634 players on the current Seniors lists know that but, even if any of them did, they'd probably still be startled by what I'll call the AFL Birthday phenomenon.
Read More

The Other AFL Draft

Drafting is a tactic well-known to cyclists and motor-racers and confers an advantage on the drafter by reducing the effort that he or she (or his or her vehicle) needs to expend in order to move. There's a similar concept in round-robin sports where it's called the carry-over effect, which relates to the effect on a team's performance in a particular game that's due to the team its current opponent played in the previous round. Often, for example, it's considered advantageous to play teams the week after they've faced a difficult opponent, the rationale being that they'll have been 'softened up', demoralised and generally made to feel blah by the previous match.
Read More

Using a Ladder to See the Future

The main role of the competition ladder is to provide a summary of the past. In this blog we'll be assessing what it can tell us about the future. Specifically, we'll be looking at what can be inferred about the make up of the finals by reviewing the competition ladder at different points of the season.

I'll be restricting my analysis to the seasons 1997-2009 (which sounds a bit like a special category for Einstein Factor, I know) as these seasons all had a final 8, twenty-two rounds and were contested by the same 16 teams - not that this last feature is particularly important.

Let's start by asking the question: for each season and on average how many of the teams in the top 8 at a given point in the season go on to play in the finals?

2010 - In Top 8.png

The first row of the table shows how many of the teams that were in the top 8 after the 1st round - that is, of the teams that won their first match of the season - went on to play in September. A chance result would be 4 - half of the 8 early leaders, since only 8 of the 16 teams can play finals - and in 7 of the 13 seasons the actual number was higher than this. On average, just under 4.5 of the teams that were in the top 8 after 1 round went on to play in the finals.

This average number of teams from the current Top 8 making the final Top 8 grows steadily as we move through the rounds of the first half of the season, crossing 5 after Round 2, and 6 after Round 7. In other words, historically, three-quarters of the finalists have been determined after less than one-third of the season. The 7th team to play in the finals is generally not determined until Round 15, and even after 20 rounds there have still been changes in the finalists in 5 of the 13 seasons.

Last year is notable for the fact that the composition of the final 8 was revealed - not that we knew - at the end of Round 12 and this roster of teams changed only briefly, for Rounds 18 and 19, before solidifying for the rest of the season.

Next we ask a different question: if your team's in ladder position X after Y rounds, where, on average, can you expect it to finish?

2010 - Ave Finish.png

Regression to the mean is on abundant display in this table with teams in higher ladder positions tending to fall and those in lower positions tending to rise. That aside, one of the interesting features about this table for me is the extent to which teams in 1st at any given point do so much better than teams in 2nd at the same point. After Round 4, for example, the difference is 2.6 ladder positions.

Another phenomenon that caught my eye was the tendency for teams in 8th position to climb the ladder while those in 9th tend to fall, contrary to the overall tendency for regression to the mean already noted.

One final feature that I'll point out is what I'll call the Discouragement Effect (though, more cynically and perhaps more accurately, I might have called it the Priority Pick Effect), which seems to afflict teams that are in last place after Round 5. On average, these teams climb only 2 places during the remainder of the season.

Averages, of course, can be misleading, so rather than looking at the average finishing ladder position, let's look at the proportion of times that a team in ladder position X after Y rounds goes on to make the final 8.

2010 - Percent Finish in 8.png

One immediately striking result from this table is the fact that the team that led the competition after 1 round - which will be the team that won with the largest ratio of points for to points against - went on to make the finals in 12 of the 13 seasons.

You can use this table to determine when a team is a lock or is no chance to make the final 8. For example, no team has made the final 8 from last place at the end of Round 5. Also, two teams as lowly ranked as 12th after 13 rounds have gone on to play in the finals, and one team that was ranked 12th after 17 rounds still made the September cut.

If your team is in 1st or 2nd place after 10 rounds, you have history on your side for them making the top 8, and if they're higher than 4th after 16 rounds you can sport a similarly warm inner glow.

Lastly, if your aspirations for your team are for a top 4 finish here's the same table but with the percentages in terms of making the Top 4 not the Top 8.

2010 - Percent Finish in 4.png

Perhaps the most interesting fact to extract from this table is how unstable the Top 4 is. For example, even as late as the end of Round 21 only 62% of the teams in 4th spot have finished in the Top 4. In 2 of the 13 seasons a Top 4 spot has been grabbed by a team in 6th or 7th at the end of the penultimate round.

Testing the HELP Model

It had been rankling me that I'd not come up with a way to validate any of the LAMP, HAMP or HELP models whose development I chronicled in earlier blogs.

In retrospect, what I probably should have done is build the models using only the data for seasons 2006 to 2008 and then test the resulting models on 2009 data but you can't unscramble an egg and, statistically speaking, my hen's albumen and vitellus are well and truly curdled.

Then I realised that there is, though, another way to test the models - well, for now at least, to test the HELP model.

Any testing needs to address the major criticism that could be levelled at the HELP model, which is a criticism stemming from its provenance. The final HELP model is the one that, amongst the many thousands of HELP-like models that my computer script briefly considered (and, as we'll see later, of the thousands of billions that it might have considered), was able to be made to best fit the line betting data for 2008 and 2009, projecting one round into the future using any or all of 47 floating window models available to it.

From an evolutionary viewpoint the HELP model represents an organism astonishingly well-adapted to the environment in which its genetic blueprint was forged, but whether HELP will be the dinosaur or the horseshoe crab equivalent in the football modelling universe is very much an open question.

Test Details

With the possible criticism I've just described in mind, what I've done to test the HELP model is to estimate the fit that could be achieved with the modelling technique used to find HELP had the line betting result history been different but similar. Specifically, what I've done is taken the actual timeseries of line betting results, randomised it, run the same script that I used to create HELP and then calculated how well the best model the script can find fits the alternative timeseries of line betting outcomes.

Let me explain what I mean by randomising the timeseries of line betting results by using a shortened example. If, say, the real, original timeseries of line betting results were (Home Team Wins, Home Team Loses, Home Team Loses, Home Team Wins, Home Team Wins) then, for one of my simulations, I might have used the sequence (Home Team Wins, Home Team Wins, Home Team Wins, Home Team Loses, Home Team Loses), which is the same set of results but in a different order.

From a statistical point of view, it's important that the randomised sequences used have the same proportions of "Home Team Wins" and "Home Team Loses" as the original, real series, because part of the predictive power of the HELP model might come from its exploiting the imbalance between these two proportions. To give a simple example, if I fitted a model to the roll of a die and its job was to predict "Roll is a 6" or "Roll is not a 6", a model that predicted "Roll is not a 6" every time would achieve an 83% hit rate solely from picking up on the imbalance in the data to be modelled. The next thing you know, someone introduces a Dungeons & Dragons 20-sided die and your previously impressive Die Prediction Engine self-immolates before your eyes.
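To make the shuffling step concrete, here's a minimal Python sketch of a proportion-preserving randomisation. The data and names are purely illustrative - this isn't the actual MAFL script - but a permutation of the real sequence automatically keeps the Win/Lose proportions intact.

```python
import random

# Illustrative only: a toy line betting history (the real series covers seasons 2008 and 2009).
real_results = ["Home Win", "Home Loss", "Home Loss", "Home Win", "Home Win"]

def randomised_history(results, seed=None):
    """Return the same set of results in a random order.

    Because this is a permutation of the original series, the proportions of
    'Home Win' and 'Home Loss' are exactly preserved, as required.
    """
    shuffled = list(results)
    random.Random(seed).shuffle(shuffled)
    return shuffled

print(randomised_history(real_results, seed=1))
```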

Having created the new line betting results history in the manner described above, the simulation script proceeds by creating the set of floating window models - that is, the models that look only at the most recent X rounds of line betting and home team bookie price data for X ranging from 6 to 52 - then selects a subset of these models and determines the week-to-week weighting of these models that best fits the most recent 26 rounds of data. This optimal linear combination of floating window models is then used to predict the results for the following round. You might recall that this is exactly the method used to create the HELP model in the first place.

The model built from the currently best-fitting linear combination of floating window models is, after a fixed number of candidate models has been considered, declared the winner, and the percentage of games that it correctly predicts is recorded. The search then recommences with a different set of line betting outcomes, again generated using the method described earlier, and a new winning model is found for this line betting history and its performance noted.

In essence, the models constructed in this way tell us to what extent the technique I used to create HELP can be made to fit a number of arbitrary sequences of line betting results, each of which has the same proportion of Home Team wins and losses as the original, real sequence. The better the average fit that I can achieve to such an arbitrary sequence, the less confidence I can have that the HELP model has actually modelled something inherent in the real line betting results for seasons 2008 and 2009 and the more I should be concerned that all I've got is a chameleonic modelling technique capable of creating a model flexible enough to match any arbitrary set of results - which would be a bad thing.

Preliminary Test Results

You'll recall that there are 47 floating window models that can be included in the final model, one floating window model that uses only the past 6 weeks, another that uses the past 7 weeks, and so on up to one that uses the past 52 weeks. If you do the combinatorial maths you'll discover that there are almost 141,000 billion different models that can be constructed using one or more of the floating window models.

The script I've written evaluates one candidate model about every 1.5 seconds so it would take about 6.7 million years for it to evaluate them all. That would allow us to get a few runs in before the touted heat death of the universe, but it is at the upper end of most people's tolerance for waiting. Now, undoubtedly, my code could do with some optimisation, but unless I can find a way to make it run about 13 million times faster it'll be quicker to revert to the original idea of letting the current season serve as the test of HELP rather than use the test protocol I've proposed here. Consequently I'm forced to accept that any conclusions I come to about whether or not the performance of the HELP model is all down to chance are of necessity only indicative.
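For anyone who wants to check the arithmetic, using only the figures quoted above and taking a season-long wait to be roughly six months:

\[
2^{47} - 1 \approx 1.41 \times 10^{14} \text{ candidate models}, \qquad 1.41 \times 10^{14} \times 1.5\,\text{s} \approx 2.1 \times 10^{14}\,\text{s} \approx 6.7 \text{ million years}
\]
\[
\frac{2.1 \times 10^{14}\,\text{s}}{\approx 1.6 \times 10^{7}\,\text{s in a six-month season}} \approx 13 \text{ million}
\]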

That said, there is one statistical modelling 'trick' I can use to improve the potential for my script to find "good" models and that is to bias the combinations of floating window models that my script considers towards those combinations that are similar to ones that have already demonstrated above-average performance during the simulation so far. So, for example, if a model using the 7-round, 9-round and 12-round floating window models looks promising, the script might next look at a model using the 7-round, 9-round and 40-round floating window models (ie change just one of the underlying models) or it might look at a model using the 7-round, 9-round, 12-round and 45-round floating window models (ie add another floating window model). This is a very rudimentary version of what the trade calls a Genetic Algorithm and it is exactly the same approach I used in finding the final HELP model.
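As a rough illustration of that neighbourhood search, here's how the 'add a window' and 'swap a window' moves might be coded. This is only a sketch of the general idea, not the actual MAFL script, and the 50/50 choice between the two moves is my own illustrative assumption.

```python
import random

WINDOW_SIZES = range(6, 53)  # the 47 available floating window models (6-round to 52-round)

def mutate(model, rng=random):
    """Return a neighbour of `model` (a set of window sizes) by either adding an
    unused window or swapping one existing window for an unused one."""
    model = set(model)
    unused = sorted(set(WINDOW_SIZES) - model)
    if unused and rng.random() < 0.5:
        # add another floating window model to the combination
        model.add(rng.choice(unused))
    elif len(model) > 1:
        # change just one of the underlying models: drop a window, add an unused one
        model.remove(rng.choice(sorted(model)))
        if unused:
            model.add(rng.choice(unused))
    return model

print(mutate({7, 9, 12}, random.Random(0)))
```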

You might recall that HELP achieved an accuracy of 60.8% across seasons 2008 and 2009, so the statistic that I've had the script track is how often the winning model created for a given set of line betting outcomes performs as well as or better than the HELP model.
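In other words, the tracked statistic is just an empirical p-value: the share of simulated 'best' models that do at least as well as HELP's 60.8%. A trivial sketch of that calculation, with made-up accuracies:

```python
# Hypothetical accuracies of the winning model found for each randomised history.
simulated_best_accuracies = [0.58, 0.62, 0.55, 0.61, 0.57, 0.59, 0.63, 0.56]

HELP_ACCURACY = 0.608  # HELP's actual accuracy across seasons 2008 and 2009

p_value_estimate = (sum(a >= HELP_ACCURACY for a in simulated_best_accuracies)
                    / len(simulated_best_accuracies))
print(p_value_estimate)  # the proportion of simulations that match or beat HELP
```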

From the testing I've done so far my best estimate of the likelihood that the HELP model's performance can be adequately explained by chance alone is between 5 and 35%. That's maddeningly wide and spans the interval from "statistically significant" to "wholly unconvincing" so I'll continue to run tests over the next few weeks as I'm able to tie up my computer doing so.

Another Day, Another Model

In the previous blog I developed models for predicting victory margins and found that the selection of a 'best' model depended on the criterion used to measure performance.

In this blog I'll review the models that we developed and then describe how I created another model, this one designed to predict line betting winners.

The Low Average Margin Predictor

The model that produced the lowest mean absolute prediction error (MAPE) was constructed by combining the predictions of two other models. Both constituent models were of the type I've been calling floating window models: one looked only at the victory margins and bookies' home team prices for the most recent 22 rounds, and the other looked at the same data but only for the most recent 35 rounds.

On their own neither of these two models produce especially small MAPEs, but optimally combined they produce an overall model with a 28.999 MAPE across seasons 2008 and 2009 (I know that the three decimal places is far more precision than is warranted, but any rounding's going to nudge it up to 29 which just doesn't have the same ability to impress. I consider it my nod to the retailing industry, which persists in believing that price proximity is not perceived linearly and so, for example, that a computer priced at $999 will be thought meaningfully cheaper than one priced at $1,000).

The optimal weightings in the overall model were found by calculating the linear combination of the underlying models that would have performed best over the most recent 26 weeks of the competition, and then using those weights for the current week's predictions. These weights change from week to week as one model or the other proves better at predicting victory margins; that is what gives this model its predictive chops.
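Here's a minimal Python sketch of that weighting step, assuming we already have each underlying floating window model's margin predictions game by game. The 26-round optimisation is approximated as 'the most recent 26 rounds' worth of games', and the use of an unconstrained least-squares fit with an intercept is my guess at the mechanics rather than a detail from the post; the data are simulated.

```python
import numpy as np

def combine_predictions(pred_a, pred_b, actual_margins, history_games=26):
    """Weight two floating window models' margin predictions using recent history,
    then apply those weights to the current week's games.

    pred_a, pred_b : arrays of each model's predicted margins, oldest first,
                     with the final entries being this week's (unplayed) games.
    actual_margins : actual margins for all but this week's games.
    history_games  : stand-in for 'the games from the most recent 26 rounds'.
    """
    n_hist = len(actual_margins)
    fit_slice = slice(max(0, n_hist - history_games), n_hist)
    # Design matrix: an intercept plus the two models' historical predictions.
    X = np.column_stack([np.ones(n_hist), pred_a[:n_hist], pred_b[:n_hist]])[fit_slice]
    y = np.asarray(actual_margins)[fit_slice]
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Apply the fitted weights to this week's component predictions.
    this_week = np.column_stack([np.ones(len(pred_a) - n_hist),
                                 pred_a[n_hist:], pred_b[n_hist:]])
    return this_week @ weights

# Toy example: 30 past games plus 2 games to predict this week.
rng = np.random.default_rng(0)
true = rng.normal(0, 30, 32)
model_22 = true + rng.normal(0, 25, 32)   # stand-in for the 22-round floating window model
model_35 = true + rng.normal(0, 28, 32)   # stand-in for the 35-round floating window model
print(combine_predictions(model_22, model_35, true[:30]))
```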

This low MAPE model henceforth I shall call the Low Average Margin Predictor (or LAMP, for brevity).

The Half Amazing Margin Predictor

Another model we considered produced margin predictions with a very low median absolute prediction error. It was similar to the LAMP but used four rather than two underlying models: the 19-, 36-, 39- and 52-round floating window models.

It boasted a 22.54 point median absolute prediction error over seasons 2008 and 2009, and its predictions have been within 4 goals of the actual victory margin in a tick over 52% of games. What destroys its mean absolute prediction error is its tendency to produce victory margin predictions that are about as close to the actual result as calcium carbonate is to coagulated milk curd. About once every two-and-a-half rounds one of its predictions will prove to be 12 goals or more distant from the actual game result.

Still, its median absolute prediction error is truly remarkable, which in essence means that its predictions are amazing about half the time, so I shall name it the Half Amazing Margin Predictor (or HAMP, for brevity).

In their own highly specialised ways, LAMP and HAMP are impressive but, like left-handed chess players, their particular specialities don't appear to provide them with any exploitable advantage. To be fair, TAB Sportsbet does field markets on victory margins and it might eventually prove that LAMP or HAMP can be used to make money on these markets, but I don't have the historical data to test this now. I do, however, have line market data that enables me to assess LAMP's and HAMP's ability to make money on this market, and they exhibit no such ability. Being good at predicting margins is different from being good at predicting handicap-adjusted margins.

Nonetheless, I'll be publishing LAMP's and HAMP's margin predictions this season.

HELP, I Need Another Model

Well, if we want a model that predicts line market winners we really should build a dedicated model for the purpose, and that's what I'll describe next.

The type of model that we'll build is called a binary logit. These can be used to fit a model to any phenomenon that is binary - that is, two-valued - in nature. You could, for example, fit one to model the characteristics of people who do or don't respond to a marketing campaign. In that case, the binary variable is campaign response. You could also, as I'll do here, fit a binary logit to model the relationship between home team price and whether or not the home team wins on line betting.

Fitting and interpreting such models is a bit more complicated than fitting and interpreting models built using the ordinary least squares method, which we covered in the previous blog. For this reason I'll not go into the details of the modelling here. Conceptually, though, all we're doing is fitting an equation that relates the Home team's head-to-head price to its probability of winning on line betting.
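For the curious, here's a minimal sketch of fitting such a binary logit in Python with statsmodels. The data are simulated, and the exact functional form (price entering directly rather than as 1/price) is a choice I've made purely for illustration, not necessarily the one used in the MAFL script.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative data only: home team head-to-head prices and whether the home
# team won on line betting (1) or not (0).
rng = np.random.default_rng(1)
home_price = rng.uniform(1.2, 5.0, 200)
line_win = (rng.random(200) < 1 / (1 + np.exp(-(1.5 - 0.6 * home_price)))).astype(int)

# Binary logit: P(home wins on line) = 1 / (1 + exp(-(a + b * price))).
X = sm.add_constant(home_price)
logit_model = sm.Logit(line_win, X).fit(disp=False)
print(logit_model.params)               # fitted a and b
print(logit_model.predict([[1.0, 1.90]]))  # predicted probability at a $1.90 home price
```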

For this modelling exercise I have again created 47 floating window models of the sort I've just described, one model that uses price and line betting result data for only the last 6 rounds, another that uses the same data for the last 7 rounds, and so on up to one that uses data from the last 52 rounds.

Then, as I did in creating HAMP and LAMP, I looked for the combination of floating window models that best predicts winning line bet teams.

The overall model I found to perform best combines 24 of the 47 floating window models - I'll spare you the Lotto-like list of those models' numbers here. In 2008 this model predicted the line betting winner 57% of the time and in 2009 it predicted 64% of such winners. Combined, that gives it a 61% average across the two seasons. I'll call this model the Highly Evolved Line Predictor (or HELP), the 'highly evolved' part of the name in recognition of the fact that it was selected because of its fitness in predicting line betting winners in the environment that prevailed across the 2008 and 2009 seasons.

Whether HELP will thrive in the new environment of the 2010 season will be interesting to watch, as indeed will be the performance of LAMP and HAMP.

In my previous post I drew the distinction between fitting a model and using it to predict the future and explained that a model can be a good fit to existing data but turn out to be a poor predictor. In that context I mentioned the common statistical practice of fitting a model to one set of data and then measuring its predictive ability on a different set.

HAMP, LAMP and HELP are somewhat odd models in this respect. Certainly, when I've used them to predict they're predicting for games that weren't used in the creation of any of their underlying floating window models. So that's a tick.

They are, however, fitted models in that I generated a large number of potential LAMPs, HAMPs and HELPs, each using a different set of the available floating window models, and then selected those models which best predicted the results of the 2008 and 2009 seasons. Accordingly, it could well be that the superior performance of each of these models can be put down to chance, in which case we'll find that their performances in 2010 will drop to far less impressive levels.

We won't know whether or not we're witnessing such a decline until some way into the coming season but in the meantime we can ponder the basis on which we might justify asserting that the models are not mere chimera.

Recall that each of the floating window models uses as predictive variables nothing more than the price of the Home team. The convoluted process of combining different floating window models with time-varying weights for each means that, in essence, the predictions of HAMP, LAMP and HELP are all just sophisticated transformations of one number: the Home team price for the relevant game.

So, for HAMP, LAMP and HELP to be considered anything other than statistical flukes it needs to be the case that:

  1. the TAB Sportsbet bookie's Home team prices are reliable indicators of Home teams' victory margins and line betting success
  2. the association between Home team prices and victory margins, and between Home team prices and line betting outcomes varies in a consistent manner over time
  3. HAMP, LAMP and HELP are constructed in such a way as to effectively model these time-varying relationships

On balance I'd have to say that these conditions are unlikely to be met. Absent the experience gained from running these models live during a fresh season then, there's no way I'd be risking money on any of these models.

Many of the algorithms that support MAFL Funds have been developed in much the same way as I've described in this and the previous blog, though each of them is based on more than a single predictive variable and most of them have been shown to be profitable in testing using previous seasons' data and in real-world wagering.

Regardless, a few seasons of profitability doesn't rule out the possibility that any or all of the MAFL Fund algorithms have just been extremely lucky.

That's why I'm not retired ...

There Must Be 50 Ways to Build a Model (Reprise)

Okay, this posting is going to be a lot longer and a little more technical than the average MAFL blog (and it's not as if the standard fare around here could be fairly characterised as short and simple).

Anyway, over the years of MAFL, people have asked me about the process of building a statistical model in sufficient number and with such apparent interest that I felt it was time to write a blog about it.

Step one in building a model is, as in life, finding a purpose and the purpose of the model I'll be building for this blog is to predict AFL victory margins, surely about as noble a purpose as a model can aspire to. Step two is deciding on the data that will be used to build that model, a decision heavily influenced by expedience; often it's more a case of 'what have I already got that might be predictive?' rather than 'what will I spend the next 4 weeks of my life trying to source because I've an inkling it might help?'.

Expediently enough, the model I'll be building here will use a single input variable: the TAB Sportsbet price of the home team, generally at noon on Wednesday before the game. I have this data going back to 1999, but I've personally recorded prices only since 2006. The remainder of the data I sourced from a website built to demonstrate the efficacy of the site-owner's subscription-based punting service, which makes me trust this data about as much as I trust on-site testimonials from 'genuine' customers. We'll just be using the data for the seasons 2006 to 2009.

Fitting the Simplest Model

The first statistical model I'll fit to the data is what's called an ordinary least-squares regression - surely a name to cripple the self-esteem of even the most robust modelling technique - and is of the form Predicted Margin = a + (b / Home Team Price).

The ordinary least-squares method chooses a and b to minimise the sum of the (squared) differences between the actual victory margin and that which would be predicted using it and, in this sense, 'fits' the data best of all the possible choices of a and b that we could make.
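Because the model is linear in (1 / price), fitting it is a one-line regression on the transformed predictor. Here's a minimal sketch using simulated data in place of the real 2006-2009 prices and margins; the real fit, shown below, produced a = -49.17 and b = 96.31.

```python
import numpy as np

# Illustrative only: simulated home prices and margins standing in for 2006-2009 data.
rng = np.random.default_rng(2)
home_price = rng.uniform(1.2, 6.0, 700)
margin = -49.17 + 96.31 / home_price + rng.normal(0, 30, 700)

# Ordinary least squares on the transformed predictor 1/price:
# Predicted Margin = a + b / Home Team Price.
b, a = np.polyfit(1 / home_price, margin, 1)
print(a, b)   # should land near -49 and 96 for this simulated data
```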

We saw the result of fitting this model to the 2006-2009 data in an earlier blog:

Predicted Margin = -49.17 + 96.31 / Home Team Price

This model fits the data for seasons 2006 to 2009 quite well. The most common measure of how well a model of this type fits is what's called the R-squared and, for this model, it's 0.236, meaning that the model explains a little less than one-quarter of the variability in margins across games.

But this is a difficult measure to which to attach any intuitive meaning. Better perhaps is to know that, on average, the predictions of this model are wrong by 29.3 points per game; that, for one-half of the games, they are within 24.1 points of the actual result; and that, for 27% of the games, they are within 12 points.

These results are all very promising but it would be a rookie mistake to start using this model in 2010 with the expectation that it will explain the future as well as it has explained the past. It's quite common for a statistical model to fit existing data well but to forecast as poorly as a surprised psychic ('Jeez, I didn't see that coming!').

Why? Because forecasting and fitting are two very different activities. When we build the model we deliberately make the fit as good as it can be and this can mean that the model we create doesn't faithfully represent the process that created that data. This is known in statistical circles - which, I guess, are only round on average - as 'overfitting' the data and it's one of the many things over which we obsess.

Overfitting is less likely to be a problem for the current model since it has only one variable in it and overfitting is more commonly a disease of multi-variable models, but it's something that it's always wise to check. A bit like checking that you've turned the stove off before you leave home.

Testing the Model

The biggest problem with modelling the future is that it hasn't happened yet (with apologies to whoever I stole or paraphrased that from). In modelling, however, we can create an artificial reality where, as far as our model's concerned, the future hasn't yet happened. We do this by fitting the model to just a part of the data we have, saving some for later as it were.

So, here we could fit a model to the 2006 season's data alone and use the resulting model to predict the 2007 results. We could then repeat this by fitting a model to the 2007 data only and then use that model to predict the 2008 results, and then do something similar for 2009. Collectively, I'll call the models that I've fitted using this approach "Single Season" models.

Each Single Season model's forecasting ability can be calculated from the difference between the predictions it makes and the results of the games in the subsequent season. If the Single Season models overfit the data then they'll tend to fit the data well but predict the future badly.
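A minimal sketch of that fit-on-one-season, test-on-the-next scheme follows. The data here are simulated stand-ins; the real numbers appear in the table below.

```python
import numpy as np

def fit_margin_model(prices, margins):
    """Fit Predicted Margin = a + b / price by least squares; return (a, b)."""
    b, a = np.polyfit(1 / np.asarray(prices), np.asarray(margins), 1)
    return a, b

def mean_abs_error(a, b, prices, margins):
    preds = a + b / np.asarray(prices)
    return np.mean(np.abs(preds - np.asarray(margins)))

# Illustrative data only: one (prices, margins) pair per season.
rng = np.random.default_rng(3)
seasons = {}
for year in (2006, 2007, 2008, 2009):
    p = rng.uniform(1.2, 6.0, 185)
    seasons[year] = (p, -49 + 96 / p + rng.normal(0, 30, 185))

# The "Single Season" scheme: fit to one season, test on the next.
for train_year, test_year in [(2006, 2007), (2007, 2008), (2008, 2009)]:
    a, b = fit_margin_model(*seasons[train_year])
    mae = mean_abs_error(a, b, *seasons[test_year])
    print(f"Fitted to {train_year}, mean APE on {test_year}: {mae:.1f} points")
```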

The results of fitting and using the Single Season models are as follows:

2010 - Bookie Model Comparisons.png

The first column, for comparative purposes, shows the results for the simple model fitted to the entire data set (ie all of 2006 to 2009), and the next three columns show the results for each of the Single Season models. The final column averages the results for all the Single Season models and provides results that are the most directly comparable with those in the first column.

On balance, our fears of overfitting appear unfounded. The average and median prediction errors are very similar, although the Single Season models are a little worse at making predictions that are within 3 goals of the actual result. Still, the predictions they produce seem good enough.

What Is It Good For?

The Single Season approach looks promising. One way that it might have practical value is if it can be used to predict the handicap winner of each game at a rate sufficient to turn a profit.

Unfortunately, it can't. In 2007 and 2008 it does slightly better than chance, predicting 51.4% of handicap winners, but in 2009 it predicts only 48.1% of winners. None of these performances is good enough to turn a profit since, at odds of $1.90, you need to tip at better than 52.6% just to break even.
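That break-even figure comes straight from the odds: at $1.90 a bet returns 1.90 times the stake when it wins, so the tipping rate needed just to get your money back is

\[
p_{\text{break-even}} = \frac{1}{1.90} \approx 0.526.
\]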

In retrospect, this is not entirely surprising. Using a bookie's own head-to-head prices to beat him on the line market would be just too outrageous.

Hmmm. What next then?

Working with Windows

Most data, in a modelling context, has a brief period of relevance that fades and, eventually, expires. In attempting to predict the result of this week's Geelong v Carlton game, for example, it's certainly relevant to know that Geelong beat St Kilda last week and that Carlton lost to Melbourne. It's also probably relevant to know that Geelong beat Carlton when they last played 11 weeks ago, but it's almost certainly irrelevant to know that Carlton beat Collingwood in 2007. Finessing this data relevance envelope by tweaking the weights of different pieces of data depending on their age is one of the black arts of modelling.

All of the models we've constructed so far in this blog have a distinctly black-and-white view of data. Every game in the data set that the model uses is treated equally regardless of whether it pertains to a game played last week, last month, or last season, and every game not in the data set is ignored.

There are a variety of ways to deal with this bipolarity, but the one I'll be using here for the moment is what I call the 'floating window' approach. Using it, a model is always constructed using the most recent X rounds of data. That model is then used to predict just the current week's games and is rebuilt the following week for the subsequent week's action. So, for example, if we built a model with a 6-round floating window then, in looking to predict the results for Round 8 of a given season, we'd use the results for Rounds 2 through 7 of that season. The next week we'd use the results for Rounds 3 through 8, and so on. For the early rounds of the season we'd reach back and use last year's results, including finals.
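Here's a minimal sketch of how such a window might be sliced out of a results history. The (season, round, game_data) structure is purely illustrative, not the actual MAFL data layout.

```python
def floating_window(results, season, round_number, window):
    """Return the games from the `window` rounds immediately preceding
    `round_number` of `season`, reaching back into the previous season
    (finals included) when the current season can't supply enough rounds.

    `results` is assumed to be a chronologically ordered list of
    (season, round, game_data) tuples.
    """
    earlier = [g for g in results if (g[0], g[1]) < (season, round_number)]
    recent_rounds = set(sorted({(g[0], g[1]) for g in earlier})[-window:])
    return [g for g in earlier if (g[0], g[1]) in recent_rounds]

# For example, a 6-round window for Round 8 of 2009 uses Rounds 2 to 7 of 2009.
toy_results = ([(2008, r, f"2008 R{r} games") for r in range(1, 25)]
               + [(2009, r, f"2009 R{r} games") for r in range(1, 9)])
print({(g[0], g[1]) for g in floating_window(toy_results, 2009, 8, window=6)})
```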

So, next, I've created 47 models using floating windows ranging from 6-round to 52-round. Their performance across seasons 2008 and 2009 is summarised in the following charts.

First let's look at the mean and median APEs:

2010 - Floating Window APE.png

Broadly what we see here is that, in terms of mean APE, larger floating windows are better than smaller ones, but the improvement is minimal from about an 11-round window onwards. The median APE story is quite different. There is a marked minimum with a 9-round floating window, and 8-round and 10-round floating windows also perform well.

Next let's take a look at how often the 47 models produce predictions close to the actual result:

2010 - Floating Window Accuracy.png

The top line charts the percentage of time that the relevant model produces predictions that are 3-goals or less distant from the actual result. The middle line is similarly constructed but for a 2-goal distance, and the bottom line is for a 1-goal distance.

Floating windows in the 8- to 11-round range all perform well on all three metrics, consistent with their strong performance in terms of median APE. The 16-round, 17-round and 18-round floating window models also perform well in terms of frequently producing predictions that are within 2-goals of the actual victory margin.

Next let's look at how often the 47 models produce predictions that are very wrong:

2010 - Floating Window Accuracy 36.png

In this chart, unlike the previous chart, lower is better. Here we again find that larger floating windows are better than smaller ones, but only to a point, the effect plateauing out with floating windows in the 30s.

Again, though, to consider each model's potential punting value we can look at its handicap betting performance.

2010 - Floating Window Line.png

On this measure, only the model with an 11-round floating window seems to have any exploitable potential.

But, like Columbo, we just have one more question to ask of the data ...

Dynamic Weighted Floating Windows

(Warning: This next bit hurts my head too.)

We now have 47 floating window models offering an opinion on the likely outcomes of the games in any round. What if we pooled those opinions? But not all opinions are of equal value, so which opinions should we include and which should we ignore? What if we determined which opinions to pool based on the ability of different subsets of those 47 models to fit the results of, say, the last 26 rounds before the one we're trying to predict? And what if we updated those weights each round based on the latest results?

Okay, I've done all that (and yes it took a while to conceptualise and code, and my first version, previously published here, had an error that caused me to overstate the predictive power of one of the pooled models, but I got there eventually). Here's the APE data again now including a few extra models based on this pooling idea:

2010 - Floating Window APE with Dyn.png

(The dynamic floating window model results are labelled "Dynamic Linear I (22+35)" and "Dynamic Linear II (19+36+39+52)". The numbers in brackets are the Floating Window models whose forecasts have been pooled to form the Dynamic Linear model. So, for example, the Dynamic Linear I model pools only the opinions of the Floating Window models based on a 22-round and a 35-round window. It determines how best to weight the opinions of these two Floating Window models by optimising over the past 26 rounds.

I've also shown the results for the Single Season models - they're labelled 'All of Prev Season' - and for a model that always uses all data from the start of 2006 up to but excluding the current round, labelled 'All to Current'.)

The mean APE results suggest that, for this performance metric at least, models with more data tend to perform better than models with less. The best Dynamic Linear model I could find, for all its sophistication, still only managed to produce a mean APE 0.05 points per game lower than the simple model that used all the data since the start of 2006, weighting each game equally.

It is another Dynamic Linear model that shoots the lights out on the median APE results, however. The Dynamic Linear model that optimally combines the opinions of 19-, 36-, 39- and 52-round floating windows produces forecasts with a median APE of just 22.54 points per game.

The next couple of charts show that this superior performance stems from this Dynamic Linear model's all-around ability - it isn't best in terms of producing the most APEs under 7 points nor in terms of producing the fewest APEs of 36 points or more.

2010 - Floating Window Accuracy with Dyn.png
2010 - Floating Window Accuracy 36 with Dyn.png

Okay, here's the clincher. Do either of the Dynamic Linear models do much of a job predicting handicap winners?

2010 - Floating Window Line with Dyn.png

Nope. The best models for predicting handicap winners are the 11-round floating window model and the model formed by using all the data since the start of 2006. They each manage to be right just over 53% of the time - a barely exploitable edge.

The Moral So Far ...

What we've seen in these results is consistent with what I've found over the years in modelling the footy. Models tend to be highly specialised, so one that performs well in terms of, say, mean APE, won't perform well in terms of median APE.

Perhaps no surprise then that none of the models we've produced so far have been any good at predicting handicap margin winners. To build such a model we need to start out with that as the explicit modelling goal, and that's a topic for a future blog.

Predicting Margins Using Market Prices and MARS Ratings

Imagine that you allowed me to ask you for just one piece of data about an upcoming AFL game. Armed with that single piece of data I contend that I will predict the margin of that game and, on average, be within 5 goals of the actual margin. Further, one-half of the time I'll be within 4 goals of the final margin and one-third of the time I'll be within 3 goals. What piece of data do you think I am going to ask you for?

I'll ask you for the bookies' price for the home team, true or notional, and I'll plug that piece of data into this equation:

Predicted Margin = -49.17 + 96.31 x (1 / Home Team Price)

(A positive margin means that the Home Team is predicted to win, a negative margin that the Away Team is predicted to win. So, at a Home Team price of $1.95 the Home Team is predicted to win; at $1.96 the Away Team is predicted to squeak home.)
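You can check that crossover directly from the equation: at $1.95 the predicted margin is just positive, and at $1.96 it tips just negative.

\[
-49.17 + \frac{96.31}{1.95} \approx +0.22 \text{ points}, \qquad -49.17 + \frac{96.31}{1.96} \approx -0.03 \text{ points}
\]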

Over the period 2006 to 2009 this simple equation has performed as I described in the opening paragraph and explains 23.6% of the variability in the victory margins across games.

Here's a chart showing the performance of this equation across seasons 2006 to 2009.

2010 Predicted v Actual Margin.png

The red line shows the margin predicted by the formula and the crosses show the actual results for each game. You can see that the crosses are fairly well described by the line, though the crosses are dense in the $1.50 to $2.00 range, so here's a chart showing only those games with a home team price of $4 or less.

How extraordinary to find a model so parsimonious yet so predictive. Those bookies do know a thing or two, don't they?

Now what if I was prohibited from asking you for any bookie-related data but, as a trade-off, was allowed two pieces of data rather than one? Well, then I'd be asking you for my MARS Ratings of the teams involved (though quite why you'd have my Ratings and I'd need to ask you for them spoils the narrative a mite).

The equation I'd use then would be the following:

Predicted Margin = -69.79 + 0.779 x MARS Rating of Home Team - 0.702 x MARS Rating of Away Team
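Purely as an illustration of what those coefficients imply - taking 1000 as a representative Rating for both teams, which is my choice of round number rather than a figure from the data - the equation gives:

\[
\text{Predicted Margin} = -69.79 + 0.779 \times 1000 - 0.702 \times 1000 = -69.79 + 77 = 7.21 \text{ points}
\]

So two identically rated teams at that Rating level leave the home team with a predicted edge of about 7 points, the model's implicit home-ground advantage, and each extra Rating point for the home team adds about 0.78 points to the predicted margin.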

Switching from the bookies' brains to my MARS' mindless maths makes surprisingly little difference. Indeed, depending on your criterion, the MARS Model might even be considered superior, your Honour.

The prosecution would point out that the MARS Model explains about 1.5% less of the overall variability in victory margins. The case for the defence would counter that it predicts margins that are within 6 points of the actual margin over 15% of the time - more than 1.5% more often than the bookies' model does - and would also avow that the MARS Model's predictions are 6 goals or more distant from the actual margin less often than are the predictions from the bookies' model.

So, if you're looking for a model that better fits the entire set of data, then percent of variability explained is your metric and the bookies' model is your winner. If, instead, you want a model that's more often very close to the true margin and less often very distant from it, then the MARS Model rules.

Once again we have a situation where a mathematical model, with no knowledge of player ins and outs, no knowledge of matchups or player form or player scandals, with nothing but a preternatural recollection of previous results, performs at a level around or even above that of an AFL-obsessed market-maker.

A concept often used in modelling is that of information. In the current context we can say that a bookie's home team price contains information about the likely victory margin. We can also say that my MARS ratings have information about likely victory margins too. One interesting question is does the bookie's price have essentially the same information as my MARS ratings or is there some additional information content in their combination?

To find out we fit a model using all three variables - the Home Team price, the Home Team MARS Rating, and the Away Team MARS Rating - and we find that all three variables are statistically significant at the 10% level. On that basis we can claim that all three variables contain some unique information that helps to explain a game's victory margin.

The model we get, which I'll call the Combined Model, is:

Predicted Margin = -115.63 + 67.02 / Home Team Price + 0.31 x MARS Rating of Home Team - 0.22 x MARS Rating of Away Team

A summary of this model and the two we covered earlier appears in the following table:

2010 Bookies v MARS.png

The Combined Model - the one that uses the bookie price and MARS ratings - explains over 24% of the variability in victory margin and has an average absolute prediction error of just 29.2 points. It produces these more accurate predictions not by being very close to the actual margin more often - in fact, it's within 6 points of the actual margin only about 13% of the time - but, instead, by being a long way from the actual margin less often.

Its margin prognostications are sufficiently accurate that, based on them, the winning team on handicap betting is identified a little over 53% of the time. Of course, it's one thing to fit a dataset that well and another thing entirely to convert that performance into profitable forecasts.

The Draw's Unbalanced: So What?

In an earlier blog we looked at how each team had fared in the 2010 draw and assessed the relative difficulty of each team's draw by, somewhat crudely, estimating a (weighted) average MARS of the teams they played. From the point of view of the competition ladder, however, what matters is not the differences in the average MARS rating of the teams played, but how these differences, on a game-by-game basis, translate into expected competition points.
Read More

A Place (or Places) To Call Home

It's that time of year again when we need to ruminate over the draw and decide which team, if either, is really the home team for each game. This year these decisions are even more complex than they usually are since, according to the AFL's official home team designations, 8 teams have two home grounds and 2 more have three home grounds across the season. The nomadic teams include 4 that play only a single game at one of their designated home grounds. I don't think it makes sense to designate a venue as home for a team if that team's players would, unaccompanied by team management, need to ask directions to find it.
Read More