2012: Naive MARS Revisited
It seems obvious that some teams' MARS rankings remain elevated mostly because of the healthy carryover Ratings Points they dragged with them from 2011. Late last year I looked at a modified version of MARS that I called Naive MARS, which sets every team's initial Rating to 1,000, unlike Traditional MARS, which sets each team's initial Rating to 530 plus 47% of its final Rating from the season before.
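As a concrete illustration, here's a minimal sketch of the two initialisation rules. The function names and the example Rating are mine; only the 1,000 and "530 plus 47%" rules come from the definitions above.

```python
def naive_mars_initial_rating() -> float:
    """Naive MARS: every team begins the season on 1,000 Rating Points."""
    return 1000.0

def traditional_mars_initial_rating(final_rating_prev_season: float) -> float:
    """Traditional MARS: start at 530 plus 47% of the team's final
    Rating from the previous season."""
    return 530.0 + 0.47 * final_rating_prev_season

# A team that finished on exactly 1,000 restarts on exactly 1,000
# (530 + 0.47 x 1,000), so what carries over into the new season is
# 47% of a team's surplus or deficit relative to 1,000.
print(traditional_mars_initial_rating(1040.0))  # 1018.8
```

That 47% of a team's surplus is precisely the carryover Ratings Points referred to above.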
The difference between the team rankings produced by Naive MARS and those produced by Traditional MARS is one way of quantifying the effects of this carryover policy. That information appears in the final two columns of the table below, which shows the position as at the end of Round 9.
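For concreteness, here's a minimal sketch of how those final two columns can be calculated. The team names and rankings below are entirely made up; the genuine values appear in the table itself.

```python
# Hypothetical rankings for illustration only; the genuine Traditional
# and Naive MARS rankings appear in the table below.
traditional_rank = {"Team A": 1, "Team B": 6, "Team C": 12}
naive_rank       = {"Team A": 5, "Team B": 7, "Team C": 9}

# A positive difference means a team is ranked more favourably by
# Traditional MARS than by Naive MARS, consistent with that team
# still benefiting from Ratings Points carried over from 2011.
carryover_effect = {team: naive_rank[team] - traditional_rank[team]
                    for team in traditional_rank}
print(carryover_effect)  # {'Team A': 4, 'Team B': 1, 'Team C': -3}
```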
As you can see, the largest differences in rankings between Traditional and Naive MARS are those for Adelaide, Collingwood, Essendon, Geelong, Hawthorn and Sydney. In every case, the rankings of Naive MARS are more consistent with those of the Colley, Massey and ODM Systems (see the columns to the left of those for Traditional and Naive MARS), which are all Rating Systems that also use only the results from the current season, at least in the way I've been using them on MAFL.
Whenever the Line Fund wagers on Collingwood, Geelong or Hawthorn, or wagers against Adelaide, Essendon or Sydney, the results of these bets are to some extent an assessment of the wisdom of Traditional MARS' carryover policy. So far this season the Line Fund is 4 and 4 on such wagers, which doesn't make the case either way.
The colour-coding in the table highlights the rankings that are outright highest (in green) and outright lowest (in red) across all the Rating Systems shown. Note that Colley has the extreme ranking for 5 teams, Massey for just 2, ODM for no teams at all, Traditional MARS for a whopping 10, and Naive MARS for just 3. More evidence, then, of the outlying nature of Traditional MARS team rankings.
That said, it's remarkable how strongly positive the correlation is between the underlying team Ratings under the two MARS variants, even after only a single round's results in 2012.
At that point (see the table at left) the correlation already stood at +0.52, testimony to the at least partial persistence of form from one season to the next.
By the end of the 3rd round the correlation had reached +0.76, meaning that almost 60% of the variability in team Ratings was common to both methods (the shared proportion being the square of the correlation: 0.76² ≈ 0.58).
Now, after just nine completed rounds, the correlation is almost +0.90, well on the way to the +0.99 we saw when we made the same comparison at the end of season 2011.
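For anyone wanting to replicate the comparison, a minimal sketch follows. The Ratings vectors here are made up for illustration; the real calculation uses the actual Traditional and Naive MARS Ratings for all teams at the relevant round.

```python
from statistics import correlation  # requires Python 3.10+

# Made-up end-of-round Ratings for six teams under each variant; the
# genuine figures were +0.52 after Round 1, +0.76 after Round 3 and
# close to +0.90 after Round 9.
traditional = [1035.2, 1012.7, 998.4, 1021.9, 975.3, 1004.1]
naive       = [1022.8, 1018.5, 995.1, 1010.3, 981.6, 1001.7]

r = correlation(traditional, naive)  # Pearson product-moment correlation
print(f"r = {r:+.2f}; shared variability (r squared) = {r**2:.0%}")
```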