2017 - Round 8 : Only Haggling Over the Margins
Were it not for Home Sweet Home, this week we'd have an all-Tipster Disagreement Index of zero across the Head-to-Head Tipsters.
Even Home Sweet Home's contrarian behaviour looks mostly perfunctory, since it has gone against the tide in only two games. (Put another way, of course, this means that the home team is favourite in 7 of the 9 games. It's also true that, notwithstanding last weekend's results, the favourite in every game this week is the team higher on the competition ladder.)
As a result, the all-Tipster Disagreement Index has come in at just 5%, which is easily the lowest single-round value this season.
There's a little more disagreement amongst the Margin Predictors, but only about the sizes of the victories that the favourites will record.
C_Marg, for the third consecutive round, has the largest mean absolute deviation (MAD), with MoSSBODS_Marg and MoSHBODS_Marg, in their customary different-but-not-extravagantly-different guises, filling the 2nd and 3rd spots.
The game generating the widest range of opinions this week is the Gold Coast v Port Adelaide contest, where the MAD is 6.1 points per Predictor and the predicted victory margins for Port Adelaide range from 11 to 30 points, book-ended by Bookie_9 and MoSSBODS_Marg.
There's a larger range (28 points) but a smaller MAD (5.1 points per Predictor) for the Adelaide v Melbourne game, where MoSHBODS_Marg has the Crows winning by just 27 points and C_Marg has them winning by 55 points.
The all-Predictor MAD is 4.2 points per Predictor per game, which is the 3rd-lowest value this season.
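For anyone wanting to replicate figures like these, here's a minimal sketch of the per-game MAD calculation, assuming that it's the mean absolute deviation of each Margin Predictor's forecast from the all-Predictor mean forecast for that game (the exact definition used for the published MADs may differ slightly). The margins below are illustrative only, not the actual Round 8 forecasts.

```python
import numpy as np

def game_mad(forecast_margins):
    """Mean absolute deviation of a game's forecast margins from their all-Predictor mean."""
    margins = np.asarray(forecast_margins, dtype=float)
    return np.mean(np.abs(margins - margins.mean()))

# Illustrative (not actual) margin forecasts for one game, one per Margin Predictor,
# all favouring the same team.
example_margins = [11, 14, 17, 19, 21, 22, 25, 27, 30]

print(round(game_mad(example_margins), 1), "points per Predictor")
```

The all-games, all-Predictor figure is then just this per-game quantity averaged across the round's nine games.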
Amongst the Head-to-Head Probability Predictors, C_Prob stands alone, recording, like C_Marg, the highest MAD amongst its peers for the third week in succession. It has either the highest or the lowest probability estimate in all nine games, and its estimate of the Tigers' chances, at just 59%, has contributed significantly to that game ending up with the highest MAD for the week, at 6.2% points per Predictor.
In six of the contests, however, the all-Predictor MAD is 4% points or less, which has dragged the all-games, all-Predictor MAD down to just 3.8% points per game per Predictor, also the 3rd-lowest value of the season.
DISAGREEMENT AND PERFORMANCE
Each week I talk about Disagreement Indices (DIs) and MADs and provide an historical record of them as below.
To date, I've not related those metrics to forecaster performance, an omission I'll remedy today by providing the correlations between each forecaster's DI/MAD in the seven rounds so far and its accuracy/MAE/probability score in the corresponding rounds.
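As a rough sketch of the calculation involved: for each forecaster we have one disagreement figure and one performance figure per round, and the relationship reported is just the correlation between those two series across the seven rounds. The numbers below are made up purely for illustration.

```python
import numpy as np

# Illustrative (not actual) per-round figures for a single Margin Predictor
# across the seven completed rounds.
round_mad = np.array([4.5, 6.0, 3.8, 5.2, 7.1, 4.0, 5.5])        # disagreement (points per game)
round_mae = np.array([27.0, 31.5, 25.8, 29.0, 33.2, 26.5, 30.1])  # error (points per game)

# Pearson correlation between per-round disagreement and per-round error.
# A positive value means the Predictor has tended to do worse (higher MAE)
# in rounds where it disagreed more with its peers.
correlation = np.corrcoef(round_mad, round_mae)[0, 1]
print(round(correlation, 2))
```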
Amongst the Head-to-Head Predictors we see that all correlations are negative, meaning that these Tipsters have tended to be less accurate the more they've disagreed with the group in a given round. This is less true for Home Sweet Home (its correlation is less negative), largely because it's generally done poorly regardless of how much it's agreed or disagreed in any given week.
MoSHBODS_Marg, by contrast, has tended to do worse the more it's disagreed with the herd.
The picture for the Margin Predictors is more interesting, with the two MoS, two RSMP, and C_Marg all tending to record lower MAEs the lower their MADs (and vice versa) in a given week. MoSSBODS_Marg has had the strongest positive relationship between its MAD and its ultimate MAE. In other words, it's done best when it's tended to agree with the crowd.
ENS_Linear's forecasts have shown the opposite behaviour: its MAEs have been lower when its MADs have been higher (and vice versa), so it has done best when it's been more contrarian. The same is true for Bookie_3 and Bookie_9 (and, to a much lesser extent, for Bookie_Hcap and Bookie_LPSO).
Looking lastly at the Head-to-Head Probability Predictors, we again find negative correlations across the board, which tells us that lower levels of disagreement have tended to produce higher (i.e. better) probability scores. This has been especially the case for Bookie_RE and least of all for Bookie_LPSO.
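The scoring rule itself isn't spelled out here, but as a hedged sketch, the snippet below uses one common log-based probability score with the "higher is better" property just described; treat the exact formula as an assumption for illustration rather than a statement of the metric actually used.

```python
import math

def log_probability_score(p_winner):
    """An assumed log-based probability score: 1 + log2(p assigned to the eventual winner).

    Higher is better: a 50% coin-flip assessment scores 0, and perfect
    confidence in the winner scores 1.
    """
    return 1 + math.log2(p_winner)

# A Predictor that gave the eventual winner a 70% chance scores about +0.49;
# one that gave it only 40% scores about -0.32.
print(round(log_probability_score(0.70), 2))
print(round(log_probability_score(0.40), 2))
```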
Overall, then, we tend to find, as we might expect, that conformity has been associated with higher accuracy, lower MAEs, and higher probability scores, and that only a handful of Margin Predictors (most notably Bookie_9, Bookie_3, and ENS_Linear) have largish correlations with the opposite sign.
It's important to note though that we're only seven rounds into the season, so each of the correlations shown here is based on limited data. We'll check back on the numbers later in the season when we'll be able to draw firmer conclusions.
WAGERS
MoSSBODS and MoSHBODS have found enough to disagree with the bookmakers about this week to come up with 7 head-to-head and 4 line wagers.
Those bets put just under 15% of the original Head-to-Head Fund at risk, and just over 6% of the original Line Fund at risk, meaning that a set of worst-case outcomes would knock just over 5c off the price of the Overall Portfolio.
The range of outcomes is greatest in the Hawthorn v Brisbane Lions game, where a Lions win would add 2c to the value of the Overall Portfolio, but a Lions loss by about 8 goals or more would reduce its value by 1c. In the Tigers v Dockers and Suns v Power games, the ranges are almost as large.
All up, a perfect set of results would boost the Overall Portfolio by almost 7c.
Lastly, in preparation for the framing of the over/under markets, here are MoSSBODS' and MoSHBODS' opinions.
As usual, commentary will be provided later in the week, and please note that only MoSSBODS' opinion matters in terms of the Overs/Unders Fund wagering.