Probability Score as a Predictor of Profitability: A More General Approach

We've spent some time now working with the five-parameter model, using it to investigate what various wagering environments mean for the relative and absolute levels of profitability of Kelly-staking and Level-staking. The course we followed in the simulations for the earliest blogs was to hold some of the five parameters constant and to vary the remainder. We then used the simulation outputs to build rules of thumb about the profitability of Kelly-staking and of Level-staking. These rules of thumb were described in terms of the values of the parameters that we varied, which made them practically useful only if we felt we could estimate quantities such as the Bookie's and the Punter's bias and variability. The exact values of these parameters cannot be inferred from an actual set of bookmaker prices, wagers and results because they depend on knowledge of the true Home team probability in every game. More recent blogs have provided rules based on probability scores, which are directly related to the underlying values of the bias and variability of the bookie or punter that produced them, but which have the decided advantage of being directly measurable.
Read More

Probability Score Thresholds: Reality Intrudes

If you've been following the series of posts here on the five-parameter model, in particular the most recent one, and you've been tracking the probability scoring performance of the Head-to-Head Fund over on the Wagers & Tips blog, you'll be wondering why the Fund's not riotously profitable at the moment. After all, its probability score per game is almost 0.2, well above the 0.075 that I estimated was required for Kelly-Staking to be profitable. So, has the Fund just been unlucky, or is there another component to the analysis that explains this apparent anomaly?
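For concreteness, here's a minimal sketch of how a per-game probability score might be computed. It assumes the log probability score 1 + log2(p), where p is the probability assigned to the eventual winner; if the Fund's scoring is defined differently, substitute that definition - the thresholds quoted in these posts are all per-game figures.

```python
# A minimal sketch of a per-game probability score, assuming the log score
# 1 + log2(p) where p is the probability assigned to the team that won.
# (An assumption: substitute the actual definition if it differs.)
from math import log2

def probability_score(prob_home_win: float, home_won: bool) -> float:
    """Return the score for a single game."""
    p = prob_home_win if home_won else 1 - prob_home_win
    return 1 + log2(p)

# Example: 70% on a home team that won scores ~0.49; 70% on one that lost, ~-0.74.
games = [(0.70, True), (0.70, False), (0.55, True)]
avg_per_game = sum(probability_score(p, won) for p, won in games) / len(games)
print(round(avg_per_game, 3))
```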
Read More

Probability Score as a Predictor of Profitability: Part 2

In the previous blog I came up with some rules of thumb (rule of thumbs?) for determining what probability score was necessary to be profitable when following a Kelly-staking or a Level-staking approach, and what probability score was necessary to favour one form of staking over the other.

Briefly, we found that, when the overround is 106%, Bookie Bias is -1%, Bookie Sigma is 5%, and when the distribution of Home team probabilities broadly mirrors the historical distribution from 1999 to the present, then:

  1. If the Probability Score is less than 0.035 per game then Kelly-staking will tend to be unprofitable 
  2. If the Probability Score is less than 0.014 per game then Level-staking will be unprofitable 
  3. If the Probability Score is less than 0.072 per game then Level-staking is superior to Kelly-staking

Taken together these rules suggest that, when facing a bookie of the type described, a punter should avoid betting if her probability scoring is under 0.014 per game, Level-stake if it's between 0.014 and 0.072, and Kelly-stake otherwise.

For this blog we'll determine how these rules would change if the punter was faced with a slightly more talented and greedier bookmaker, specifically, one with an overround of 107.5%, a bias of 0% and a sigma of 5%.

In this wagering environment the rules become:

  1. If the Probability Score is less than 0.075 per game then Kelly-staking will tend to be unprofitable 
  2. If the Probability Score is less than 0.080 per game then Level-staking will be unprofitable 
  3. If the Probability Score is less than 0.074 per game then Level-staking is superior to Kelly-staking (but is generally unprofitable)

Taken together these rules suggest that, when facing a bookie of the type now described, a punter should avoid betting if her probability scoring is under 0.075 per game and Kelly-stake otherwise. Level-staking is never preferred in this wagering environment because Level-staking is more profitable than Kelly-staking only for the range of probability scores for which neither Level-staking nor Kelly-staking tends to be profitable.

Essentially, the increase in the talent and greed of the bookmaker has eliminated the range of probability scores for which Level-staking is superior and raised the probability score above which Kelly-staking is the recommended (and profitable) approach from 0.072 to 0.075 per game.
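To make the decision rules from both wagering environments concrete, here's a minimal sketch that maps a punter's per-game probability score to a staking recommendation. The threshold values are simply the simulation outputs quoted above, passed in as parameters rather than treated as universal constants.

```python
# A minimal sketch of the staking decision rule described above.
# Thresholds come from the simulations for each wagering environment.

def staking_advice(prob_score_per_game: float,
                   kelly_min: float,
                   level_min: float,
                   kelly_beats_level: float) -> str:
    """Map a per-game probability score to a staking recommendation."""
    if prob_score_per_game < min(level_min, kelly_min):
        return "Don't bet"
    if prob_score_per_game < kelly_beats_level:
        # Level-staking is superior in this range, but only worth doing if profitable.
        return "Level-stake" if prob_score_per_game >= level_min else "Don't bet"
    return "Kelly-stake"

# Environment 1: overround 106%, bias -1%, sigma 5%
print(staking_advice(0.05, kelly_min=0.035, level_min=0.014, kelly_beats_level=0.072))  # Level-stake
# Environment 2: overround 107.5%, bias 0%, sigma 5% (Level-staking never preferred)
print(staking_advice(0.05, kelly_min=0.075, level_min=0.080, kelly_beats_level=0.074))  # Don't bet
```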

Probability Score as a Predictor of Profitability

For the current blog the questions we'll be exploring are: whether a Predictor's probability score has any relevance to its ability to produce a profit; how a Predictor's probability score relates to the bias and variability of its probability assessments; and, for a Predictor whose probability assessments generate a given probability score, whether Kelly-staking or Level-staking is the more profitable approach.
Read More

Estimating Bookie Bias and Variability in Home Team Probability Assessments

This blog is another in the series about simulating the contest between bookmaker and punter (for details see the 1st blog, 2nd blog, 3rd blog, and 4th blog). In these blogs we've estimated the importance of the bias and variability in a bookmaker's home team probability assessments relative to the bias and variability in the punter's assessments.
Read More

To Bet or Not to Bet?

In an earlier blog, using the five-parameter model first discussed here, I summarised the results of simulating 100 seasons played out under each of 1,000 different parameter sets in a pair of rules that described when Kelly-staking tends to be superior to Level-staking, and vice versa. Implicitly, that blog assumed that we were going to wager, so our concern was solely with selecting the better wagering approach to adopt. But there is, of course, a third option that dare not speak its name, and that is not to bet at all. In this blog I'll extend the previous analysis and derive rules for when we should Kelly-stake, when we should Level-stake, and when we should just up stakes and leave.
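As context for what the two staking approaches entail, here's a minimal sketch using the textbook Kelly fraction for a single binary wager at decimal odds. Treat the exact formula as an assumption: the simulations may well use a fractional or otherwise adjusted Kelly variant.

```python
# A minimal sketch contrasting (full) Kelly-staking with Level-staking for a
# single binary wager at decimal odds. Assumes the textbook Kelly fraction
# f* = (p*d - 1) / (d - 1); the blog's simulations may use a variant.

def kelly_fraction(p: float, decimal_odds: float) -> float:
    """Fraction of the bank to wager; zero when there's no perceived edge."""
    edge = p * decimal_odds - 1
    return max(0.0, edge / (decimal_odds - 1))

def level_stake(initial_bank: float, unit: float = 0.01) -> float:
    """Level-staking: always bet the same fixed amount (here 1% of the initial bank)."""
    return initial_bank * unit

# Example: we rate the home team a 60% chance and the bookie offers $1.80.
print(round(kelly_fraction(0.60, 1.80), 4))  # 0.1 -> stake 10% of the current bank
print(level_stake(1000.0))                   # always $10 per bet
```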
Read More

Simulating the Finals (After Week 2 of the Finals)

Another week, another 1,000,000 simulations.

This week it's a relatively brief summary, since there are only four teams remaining and because TAB Sportsbet no longer has markets for teams' chances of making the Grand Final or for the possible Grand Final pairings.

Here's what I have by way of simulation inputs and outputs:


Firstly note that I have the Dogs, who have the higher MARS Rating, as favourites to defeat the Saints. That certainly isn't the way that the TAB Sportsbet bookie is seeing it: in the row labelled "Offer" you can see that he currently has the Saints at $3 for the Flag and the Dogs at $11. Not surprisingly, the simulations suggest the Dogs are very good value at that price - hence the double-asterisk under them.

The Cats too, however, are value in the Flag market, priced as they are at $3. The simulations suggest that anything over about $2.60 represents value for them.

Collingwood, at $2.75, are about $0.45 too short to be attractive, and the Saints, at $3, are way too short to be worthy of a punt.

All four possible Grand Final pairings are near equally likely, with a Cats v Dogs matchup marginally the most likely pairing for the Big One, and a Pies v Saints matchup the least likely.

Simulating the Finals (After Week 1 of the Finals)

Okay, six teams left, and you know the drill.

Here's the probability matrix I'm using this week (and again we're ignoring any home ground benefits):

Running the now customary 1,000,000 simulations using this matrix gives us the probabilities shown in the tables below. The rows labelled "Fair" contain fair value prices and those labelled "Offer" contain the most-recent TAB Sportsbet prices.

Double-asterisks denote those wagers that the simulations suggest represent value. As has been the case for a while now, the simulations suggest that the Dogs represent value in both the Flag and the Plays-in-the-GF market.

As well, the Cats are a value bet at $4 for the Flag; the simulations suggest that any price over about $3.30 for them represents value.

In contrast, Sydney and Freo represent appalling value at only $13 and $41 respectively. You'd instead want about $30 and $125 respectively to feel as though you were obtaining fair value for this pair.

Lastly, a look at the GF Quinellas, where you'll notice that only 9 Grand Final pairings remain possible:

Here, the value combinations are indicated by green shading, and all of the value bets involve the Dogs.

Whether the Dogs are value bets or not all still comes down to the extent to which you believe that the Dogs' recent performances are the best indicators of their likely performance next week and, if they survive, in the weeks that follow.

Simulating the Finals (After Round 22)

I've just run a quick 1 million simulations of the finals using the following probability matrix, which is based on each team's current MARS Rating and assumes that there is no home ground advantage in the finals:

With this matrix and a slab of CPU seconds, I obtain the following probabilities for each team (1) playing in, and (2) winning the Grand Final:

Probabilities shown shaded green are those corresponding to value TAB Sportsbet wagers. Still we find that the TAB bookie is - according to MARS Ratings and my simulations - undervaluing the Dogs. He has them priced at $11 and I estimate them to be about $6.50 chances. That's quite a difference.

I'm also now suggesting that the TAB have the Cats inappropriately rated, so a wager on them at $2.50 represents (slender) value. I rate them value at $2.45 and above.

Finally, a quick look at the GF Quinellas:

Only the Collingwood v Dogs matchup continues to offer value. It's currently priced at $6; I'd be happy to take anything over $5.60.

(I note in passing that the only impossible GF pairings from amongst the eight finalists are Fremantle v Hawthorn and Sydney v Carlton, since these are the Week 1 finals matchups and one team from each of those pairings will be bowing out next weekend.)

Who Left the Dogs Out?

Freo's selectorial shenanigans of last weekend appear to have spooked the TAB Sportsbet bookie, so much so that he's not yet posted any of the line markets.

I expect he'll address this oversight sometime in the next 24 hours, in which case I'll post the week's wagering and tipping details tomorrow night.

In the meantime, here's a follow-up blog on simulating the finals.

You'll recall from a previous blog that there are eight likely team orderings for the finals. They are the following:

In the earlier blog, I reported how little it mattered for any team's flag chances which of these orderings eventuated - no team's probability varied by more than 0.5% in the simulated results across all eight orderings.

What I didn't report there was how much each team's chances of playing in the Grand Final varied across these eight orderings. So, let's have a look at that.

The cells highlighted in green flag the ordering that maximises a particular team's chances of playing in the Grand Final. So, for example, finishing order A - the current ladder order - maximises Hawthorn's chances of running around in the Granny. Cells highlighted in red represent orderings that minimise a team's chances of playing in the Grand Final. St Kilda, for example, would least like ordering B, because it would probably see them play Carlton, who are rated 1,009 on MARS, in Week 2 of the Finals.

Though individual teams' chances of playing in the Grand Final do not vary by a great deal across the eight most likely orderings, we do see larger differences than we saw for teams' flag chances - of the order of 1-2% for teams likely to finish in positions 3 to 8.

For Sydney, Freo, Hawthorn and Carlton - all teams with small absolute chances of winning the flag - differences of this magnitude are quite material. For example, Carlton's chances of playing in the Grand Final increase by almost 50% (from 3.6% to 5.3%) as we move from ordering F to ordering H.

In the earlier blog we found that the Dogs represented the only value wager in the flag market. TAB Sportsbet also offer a market for the Grand Final pairing. The following table sheds light on the value in this market. It shows the probability for each of the 28 possible Grand Final pairings, for each of the eight most likely orderings.

At the right of the table the range of fair value prices for each pairing is shown, based on the smallest and largest probability of that pairing occurring across the eight most likely final orderings.

Next to this range I've provided the latest prices on offer from TAB Sportsbet and flagged with a double-asterisk any pairing that represents value according to my simulated results.

As in the flag market, it's wagers involving the Dogs that seem to represent the greatest value in the GF pairings market: Grand Finals that have Geelong, Collingwood or Sydney facing the Dogs all look attractively priced based on current MARS Ratings.

Clearly, the TAB Sportsbet bookie rates the Dogs' chances very differently to MARS. He's marked the Dogs down significantly on the strength of their two most recent outings. The Dogs have also seen their MARS Rating drop as a consequence of these two poor showings - by over 11.5 Ratings Points - but they still lay claim to an impressive 1,031 MARS Rating.

It all comes down to how much weight you place on recent performance versus season-long pedigree.

Final Ladder Positions and Teams' Flag Prospects After Round 21

The teams that will comprise the final 8 are now determined - barring Lazarian performances - but their final order is not.

Here's what the latest simulations show about each team's likely ladder finish:

With no real interest in who'll make the eight, I wondered instead how much it matters if, say, Sydney finishes 7th rather than 5th.

To answer this question I needed a probability matrix showing, for each possible pairing of teams in the 8, the probability of victory for each team. To create this matrix I used the current MARS Ratings for each team and the following equation:

Probability of Victory = exp((0.712*(Own MARS Rating - Opponent's MARS Rating))/22.3) / (1+exp((0.712*(Own MARS Rating - Opponent's MARS Rating))/22.3))

This equation is based on equations I derived in earlier blogs and, importantly, assumes that there is no home ground advantage in the finals. It provides this probability matrix:
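By way of illustration, here's a minimal sketch of how such a matrix can be computed from that equation. The Ratings below are placeholders for illustration only, not the actual MARS Ratings behind these simulations.

```python
# A minimal sketch of building the head-to-head probability matrix from the
# logistic equation above (no home ground advantage). The ratings are
# illustrative placeholders, not the actual MARS Ratings used here.
from math import exp

def win_probability(own_rating: float, opp_rating: float) -> float:
    """Probability of victory per the equation above."""
    x = 0.712 * (own_rating - opp_rating) / 22.3
    return exp(x) / (1 + exp(x))

ratings = {
    "Collingwood": 1040.0, "Geelong": 1035.0, "St Kilda": 1020.0,
    "Bulldogs": 1031.0, "Sydney": 1005.0, "Fremantle": 1000.0,
    "Hawthorn": 1010.0, "Carlton": 1009.0,
}

matrix = {
    team: {opp: win_probability(r_team, r_opp)
           for opp, r_opp in ratings.items() if opp != team}
    for team, r_team in ratings.items()
}

print(round(matrix["Collingwood"]["Carlton"], 3))
```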

In the 100,000 simulations used to create the first chart there were only eight relatively likely finishing orders for the top 8 teams. For the next round of simulations I played out the 4 weeks of the finals 100,000 times for each of these eight finishing orders and recorded which team won the Grand Final in each simulation.
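For readers curious about the mechanics, here's a minimal sketch of how one run of the four weeks of the finals might be played out, assuming the standard AFL final-eight fixture and a win-probability function like the one sketched above. It's an assumed reconstruction, not the code actually used to produce these results.

```python
# A minimal sketch of playing out the finals under the standard AFL
# final-eight fixture. An assumed reconstruction for illustration only.
import random
from collections import Counter

def play(a, b, p_win):
    """Return the winner of a single game, sampling from p_win(a, b)."""
    return a if random.random() < p_win(a, b) else b

def simulate_finals(order, p_win):
    """order is the top 8 in ladder order; returns the simulated premier."""
    t1, t2, t3, t4, t5, t6, t7, t8 = order
    # Week 1: Qualifying and Elimination Finals
    qf1_w = play(t1, t4, p_win); qf2_w = play(t2, t3, p_win)
    ef1_w = play(t5, t8, p_win); ef2_w = play(t6, t7, p_win)
    qf1_l = t4 if qf1_w == t1 else t1
    qf2_l = t3 if qf2_w == t2 else t2
    # Week 2: Semi Finals (QF losers get the double chance)
    sf1_w = play(qf1_l, ef1_w, p_win)
    sf2_w = play(qf2_l, ef2_w, p_win)
    # Week 3: Preliminary Finals
    pf1_w = play(qf1_w, sf2_w, p_win)
    pf2_w = play(qf2_w, sf1_w, p_win)
    # Week 4: Grand Final
    return play(pf1_w, pf2_w, p_win)

# Example with a dummy 50/50 probability function and the current ladder order.
ladder = ["Collingwood", "Geelong", "St Kilda", "Bulldogs",
          "Sydney", "Fremantle", "Hawthorn", "Carlton"]
flags = Counter(simulate_finals(ladder, lambda a, b: 0.5) for _ in range(100_000))
print(flags.most_common(3))
```

Swapping the dummy 50/50 function for one driven by the MARS-based matrix, and repeating the exercise for each of the eight finishing orders, reproduces the structure of the simulations described here.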

The results of this are summarised in the table below. As you move down the rows the relative likelihood of that particular set of ladder positions shrinks, from about 27% for the top row (which, as well as being the most likely finishing order is also the current order) to about 1% for the bottom row.

What's startling about this table is how little variability there is across the rows - these simulation results suggest that, this season, a team's finishing order will have very little impact on its absolute prospects of winning the flag.

Roughly speaking, Geelong are about a 39% chance of winning the flag, Collingwood are at about 31%, St Kilda's about 11%, the Dogs about 15%, and the rest are all around 0.5-1.5% each.

The lack of variability between the rows, I'd suggest, is largely because positions 1 to 4 are already determined - so there's no source of variability there - and because the teams in positions 5 to 8, based on their relative MARS Ratings, have little chance of toppling the teams above them. None of the teams in positions 5 to 8 has a probability greater than 38% of defeating a team from the top 4 even once, so doing it two or three times - which is what they'll need to do to win the flag - seems very unlikely.

If you believe these simulations, then the fair value prices are as follows (the current TAB Sportsbet prices are in brackets and any value bets are in bold):

  • Collingwood $3.25 ($3)
  • Geelong $2.55 ($2.50)
  • St Kilda $8.95 ($5)
  • Bulldogs $6.75 ($9)
  • Sydney $101 ($26)
  • Fremantle $336 ($51)
  • Hawthorn $74 ($15)
  • Carlton $87 ($51)

So, only the Dogs at $9 offer any value.
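For anyone wanting to replicate the value calls, here's a minimal sketch of how fair-value prices and value flags follow from simulated probabilities. The probabilities below are simply back-calculated from the fair prices listed above, for illustration.

```python
# A minimal sketch of turning simulated flag probabilities into fair-value
# prices and value flags. Probabilities are back-calculated approximations
# of the fair prices listed above, for illustration only.

def fair_price(prob: float) -> float:
    """Fair decimal price is the reciprocal of the assessed probability."""
    return 1.0 / prob

def is_value(prob: float, offered: float) -> bool:
    """A wager is value when the offered price exceeds the fair price."""
    return offered > fair_price(prob)

assessments = {          # (simulated probability, TAB Sportsbet price)
    "Geelong":     (0.392, 2.50),
    "Collingwood": (0.308, 3.00),
    "Bulldogs":    (0.148, 9.00),
    "St Kilda":    (0.112, 5.00),
}

for team, (p, offered) in assessments.items():
    flag = "VALUE" if is_value(p, offered) else "no value"
    print(f"{team}: fair ${fair_price(p):.2f}, offered ${offered:.2f} -> {flag}")
```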

Final Ladder Positions: Simulations After Round 20

As I type this late on Sunday evening all ladder-related markets are suspended on TAB Sportsbet, so I'll necessarily be excluding any discussion of value bets in these markets. I'll post subsequently on this once the markets are up.

In the meantime, here's what my simulations now make of each team's chances.

Minor Premiership Market

As conclusions go, this one's now about as foregone as you can get.

Simulations suggest that the Pies' chances for the minor premiership are now greater than 98%, up about 0.7% on last week, making their fair-value price $1.02.

The Cats remain the only other team with any hope of the minor premiership, and their fair-value price is now over $63.

Top 4 Market

Geelong and St Kilda joined Collingwood this week as the teams with assured Top 4 spots, leaving the Bulldogs and Freo to duel for the remaining spot.

In reality, though, Fremantle have only the slightest chance of grabbing 4th since, to do so, given their inferior percentage, they'd probably need to beat both Hawthorn and Carlton and hope the Dogs lose to both Sydney and Essendon. This scenario has a probability of less than 1% according to the simulations.

Fair-value prices for the Dogs and Fremantle are $1.01 and $117 respectively.

Finals Market

Carlton and Sydney this week cemented their finals berths and Hawthorn just about did the same. The Hawks are now 92% chances, making their fair-value price $1.09.

The Roos' loss to the Saints lopped over 20% from their finals chances, leaving them with a probability of just under 6% of playing in 3 weeks' time. Melbourne also ran a scythe through their finals aspirations in going down to the Hawks and now sport only a 2% chance of playing in September.

Fair-value for the Roos in this market is now $17.30 and for the Dees it's $51.20.

The Spoon Market

As the Pies have one hand on the minor premiership, so the Eagles have a mortgage on the Spoon. They're now 99% chances to take home the cutlery.

Both the Lions and Richmond have mathematical chances to grab the Spoon for themselves, but those chances both start with a zero before the decimal point.

Fair-value for the Eagles is $1.01, for Richmond it's $150, and for the Lions it's $603.

Here are the full results of the new simulations.

With just two rounds to go, the range of possible ladder finishes for each team has narrowed dramatically, as you can see from the table below.

Most teams now have realistic chances of finishing in one of two or three spots. Only Carlton, Melbourne and Port Adelaide have four different potential finishes each with a probability of around 10% or more.

(Update @ 2pm on Monday:

Only one ladder-related market remains available on TAB Sportsbet, the Finals market, and it has Hawthorn at $1.02, the Roos at $11, and Melbourne at $51. At those prices there's no value on offer.)

Final Ladder Positions: Simulations After Round 19

Two of the markets for final ladder positions are no longer being offered by TAB Sportsbet. Collingwood's victory over the Cats has, it seems, all but determined the destination of the minor premiership, and the Lions' last-gasp win over the Eagles has done something similar for the Spoon.

Minor Premiership Market

Across the 100,000 simulations run this week, four teams finish in 1st place in at least one simulation. However, in 98% of those simulations it's the Pies that take the minor premiership. That's why there's no longer a market being offered on this result.

The Cats finish 1st in about 2% of the simulations, while St Kilda and Freo each take out the minor premiership in so small a fraction of the simulations that we might as well call it none.

Top 4 Market

Collingwood is assured of a Top 4 spot - in none of the simulations could my script create a scenario that dumped the Pies out of the first four spots - and Geelong is virtually assured of another.

The Saints' and the Dogs' chances are also very good. In the simulations they missed the top 4 only about 3 or 4% of the time. When one of them did miss it was Fremantle that grabbed the spot. The simulations suggest Fremantle is about a 6% chance of finishing in the top 4.

Based on the simulation results there's a tiny edge in backing the Saints at $1.03, which is the price that the TAB's currently offering. I wouldn't be rushing though as the fair price is only about $1.02.

Finals Market

Hawthorn's loss to Sydney significantly reduced their chances of making the finals, knocking 18 percentage points from their probability and dropping it to 73%. Losses by Port Adelaide and Adelaide all but eliminated their already-slim chances of participating in September.

The probability surrendered by Hawthorn, Port and the Crows was transferred to the Swans, whose probability rose 10 points to 86%, to the Blues, whose probability rose 7 points to 94%, and to the Roos, whose probability rose 9 points to 26%.

Amongst the five teams on which TAB Sportsbet is willing to field bets for a top 8 finish there's value for only three of them - Carlton, Sydney and Hawthorn - and the edge on each is again small.

The Spoon Market

West Coast took out the Spoon in about 94% of the simulations, Richmond in about 5%, and the Lions in a bit less than 1%. The TAB Sportsbet bookie doesn't feel inclined to frame a market based on these lopsided chances so the market no longer exists.

Here are the full results of the new simulations.

Next, a summary of the same 100,000 simulations showing each team's most likely ladder finish, other relatively high-probability potential finishes for that team, and the best and worst ladder positions that the team occupied in at least one of the 100,000 simulations.

Notice how much more compact are the distributions of final ladder positions for the teams currently towards the top or the bottom of the ladder than they are for the teams mid-ladder such as the Roos, Melbourne, Adelaide, the Dons and Port. Melbourne, for example, finished as high as 6th in about 3% of the simulations and as low as 14th in a very small fraction of them.

To help you with your own prognostications about the makeup of the final 8, here's a table showing which teams meet in the remaining 3 rounds and the most likely result as determined by the MARS-Rating based model I've used for the simulations:


Finding Value in the Markets for Final Ladder Positions

In an earlier blog I used a simple method to create one view of the season's final home-and-away ladder. Those projections had Collingwood finishing third because it was projected to lose narrowly to the Cats in Round 19 while the Cats and the Saints were projected to win all their remaining games. Three of the Cats' projected wins and two of the Saints' were by only a handful of points, however, so it was easy to imagine scenarios in which the ordering at the top was quite different and where, for example, the Pies finish top of the ladder.
Read More

Projecting the Final Ladder for 2010 (After Round 16)

It's late July and normally it'd be time for the first whispers of tanking to emerge, but the rejigging of next year's draft process to accommodate the introduction of Gold Coast and GWS has reduced the reward for sustained ineptitude so effectively that, this year, I doubt there'll be much talk of it (or, come to that, much compelling evidence of its existence). Another popular late-July activity is speculating about the finals, and I think it's time we did a little of that.
Read More