Thursday, 9 January 2014

Goal Review (and) Technology

No, not the ludicrous AFL goal review 'technology' they implemented in season 2014, but a review of our own performance against the goals set at the start of the year (from this blog post).

And as a preamble: we set ambitious targets, and as the year progressed it became obvious they were too ambitious.


REVIEWING THE 2013 GOALS:
1. A season 'correct tip' rate of +80%, which equates to 159 for the home and away season, and 166 for the whole year.
      Goal missed, with a return rate of only 70% (146 of 207 tipped).

2. A season score on the Monash Uni system of 1995, which equates to 9.64 points/game.
      Goal missed, with a Monash score of 1881, at 9.1 points per game. (Noting this is a
      'should have been' score, as we missed entering tips while overseas.)


Missing the above two goals was a disappointment, but hindsight suggests they were optimistic, to say the least.
Relative to 2012, we were down on both Monash score and raw tips. The tipsters at Monash were also slightly down on their 2012 results.

3. Following Carlo Monte, let's also shoot for 5 perfect rounds on footytips.com, for 5 free Hungry Jack's Whoppers.
      Goal missed, with only 2 free burgers earned (and consumed).

Again... it seems pretty obvious now that we overstated our expectations at the start of the season.


Technology In Tipping
In terms of overall performance against other tipping modellers, we are happy to say we had a reasonable year, as the table at right attests.

We performed better than most of the models known to us that were used in 2013, including the Swinburne University computer model, which was a nice surprise.

Of note too... the MAFL Online blog appears to use multiple methods to determine winners, so their best and worst (taken from the third graphic in this post) have been used to illustrate their spread relative to ours.


Also, an additional unit of measure was added to our "Tipping Record" side-bar late in the season, which you can see listed as 'MAPE'.
As we are tipping not just winners but also margins, the closeness of each tip is important. The MAPE is the Mean Absolute Prediction Error: the average number of points by which our tipped margin missed the actual margin, so a lower number is better here. We were able to finish off season 2013 with a MAPE of 27.4.
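For anyone wanting to reproduce the calculation, it is just the average of the absolute differences between tipped and actual margins. A minimal sketch in Python, using made-up margins rather than our actual 2013 tips:

    # MAPE (Mean Absolute Prediction Error) over a handful of games.
    # These margins are made up for illustration; they are not our 2013 tips.
    tipped_margins = [12, -8, 25, 3, -40]    # our predicted margins (home score minus away score)
    actual_margins = [20, -15, 1, -10, -33]  # what actually happened

    errors = [abs(tip - actual) for tip, actual in zip(tipped_margins, actual_margins)]
    mape = sum(errors) / len(errors)
    print(f"MAPE: {mape:.1f} points per game")  # lower is better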

Again comparing against the third graphic in this post by the MAFL Online blog (in particular the column labelled 'Mean Absolute Pred Error [Season]'), we fit within their top 7 performances for the year (from 16 models).


So while we underachieved against the objectives set at the beginning of the year, we were on par with other models' performances during season 2013.
Setting goals has been a worthwhile activity, along with measuring against them and benchmarking against the competing systems.


Thanks List:
1. Huge thanks to the mysterious (and non-twitter devotee) 'Carlo Monte' of the WayBeyondRedemption blog for assistance on the benchmarking.
2. Even huge-er thanks to Russ (@idlesummers), without whose help this blog would not exist.

1 comment:

  1. You are more than welcome mate. Glad to see someone has the energy to make good use of the techniques.

    On benchmarking, I suspect there is a limit to how many tips a 'good' model correctly makes, based on the variability of results and the expected margins. If every team were equal you'd not be able to tip better than 50% except via luck. As they are not, the calculated average probability of victory is probably the theoretical upper average limit (that's benchmarking against yourself I guess, but still).
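    To put a rough number on that ceiling (made-up probabilities, purely for illustration): if a model always tips the side it rates above 50%, its expected long-run tip rate is just the average of those tipped probabilities.

        # Toy illustration of the tipping ceiling. Probabilities are made up,
        # not from any real model: p is the assessed chance the home team wins.
        win_probs = [0.55, 0.62, 0.70, 0.80, 0.91, 0.58, 0.66, 0.75, 0.85]

        # Tipping the rated favourite each game means the chance of a correct tip
        # is max(p, 1 - p), so the expected tip rate is the average of those values.
        expected_rate = sum(max(p, 1 - p) for p in win_probs) / len(win_probs)
        print(f"Expected tip rate: {expected_rate:.1%}")  # roughly the model's ceiling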

    That also means it is basically impossible to win a (big) tipping competition with a model, because you're hitting slightly over average, whereas with thousands (even dozens) of competitors, someone, somewhere, is randomly tipping against the tide and winning.

    Best of luck in the year ahead.
