Predictions 2014-2015
Re: Predictions 2014-2015
Rob, I am talking about the guy who commented on your article with a set of predictions of his own; I didn't make that clear, sorry. I was also initially confused about whether your predictions were included, because I was looking for a PPS label instead of Bobb, but I eventually figured that part out.
Re: Predictions 2014-2015
Relative to b-r.com's Forecast, and relative to last year's pythagorean:
http://www.basketball-reference.com/fri ... f_prob.cgi
Code: Select all
      err  <'14py         err  <'14py
crow 6.30 2.65 bobb 7.58 1.37
myst 6.58 2.37 itca 7.61 1.35
AJb1 6.61 2.35 eW 7.78 1.17
bbs 6.76 2.19 fpli 7.79 1.17
HDon 6.76 2.19 AJb2 7.80 1.16
atc 7.05 1.90 snd1 7.80 1.15
v-0 7.10 1.85 DrP 7.80 1.15
ncs 7.36 1.59 14py 8.96 .00
I'm glad to be out of the bottom quartile; but it's really a 5-way tie for last place.
The season's about 44% done.
Relative to the average of our 15 sets of predictions, teams that project to miss by the most: overachieving teams on the left, unders on the right.
Code: Select all
over tm avg proj under tm avg proj
17 Mil 26 43 -16 Cle 59 43
13 Atl 44 56 -16 Min 33 18
8 GSW 54 63 -14 NYK 35 21
8 Tor 47 56 -13 Okl 54 41
8 Por 48 56 -11 Mia 44 34
7 Sac 29 36 -9 SAS 57 47
5 Uta 28 33 -8 Cha 40 32
5 Bos 29 35 -5 LAC 57 52
5 Was 45 50 -1 Den 38 37
5 LAL 26 30 -1 Brk 37 36
4 NOP 39 43 -1 Phx 46 45
3 Orl 27 30 -1 Ind 38 37
3 Mem 49 52 0 Phl 16 16
2 Dal 51 54 0 Det 34 34
2 Chi 50 52
2 Hou 49 50
Re: Predictions 2014-2015
Thanks for the second table. It would be easy, and common I'd think, to say coaching performance was a big part of the top 6 overachievers. That case is probably made less often for the worst 6 underachievers; there are reasons there too, but maybe not enough to explain away the whole gap.
Among the predictors in the contest, the average error runs only about plus or minus 12% around the mean, from absolute best to worst. So one could look at it as a range between best and 75% of best, which is not nothing, if best is good or very good.
Re: Predictions 2014-2015
Will there be a declared machine learning entrant in future?
Re: Predictions 2014-2015
Crow wrote: Will there be a declared machine learning entrant in future?
I think machine learning may not be optimal for these predictions because it doesn't get the opportunity to continuously learn from data. If the predictions could be updated throughout the season, then a machine learning algorithm could keep learning from data during the season to improve its predictions.
But that actually brings me to another question. I read through this thread and noticed that many of you used "bottom-up" methods to make predictions: you used some system to value players, and then you estimated minutes for each player. Although the current prediction challenge is a good way to test systems, it also penalizes people who forecast minutes poorly.
Would anyone be interested in doing an ongoing retrodiction challenge? I'm not sure how often people update their own statistics, but it would be a cool challenge for those who do frequent updates. Suppose you update your rating system weekly. Every week, you could save your ratings somewhere (CSV on Dropbox, Google Sheets, etc.). I could write a script that takes these ratings right before the start of every game and saves them. Then the next morning, it could take the actual minutes played by each player and calculate the expected point differential. We could keep track of the RMSE between actual and expected point differential.
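To make that concrete, here's a minimal sketch in Python of what the nightly script might do. Everything specific is an assumption for illustration -- the CSV layout, the points-per-48 rating convention, and the 240 player-minute weighting -- not anyone's actual system.
Code: Select all
import numpy as np
import pandas as pd

# Hypothetical layout: each entrant publishes a CSV of player ratings,
# expressed as points per 48 minutes above league average.
def expected_differential(ratings_csv, minutes_df):
    """Combine pre-game ratings with the minutes actually played to get
    an expected point differential for one team in one game."""
    ratings = pd.read_csv(ratings_csv)             # columns: player, rating
    lineup = ratings.merge(minutes_df, on="player")
    # Weight each per-48 rating by the player's share of the ~240
    # player-minutes a team logs in a regulation game.
    return (lineup["rating"] * lineup["minutes"] / 240.0).sum()

def running_rmse(expected, actual):
    """The challenge's running score: RMSE between expected and actual
    game point differentials."""
    err = np.asarray(expected) - np.asarray(actual)
    return float(np.sqrt(np.mean(err ** 2)))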
Does something like this already exist, or would any of you be interested?
Re: Predictions 2014-2015
I asked about updated ratings just after the season started and no one responded.
Twice, big retrodiction or prediction tournaments have been discussed but never completed. Is Neil still going to pursue it? It is after New Year's and no sign yet. If you want to try, or wait and try, best wishes -- and include one or more blended metrics.
The machine learner could of course run their tests on many back seasons -- something that some, but probably not all, of the manual predictors did.
Re: Predictions 2014-2015
I think it'd be cool to add Arturo's projections to the group.
https://arturogalletti.wordpress.com/20 ... r-2014-15/.
(his final pre-season predictions are down at the bottom of the post)
Re: Predictions 2014-2015
Arturo loves to show all his work. Generally a good thing, but maybe some of it could be trimmed or hidden unless clicked.
Is the boxscoregeek alliance "breaking up" or "evolving"?
If you are certifying that the projections at the bottom of the article are from pre-season, then yeah, let's see how they are doing. My memory is that he has not had an above-average performance in previous years.
Re: Predictions 2014-2015
I'm not a general fan of the BoxScore Geek / Wins Produced stuff, but I like Arturo because he seems less dogmatic about Wins Produced and more interested in tinkering to find answers to his questions.
Re: Predictions 2014-2015
Approaching the midpoint of the season:
Code: Select all
      err  <'14py         err  <'14py
AJb1 6.42 2.48 eW 7.63 1.27
crow 6.46 2.43 ncs 7.64 1.26
myst 6.71 2.19 AJb2 7.67 1.23
atc 6.87 2.03 itca 7.78 1.12
v-0 6.98 1.92 DrP 7.82 1.07
HDon 7.04 1.86 fpli 8.18 .72
bbs 7.06 1.84 snd1 8.19 .71
bobb 7.43 1.47 14py 8.90 .00
Re: Predictions 2014-2015
nbacouchside wrote: I think it'd be cool to add Arturo's projections to the group.
https://arturogalletti.wordpress.com/20 ... r-2014-15/.
From that link:
One of the more interesting things about models is that a descriptive metric is not necessarily a predictive metric. ... Dave took this model and used the season total data to build a robust explanatory model.
He's a bit careless in switching "descriptive" and "explanatory". "Describing" something isn't the same as "explaining" it, any more than a blind man feeling up one part of an elephant can explain how the animal lives.
If you have truly Explained something, you're most of the way toward Predicting its future behavior.
Re: Predictions 2014-2015
I think those are fairly standard terms for describing (regression type) models. They aren't meant to be read colloquially. Kind of like how the stuff on one side of a regression can be called the independent variable, predictor variable, explanatory variable, controlled variable, manipulated variable.... even though in any given circumstance none of those labels may apply.
Re: Predictions 2014-2015
So the variables one uses merely "explain" how one arrives at the resultant value? Yet we should not presume that value has any, uh, value?
What then does it mean that someone has a "robust explanatory model"?
If it "robustly" explains player values, then it should also predict future player and team performance.
If it merely distributes player performance to correlate to team performance -- yet when players are shuffled among teams, the correlations drop precipitously -- then what has been done "robustly"?
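A toy illustration of the gap at issue here, on purely synthetic data (all numbers invented): a metric fit to the season it describes can look perfect in-sample, while its predictive value is capped by how much of that season was repeatable skill rather than noise.
Code: Select all
import numpy as np

rng = np.random.default_rng(0)

# 30 teams, two seasons: a stable skill component plus season-level noise.
skill = rng.normal(0.0, 3.0, 30)
season1 = skill + rng.normal(0.0, 4.0, 30)
season2 = skill + rng.normal(0.0, 4.0, 30)

# "Descriptive": a metric tuned to season 1 matches season 1 by construction.
r_describe = np.corrcoef(season1, season1)[0, 1]   # exactly 1.00
# "Predictive": the same numbers against next season's results fall toward
# var(skill) / (var(skill) + var(noise)) = 9 / 25 = 0.36.
r_predict = np.corrcoef(season1, season2)[0, 1]

print(f"descriptive r = {r_describe:.2f}, predictive r = {r_predict:.2f}")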
Re: Predictions 2014-2015
Mike, could you share RMSE values too? RMSE is more reliable than average absolute error in this kind of test, IMO.
Re: Predictions 2014-2015
Sure thing: RMSE -- root mean squared error -- is the square root of (the average of the squared errors).
http://www.basketball-reference.com/fri ... f_prob.cgi
Code: Select all
     RMSE  <'14py        RMSE  <'14py
AJb1 7.92 3.49 HDon 8.93 2.48
crow 7.93 3.48 ncs 9.20 2.21
myst 8.36 3.05 bobb 9.32 2.09
atc 8.49 2.92 AJb2 9.55 1.86
itca 8.51 2.90 fpli 9.58 1.83
bbs 8.52 2.89 eW 9.81 1.60
v-0 8.84 2.58 snd1 9.82 1.59
DrP 8.90 2.52 14py 11.41 .00
With un-squared errors, Crow leads by .03; so they're neck-and-neck.
I'm not sure why it's more 'reliable' to square the errors; it's harder to state what you're looking at if RMSE happens not to be in your vocabulary. It may be equally valid to average the square roots of the errors -- the "a miss is as good as a mile" approach. After all, teams may tank, or rest starters, toward season's end and exaggerate the big misses even more.
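For anyone following along, a quick sketch of the three scoring rules in play, on made-up errors of 2, 5, and 14 wins:
Code: Select all
import numpy as np

# Made-up prediction errors, in wins.
errors = np.array([2.0, 5.0, 14.0])

mae = np.mean(np.abs(errors))                 # straight average miss: 7.00
rmse = np.sqrt(np.mean(errors ** 2))          # squaring punishes the blowout: ~8.66
root_avg = np.mean(np.sqrt(np.abs(errors)))   # "a miss is as good as a mile":
                                              # compresses big misses: ~2.46
print(mae, rmse, root_avg)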
Then, too, it may have even more predictive value to compare our predictions to the teams' pythagorean performance. If last year you guessed the Wolves would win 48, that's just how they performed, in MOV-land. But for whatever reasons, they won just 40. If someone guessed 40, was he more right? By RMSE or by straight error?
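For reference, the pythagorean expectation in question -- a sketch; an exponent near 14 is the usual NBA choice (Morey's 13.91 is another):
Code: Select all
def pythagorean_wins(pts_for, pts_against, games=82, exponent=14.0):
    """Expected wins from scoring margin; exponent ~14 is the
    common NBA convention."""
    pf, pa = pts_for ** exponent, pts_against ** exponent
    return games * pf / (pf + pa)

# A team scoring 102 and allowing 100 per game projects to ~47 of 82 wins,
# whatever its actual record in close games turns out to be.
print(pythagorean_wins(102.0, 100.0))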
One could throw SOS into the mix; and then the Wolves might have won 48, or 52, had they been in the East. But none of these led to their real life record of 40-42. That was more about losing the close games.
Which brings us full circle to whether we should rank teams based on their MOV or by some fractional exponent of their game-by-game MOV: You might collapse blowout scores by taking the square root of scoring margin, and average these over the season. That might more reliably predict playoffs, for example.
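A sketch of that last idea -- signed square roots of game margins, averaged over a season (margins here are made up):
Code: Select all
import numpy as np

def compressed_mov(margins, exponent=0.5):
    """Average of sign(margin) * |margin|^exponent over a season's games.
    exponent=1.0 is plain MOV; exponent=0.5 collapses blowouts."""
    m = np.asarray(margins, dtype=float)
    return float(np.mean(np.sign(m) * np.abs(m) ** exponent))

margins = [3, -2, 25, 4, -1, 18]          # made-up game margins
print(compressed_mov(margins, 1.0))       # plain MOV: ~7.83
print(compressed_mov(margins, 0.5))       # blowout-damped: ~1.76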