2022-23 team win projection contest
Re: 2022-23 team win projection contest
Thanks Mike for the leaderboard.
Re: 2022-23 team win projection contest
Looking back at page 1, I missed a few submissions, including those behind paywalls.
I'm not distinguishing between self-submitted projections and ones brought in by others, nor between projections built from one or several metrics and "gut"/insight-based predictions.
At one time this seemed like a good way to "test" various metrics, but we've never agreed on, for example, player minutes estimates, rookie expectations, etc.
In any case, the top of page 3 seems like a good place for a key to the 4-character abbrevs; feel free to correct or amend.
538E = 538.com Elo
538R = 538.com Raptor
emin = eminence*
ncs. = nbacouchside*
EExp = ESPN Experts
EBPI = ESPN BPI
vegas = https://www.espn.com/nba/story/_/id/347 ... -questions
veg2 = draftkings - https://www.vegasinsider.com/nba/odds/win-totals/
dtka = dtkavana*
trzu = tarrazu*
vzro = v-zero*
Crow = Crow*
TmRk = TeamRankings
drko = "DARKO model forecast by Kostya Medvedovsky."
nuFi = numberFire
MPra = Mike Prada
LEBR = LEBRON by bball-index.com
2022 = last season's wins
22Py = Pythagorean 'wins' (a quick sketch of the calculation follows this key)
* = APBRMetrics member, self-submitted
Thanks to DarkStar48 for all the others.
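For anyone unfamiliar with the 22Py line, here is a minimal sketch of the usual Pythagorean-wins calculation. The exponent is an assumption (published values run from roughly 13.91 to 16.5), and the points totals are placeholders rather than any team's actual numbers.
Code: Select all
# Hedged sketch: Pythagorean expected wins from points scored and allowed.
# The exponent of 14 is an assumption; common published values range from
# about 13.91 to 16.5. The PF/PA figures below are made-up placeholders.

def pythagorean_wins(points_for, points_against, games=82, exponent=14.0):
    win_pct = points_for**exponent / (points_for**exponent + points_against**exponent)
    return games * win_pct

# Example with placeholder season totals (not a real team's figures):
print(round(pythagorean_wins(9200, 8950), 1))  # ~48.8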
Re: 2022-23 team win projection contest
Quick update on the occasion of everyone improving since yesterday -- some by a lot.
Almost every underachieving team won, while the overachievers mostly lost; eminence's error improved by .52 overnight, ESPN Experts' by .61.
Code: Select all
. avg err rmse avg err rmse
ncs. 5.82 7.52 emin 7.17 8.86
drko 6.29 7.96 TmRk 7.23 9.06
vzro 6.56 7.45 vegas 7.25 9.24
dtka 6.63 7.73 veg2 7.31 9.19
538E 6.68 8.88 EExp 7.41 9.41
trzu 6.79 8.35 nuFi 7.55 9.31
EBPI 6.79 7.89 22Py 7.83 9.81
Crow 6.91 8.39 MPra 8.24 10.28
538R 6.97 8.22 LEBR 8.29 9.57
2022 7.07 8.95
Current win projections, ranked by difference from avg guesses in this contest:
https://www.basketball-reference.com/fr ... _prob.html
Code: Select all
West proj over East proj over
Uta 47.9 17 Cle 56.0 10
OKC 38.9 15 Ind 35.4 7
Por 46.1 11 Orl 33.0 6
Hou 29.5 7 Chi 43.6 5
Phx 55.5 5 Tor 50.9 4
SAS 31.3 4 Mil 53.6 2
Dal 51.6 4 Was 35.6 0
NOP 47.6 2 Det 25.7 0
Sac 35.7 1 NYK 39.9 0
Mem 44.7 -4 Cha 32.2 -2
LAL 33.4 -7 Atl 41.6 -5
Den 42.3 -8 Brk 41.0 -6
Min 39.3 -8 Bos 49.0 -6
LAC 35.2 -13 Phl 42.0 -10
GSW 32.8 -18 Mia 39.1 -10
UPDATE Nov. 8
Crazy night in the NBA: Atl>Mil, Ind>NO, Phl>Phx, LAC>Cle, Chi>Tor.
While half the field got worse overnight, some made major gains: ncs. better by .29, dtka by .22, vzro by .37.
Code: Select all
. avg err rmse avg err rmse
ncs. 5.63 7.43 2022 7.12 8.93
vzro 6.29 7.49 vegas 7.15 9.46
drko 6.41 8.05 TmRk 7.16 9.28
dtka 6.50 7.78 veg2 7.23 9.41
538E 6.51 8.79 emin 7.26 9.06
EBPI 6.74 7.76 EExp 7.56 9.63
trzu 6.80 8.45 nuFi 7.69 9.43
538R 6.85 8.07 22Py 7.78 9.67
Crow 6.88 8.45 MPra 8.35 10.47
.. LEBR 8.40 9.72
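For reference, a minimal sketch of how the avg err and rmse columns in these tables can be computed; it assumes each projection is scored against some current full-season win estimate per team, and both dicts below are placeholders rather than the actual contest entries.
Code: Select all
# Hedged sketch: mean absolute error and RMSE of one set of preseason win
# projections against a current full-season win estimate for each team.
# Both dicts are placeholder values, not the real contest data.
import math

def avg_err_and_rmse(projected, current_estimate):
    diffs = [projected[t] - current_estimate[t] for t in projected]
    avg_err = sum(abs(d) for d in diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return avg_err, rmse

projected = {"GSW": 50.4, "Uta": 24.0, "Cle": 48.0}          # placeholder entries
current_estimate = {"GSW": 32.8, "Uta": 47.9, "Cle": 56.0}   # placeholder estimates
print(avg_err_and_rmse(projected, current_estimate))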
Re: 2022-23 team win projection contest
Returning to the talk of methodologies: as mentioned previously, I don't use minutes projections. I have done so in the past -- I even built my own model for projecting them -- but what I found in testing, both on a game-to-game basis and over a full season, is that predicting minutes is basically a waste of time. Comparing models that attempt to project minutes and build a roster against models that look at a roster at multiple depths (ordered by recent playing time) without trying to allocate playing time directly, I have found the latter to perform better.
I use a combination of box-score and plus-minus based models, all home-baked and none public. I don't create player models, I create game models. I don't use any APM or RAPM, either as a target variable in regression or as the basis for my plus-minus model; I don't find them to be beneficial. I have extracted an extended box score from play-by-play data, which is definitely valuable, though no part of my models is more valuable than the plus-minus based variant.
I don't do any hand tweaking; the models have complete say. The numbers I put on here were cobbled together after some experimentation with the models I already have. As I said previously, my models are focused far more on game-by-game prediction than on the long view, so I did a little scratching around on the long view to see what I could change to put something maybe decent on here. So far so good, but who knows where it'll all end up.
Re: 2022-23 team win projection contest
v-zero wrote: ↑Thu Nov 10, 2022 12:11 pm Returning to talk of methodologies, as mentioned previously I don't use minutes projections. [...]
Can you explain a little more what you mean re: not building player models, but "game models"? Are you using team-based statistics to create the game models, i.e., using the box stats and plus-minus at the team level to project future performance?
If that's the case, what does the roster-depth approach from your first paragraph (rather than projecting MP) actually do, if there are no player values to use as part of that depth calculation? Or is that separate from your process here for projecting wins, and included as an observation from your own experience?
Re: 2022-23 team win projection contest
nbacouchside wrote: ↑Thu Nov 10, 2022 8:52 pm Can you explain a little more what you mean re: not building player models, but "game models"? [...]
No problem, I probably wasn't clear. I maintain datasets predominantly at the player level, and my methodology for keeping those statistics up to date is broadly similar to DARKO. What I don't then do is feed player statistics into boosted trees (like DARKO does) or any other model to predict individual player stats game to game. That is mostly because I have tried that, and it performs worse (for what I want to achieve) than tuning directly for my desired outcomes, which gets me to the point:
I use large arrays of ranked player-level data to represent a roster, and use that data to predict the outcomes of games. The outcomes of games are the only target variables I ever use. The ranking is basically according to recently allocated playing time, which is as close as I get to using minutes projections.
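A minimal sketch of how a setup like that could look, purely as an interpretation of the description above: the roster depth of 10, the particular stat columns, and the logistic-regression model are all assumptions for illustration, not the actual (private) model.
Code: Select all
# Hedged sketch: represent each roster as a fixed-depth array of player-level
# numbers ordered by recent playing time, and predict only the game outcome.
# Depth, stats, and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

DEPTH = 10  # assumed number of roster slots per team

def roster_features(players):
    """players: list of (recent_minutes, stat1, stat2, ...) tuples."""
    ranked = sorted(players, key=lambda p: p[0], reverse=True)[:DEPTH]
    feats = [stat for p in ranked for stat in p[1:]]               # drop the minutes themselves
    feats += [0.0] * (DEPTH * (len(players[0]) - 1) - len(feats))  # pad short rosters
    return feats

def game_features(home_players, away_players):
    return np.array(roster_features(home_players) + roster_features(away_players))

# Placeholder rosters: (recent minutes, TS%, points per game) per player.
home = [(34.0, 0.62, 28.4), (33.0, 0.58, 19.1), (30.0, 0.55, 12.0)]
away = [(36.0, 0.60, 25.0), (31.0, 0.54, 15.5)]
print(game_features(home, away).shape)  # one row of X; y would be 1 if the home team won

# model = LogisticRegression(max_iter=1000).fit(X, y)  # X: games x features, y: outcomes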
Re: 2022-23 team win projection contest
That makes sense. Thanks for the clarity!
Re: 2022-23 team win projection contest
Couchside and v-zero (and dtkavana and anyone else), can you explain why you predicted the Warriors to substantially drop off this year?
Last year they won 55, Pythagorean was 57, and they seemed to really "gel" thru the postseason.
They didn't lose much of their core (other than Otto Porter), they have Klay for more than 40% of games, and they seemed to have a stable of young up-and-comers. Plus they picked up JaMychal Green and have Wiseman back.
They aren't a particularly old team, about 27.5 years on avg last season and this season -- prime NBA age!
Contest submissions averaged 50.4 wins for the Dubs, in spite of you guys' lowballing them.
Steph has been great; Wiggins and Looney are their normal selves; and everyone else on the team has under-performed.
B-r.com projects about 33 wins at this point.
Re: 2022-23 team win projection contest
Bog-standard regression to the mean takes them from 55 to around 51, and without digging into it I can tell you my plus-minus model is not at all impressed with swapping Otto Porter for JaMychal Green among their seven most-played guys.
Barring an injury to Curry there is no way in hell they win only 33 games this year, however. I still expect about 43.
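A quick sketch of the kind of arithmetic behind that 55-to-51 figure; the 0.70 retention factor below is an assumption chosen to roughly reproduce the number, not the actual parameter in anyone's model.
Code: Select all
# Hedged sketch: plain regression to the mean on a win total. The retention
# factor of 0.70 is an assumption; the real shrinkage depends on the model.
LEAGUE_AVG_WINS = 41

def regress_wins(last_season_wins, retain=0.70):
    return LEAGUE_AVG_WINS + retain * (last_season_wins - LEAGUE_AVG_WINS)

print(round(regress_wins(55), 1))  # ~50.8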
Re: 2022-23 team win projection contest
My projections expected a lot of minutes for the young guys, and my model mostly thinks they aren't very good. They also lost GP2 in addition to Otto, both of whom are pretty good to very good. Add in mean regression and wrong-side-of-the-age-curve effects for the big 3.
Also, maybe the biggest issue of all is that I only included data from the last 3 regular seasons, and in the 2 seasons before last year the Warriors' effort could charitably be described as disinterested.
Re: 2022-23 team win projection contest
Another crazy night of 'upsets' -- though some of these were with major players not playing.
The leader's lead shrank from .63 to .42 overnight.
Code: Select all
. avg err rmse avg err rmse
ncs. 5.74 7.62 emin 7.22 9.06
drko 6.16 8.03 vegas 7.23 9.66
EBPI 6.31 7.86 TmRk 7.24 9.50
vzro 6.38 7.72 veg2 7.35 9.64
dtka 6.47 8.04 2022 7.35 9.23
538E 6.61 8.69 nuFi 7.74 9.56
Crow 6.81 8.55 22Py 7.75 9.90
trzu 6.84 8.59 EExp 7.92 9.93
538R 7.00 8.12 LEBR 7.99 9.75
. MPra 8.20 10.48
Re: 2022-23 team win projection contest
Couchside with the early jump.
I'll have to try to catch 3rd before I think beyond that. 3rd wouldn't take that much change, with it being this early. But 13th wouldn't take that much either.
Re: 2022-23 team win projection contest
On a personal level I am much more interested in MSE than MAE, but I don't really have any expectations.
Re: 2022-23 team win projection contest
I go for, and am more interested in, MAE. Every point of miss is worth the same to me, never more.
MSE, much more than MAE, punishes out-of-the-mainstream guesses that don't pan out and generally rewards convention-hugging.
To each their own style and gauge.
The regular season has become less serious and somewhat less interesting. A playoff contest would probably appeal more to me, but there wasn't any sign of other interest in that this spring.
Re: 2022-23 team win projection contest
For me, being within three or four wins is pretty much as good as nailing the prediction, given that that is within one standard deviation. Nailing predictions is luck in the end, and a better measure of accuracy is probably predicting SRS or some other MOV metric rather than wins.
So, having said that, I value an even distribution of errors far more than getting anything 'spot on', hence penalising very poor predictions makes sense. I'd rather miss three totals by three wins each than nail two and miss one by nine.
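The trade-off in that last example is easy to check numerically: MAE scores the two scenarios identically, while RMSE penalises the single nine-win miss. A quick sketch:
Code: Select all
# Quick check of the example above: miss three totals by three each versus
# nail two and miss one by nine. MAE is 3.0 in both cases; RMSE is not.
import math

def mae(errors):
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

even_misses = [3, 3, 3]   # miss three totals by three wins each
one_big_miss = [0, 0, 9]  # nail two, miss one by nine

print(mae(even_misses), rmse(even_misses))    # 3.0  3.0
print(mae(one_big_miss), rmse(one_big_miss))  # 3.0  ~5.2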