Re: 2017-18 team win projection contest
Posted: Tue Dec 19, 2017 1:52 pm
by Mike G
Now there's a bunch clustered at the top.
Code:
avg abs err RMSE avg abs err RMSE
RyRi 4.88 6.07 Josh 5.26 6.56
sndi 4.89 6.12 vegas 5.28 6.39
gold 4.94 6.34 kmed 5.37 6.59
shad 4.98 6.08 GK5. 5.61 6.93
sbs. 4.99 6.09 emin 5.82 6.72
538_ 5.03 6.48 Mike 5.84 6.88
cali 5.07 6.38 Crow 5.90 7.17
AJBk 5.13 6.58 ATCt 6.08 7.40
lnqi 5.23 6.19 ncs. 6.24 7.64
Nath 5.25 6.47 knar 6.30 7.55
Note RyanRiot also has a .01 lead in the RMSE column.
Different exponents put several different people in the lead (a quick sketch of the computation follows this list):
up to .16 -- caliban
.17 to .64 -- 538
.65 to .86 -- sndesai
.87 to 2.1 -- RyRi
2.2 to 7.2 -- shadow
7.3 and up -- eminence
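Here's a minimal sketch of the computation, with made-up projections rather than the actual entries; MAE (avg abs err) is the p = 1 case and RMSE the p = 2 case of the same formula.
Code:
# Power-mean error: the mean of |error|^p across teams, then the p-th
# root -- p = 1 gives MAE and p = 2 gives RMSE.
def power_error(projected, actual, p):
    total = sum(abs(x - y) ** p for x, y in zip(projected, actual))
    return (total / len(projected)) ** (1.0 / p)

# Hypothetical entries vs. current win paces (not the real contest data).
actual = [52, 29, 45, 39, 60]
entries = {
    "steady": [54, 31, 43, 41, 58],  # off by exactly 2 on every team
    "boomer": [52, 30, 45, 39, 54],  # nearly perfect, plus one 6-win miss
}

# Low exponents forgive one blowup; high exponents are dominated by it,
# which is how the leader can change as p moves.
for p in (0.5, 1, 2, 4, 8):
    leader = min(entries, key=lambda e: power_error(entries[e], actual, p))
    print(p, leader)  # boomer leads through p = 1, steady from p = 2 on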
Re: 2017-18 team win projection contest
Posted: Tue Dec 19, 2017 4:14 pm
by eminence
A vote for 7.3 and up

Re: 2017-18 team win projection contest
Posted: Wed Dec 20, 2017 11:42 am
by Mike G
At the 8th power of errors, the leaders alongside the worst-predicted teams:
Code:
tm emin shad sndi sbs. lnqi
LAC 677 979 677 1040 767
NYK 402 351 524 376 677
Ind 266 199 248 285 376
Den 92 39 199 51 30
Det 51 25 61 47 15
LAL 36 25 51 25 9
Mil 17 15 6 10 6
GSW 17 20 36 19 6
...
tot 1600 1665 1809 1857 1901
These are actually millions of 8th-power errors. So emin's total of 1600 is really 1.6 billion, which looks great next to the last-place entry at this level of exaggeration: AJ at 19 billion.
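To unpack the units, a quick sketch (the 677 is emin's LAC cell from the table above; everything else follows from it):
Code:
# Each cell is |projection error|^8 in millions, so the 8th root turns a
# cell back into a plain win error.
cell = 677                        # emin's LAC cell
print((cell * 1e6) ** (1 / 8))    # ~12.7, i.e. roughly a 12.7-win miss

# The totals are in millions too: emin's 1600 really is 1.6 billion.
print(1600 * 1e6)                 # 1.6e9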
Since yesterday, the leaders have reshuffled.
Code:
exp = 1 exp = 2
sndi 4.76 shad 5.98
RyRi 4.80 RyRi 6.00
shad 4.87 sbs. 6.02
gold 4.90 sndi 6.05
sbs. 4.94 lnqi 6.14
RyRi's leading range has shrunk to [1.28, 1.82] -- he'd still lead an even blend of MAE and RMSE.
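A quick check of that blend claim, using the four entrants who appear in both columns above:
Code:
# Even 50/50 blend of the exp = 1 (MAE) and exp = 2 (RMSE) numbers above.
scores = {  # name: (MAE, RMSE)
    "sndi": (4.76, 6.05),
    "RyRi": (4.80, 6.00),
    "shad": (4.87, 5.98),
    "sbs.": (4.94, 6.02),
}
blend = {n: 0.5 * m + 0.5 * r for n, (m, r) in scores.items()}
print(min(blend, key=blend.get))  # RyRi at 5.400, just ahead of sndi's 5.405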
Dec 24 update
Code:
avg abs err RMSE avg abs err RMSE
sndi 4.74 6.01 Nath 5.16 6.41
RyRi 4.76 5.90 lnqi 5.20 6.07
shad 4.81 5.96 kmed 5.40 6.62
sbs. 4.83 5.95 GK5. 5.41 6.68
gold 4.87 6.27 emin 5.49 6.44
538_ 4.91 6.41 Mike 5.69 6.85
vegas 4.98 6.28 Crow 5.71 7.02
cali 5.05 6.36 ncs. 5.89 7.46
Josh 5.07 6.25 ATCt 5.96 7.22
AJBk 5.10 6.50 knar 6.11 7.44
The top half of the group is separated by .36 -- the bottom half by .95.
Dec 27 update. Those ^^ separations have grown to .50 and 1.00
Code:
avg abs err RMSE avg abs err RMSE
RyRi 4.34 5.53 lnqi 4.89 5.66
sbs. 4.47 5.59 Nath 4.93 6.08
shad 4.55 5.62 GK5. 4.98 6.30
sndi 4.67 5.78 kmed 5.11 6.41
gold 4.68 5.88 emin 5.34 6.27
538_ 4.69 6.09 ncs. 5.47 7.10
vegas 4.70 5.91 Mike 5.47 6.58
Josh 4.77 5.91 Crow 5.52 6.74
AJBk 4.83 6.15 ATCt 5.67 6.91
cali 4.84 6.00 knar 5.88 7.11
These are now the smallest errors of the year.
Re: 2017-18 team win projection contest
Posted: Fri Dec 29, 2017 10:06 am
by Mike G
In two weeks, all our avg errors have improved, by an average of .50:
Code:
impr Dec 15th 29th
.99 ncs. 6.13 5.13
.81 Josh 5.26 4.45
.79 GK5. 5.56 4.77
.76 vegas 5.18 4.42
.76 RyRi 4.90 4.14
.64 shad 4.91 4.27
.62 sbs. 4.90 4.28
.51 emin 5.77 5.26
.49 AJBk 5.09 4.60
.43 538_ 5.06 4.63
.40 Mike 5.72 5.32
.39 ATCt 5.85 5.46
.39 gold 4.99 4.60
.37 kmed 5.32 4.96
.34 knar 6.04 5.70
.33 Nath 5.06 4.73
.31 Crow 5.72 5.41
.29 lnqi 5.04 4.75
.25 cali 4.90 4.65
.07 sndi 4.75 4.67
RyRi at 4.14 is the lowest so far.
Re: 2017-18 team win projection contest
Posted: Mon Jan 01, 2018 12:49 am
by jgoldstein34
Here's how my model ranks the contest entries, compared to my own projections, going into tonight's game:
Just the average of all our projections would be in 2nd so far.
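For reference, that average entry is just the per-team mean of everyone's projections -- a minimal sketch with hypothetical numbers:
Code:
# The "crowd" entry: average every entrant's projection, team by team.
entries = {
    "A": {"BOS": 55, "CLE": 49, "GSW": 64},
    "B": {"BOS": 51, "CLE": 53, "GSW": 67},
    "C": {"BOS": 58, "CLE": 47, "GSW": 61},
}
crowd = {t: sum(e[t] for e in entries.values()) / len(entries)
         for t in entries["A"]}
print(crowd)  # BOS 54.7, CLE 49.7, GSW 64.0 (rounded)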
Re: 2017-18 team win projection contest
Posted: Tue Jan 02, 2018 1:57 pm
by Mike G
We've broken 4.0 against the b-r.com projections as well:
Code:
avg abs err RMSE avg abs err RMSE
RyRi 3.97 5.06 538_ 4.62 5.84
sbs. 4.05 5.13 * avg 4.79 5.85
AJBk 4.28 5.62 kmed 4.91 5.97
Josh 4.40 5.51 Nath 4.98 5.93
shad 4.44 5.17 ncs. 5.21 6.74
vegas 4.46 5.53 emin 5.33 6.07
gold 4.49 5.42 Mike 5.33 6.35
lnqi 4.52 5.21 17PyR 5.44 6.55
cali 4.54 5.57 ATCt 5.53 6.52
GK5. 4.55 5.85 Crow 5.56 6.50
sndi 4.62 5.53 knar 5.67 6.75
Funny, our group average ranks much higher in jgold's list. He also has the best teams winning 2-4 more games than b-r.com gets. In fact, the worst teams may tank more later in the season, and the contenders may get more serious.
Re: 2017-18 team win projection contest
Posted: Tue Jan 02, 2018 3:27 pm
by J.E.
Mike G wrote:In fact, the worst teams may tank more later in the season, and the contenders may get more serious.
Agree with the former, disagree with the latter. Teams that have their seed locked up will probably rest players near the end of the season.
Re: 2017-18 team win projection contest
Posted: Tue Jan 02, 2018 6:03 pm
by Mike G
Well, they can't all get worse. Teams will be contending for various playoff seeds, with perhaps a few exceptions.
It may be just that b-r.com's forecasting is still regressing toward the mean.
It's pretty late in the season for the 2016-17 regressed Pythagorean entry to still be in the mix.
Re: 2017-18 team win projection contest
Posted: Wed Jan 03, 2018 5:55 am
by jgoldstein34
The spread on my projections is higher because B-R uses a lot of regression in their future game projections.
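A minimal sketch of what that regression does to the spread; the 0.30 weight is made up, and B-R's actual method (which is schedule- and rating-based) is more involved than this:
Code:
# Project final wins by playing the remaining schedule at a winning
# percentage pulled partway toward .500.
def projected_wins(wins, losses, regress=0.30, season=82):
    played = wins + losses
    future_pct = (1 - regress) * (wins / played) + regress * 0.500
    return wins + (season - played) * future_pct

print(round(projected_wins(30, 8), 1))   # 60.9 -- a hot team gets pulled down
print(round(projected_wins(10, 28), 1))  # 24.7 -- a cold team gets pulled up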
Re: 2017-18 team win projection contest
Posted: Mon Jan 08, 2018 9:25 pm
by Voyaging
Has anyone done any analysis of NBA's PIE rating (available on the official NBA stats site)?
A popular handicapping site, "sportsmodelanalytics.com", uses PIE as the core of its handicapping, and it has a pretty good track record (though it handicaps individual games rather than making preseason win-total predictions). According to the site, "a team's PIE rating and a team's winning percentage correlate at an R square of .908."
Thoughts? I have no background in statistics -- how does that compare to other predictive models, like the ones you guys are using?
Re: 2017-18 team win projection contest
Posted: Mon Jan 08, 2018 9:31 pm
by jgoldstein34
Update to the contest. Appears we all got a little bit worse in average error since the last update.

Re: 2017-18 team win projection contest
Posted: Mon Jan 08, 2018 9:48 pm
by Mike G
Oddly enough, jgold's projected errors are now larger than those from b-r.com:
Code:
avg abs err RMSE avg abs err RMSE
RyRi 3.87 5.13 538_ 4.59 5.85
sbs. 4.05 5.10 GK5. 4.66 6.03
shad 4.17 5.13 Nath 4.84 5.80
vegas 4.33 5.40 kmed 4.99 5.96
AJBk 4.39 5.74 Mike 5.14 6.20
gold 4.41 5.41 emin 5.15 5.90
lnqi 4.45 5.19 ncs. 5.26 6.71
Josh 4.49 5.73 ATCt 5.39 6.50
sndi 4.55 5.51 Crow 5.40 6.37
cali 4.59 5.54 knar 5.71 6.70
We did all get worse in the last couple of days, but RyRi weathered it pretty well. The best I noticed was his 3.79 on Jan. 5.
Re: 2017-18 team win projection contest
Posted: Tue Jan 09, 2018 5:19 am
by Crow
I'll have to refresh my knowledge of PIE but my memory is that it is weak sauce, worse than the other better known weak box score metrics.
Re: 2017-18 team win projection contest
Posted: Tue Jan 09, 2018 5:27 am
by Crow
Yep, weak. PIE may be an improvement over "NBA efficiency" but that was one of the simplest, weakest metrics out there for evaluating players. Correlation with team results is another very different thing. So it depends what you are trying to do. But with its simplistic weights and major omissions I doubt that it is very good or even competitive at that either.
Re: 2017-18 team win projection contest
Posted: Tue Jan 09, 2018 6:26 am
by Voyaging
Crow wrote:I'll have to refresh my knowledge of PIE but my memory is that it is weak sauce, worse than the other better known weak box score metrics.
Crow wrote:Yep, weak. PIE may be an improvement over "NBA efficiency" but that was one of the simplest, weakest metrics out there for evaluating players. Correlation with team results is another very different thing. So it depends what you are trying to do. But with its simplistic weights and major omissions I doubt that it is very good or even competitive at that either.
That's what I would've thought, too, but I was surprised by how good the site's handicapping track record was. From the site:
Our college basketball model, the CBB Lockatron, formulated projections for just under 2000 college basketball games during the 2016-2017 season. It was its first season and with almost 2000 projections in the books, there's a lot of encouraging data and profit to look back on.
When the model's projection differed from the spread by at least 5 points, the model went 223-180 (55%) against the spread.
The model was most profitable when identifying value in the underdog. When the model liked the underdog to cover and the projection differed from the spread by at least 4.9 points, it went 149-103 (59%). The model also tracked public betting percentages. If you "faded the public" and only took the aforementioned plays (underdogs at 4.9+) that 40% or less of the public was on, the model's record was 62-38 (62%). At the very top segment of projections (the ones differing from the spread the most), the model was spot-on with underdogs. At a difference of 6.7 or greater, it went 68-45 (60%) with underdogs. Again, fading the public (40% or less) improved this record to 33-16 (67%).
When the model found significantly more value in the favorite's spread (5.2 difference or greater), it had moderate success, as long as it wasn't a play the public was absolutely hammering (85% action or less on the favorite). Its record was 73-62 (54%).
Not many CBB models can claim ATS (against the spread) win rates of 59%+ at their highest-value points. We're excited for the 2017-2018 CBB season!
...
The NBA Lockatron formulated projections for just under 500 games during the 2016-2017 season. When projections differed enough from the spread to meet optimal criteria for underdogs or favorites, the model spit out plays at a 65% win rate (50-27) and went on a 26-11 run against the spread from 03/10/17 through 04/19/17, before finally slowing down in the playoffs.
The model was revamped during the off-season and is back for the 2017-2018 season with a vengeance. In the first couple weeks of NBA action, the model has already started 9-1 on picks at a 6.6+ value.
Perhaps similar or superior results could be had by pairing better metrics with a similar betting method.
Sorry if this isn't appropriate content for this thread; I just figured some people might be interested, as aspects of the methodology could perhaps be applied to win-total predictions, and I was surprised by their use of PIE, which I would have thought was a poor choice.
I am particularly interested in this quote:
A high PIE % is highly correlated to winning. In fact, a team's PIE rating and a team's winning percentage correlate at an R square of .908.
I don't know enough about statistics to evaluate this claim. Are there any similar R-square-vs-win% evaluations of other all-in-one player metrics?
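For what it's worth, the R-square figure is straightforward to reproduce for any team-level metric: regress team win% on the metric and square the correlation. A minimal sketch, with made-up numbers standing in for the real team PIE and win% columns (which could be pulled from stats.nba.com):
Code:
from scipy.stats import linregress

# Hypothetical team PIE (%) and winning percentages.
team_pie    = [53.1, 51.8, 50.4, 49.2, 47.7, 45.9]
team_winpct = [0.78, 0.66, 0.58, 0.49, 0.38, 0.27]

fit = linregress(team_pie, team_winpct)
print(round(fit.rvalue ** 2, 3))  # R^2 of the metric against win%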