Over the past 30 days, I ran a side project testing three custom betting models I’ve been developing in my spare time. Each one was built to target a different angle of NBA betting: totals, player props, and game spread overlays based on officiating data. Of the three, one model stood out and absolutely dominated NBA totals.
Here’s what worked:
The best-performing model combined pace-adjusted team scoring averages, rest differentials, and fatigue factors (especially from back-to-back travel schedules). It also incorporated a volatility component tracking overnight line movement differentials. The goal was to isolate when public money pushed totals too high or low based on outdated perception.
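For concreteness, here’s a rough sketch of what a pace-adjusted total projection with a fatigue adjustment can look like. Every field name and weight below is illustrative, not the actual model:

```python
from dataclasses import dataclass

@dataclass
class TeamForm:
    pace: float          # possessions per 48 minutes
    pts_per_100: float   # offensive rating (points per 100 possessions)
    rest_days: int       # days since last game
    b2b_travel: bool     # second night of a back-to-back with travel

def projected_total(home: TeamForm, away: TeamForm,
                    fatigue_penalty: float = 2.0) -> float:
    """Pace-adjusted total: expected possessions times combined scoring
    efficiency, minus a crude fatigue deduction. Weights are illustrative."""
    exp_pace = (home.pace + away.pace) / 2
    base = exp_pace * (home.pts_per_100 + away.pts_per_100) / 100
    # Knock a couple of points off for each side on a travel back-to-back
    fatigue = fatigue_penalty * (home.b2b_travel + away.b2b_travel)
    return base - fatigue
```

You’d then compare this projection against the market total, and only fire when the gap (plus whatever line-movement signal you track) clears a preset threshold.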
Out of 52 games, the model correctly predicted the total outcome 67.3% of the time: 35 wins, 17 losses. The ROI came in at +23.4%, betting 1 unit per game. These weren’t cherry-picked games either; I set the criteria beforehand and stuck with them.
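For anyone checking the math at home, here are small helpers for the record and flat-stake arithmetic. The uniform -110 price is an assumption on my part (the post above doesn’t state average odds), so the ROI this produces won’t necessarily match the figure above, where actual prices varied per game:

```python
def win_rate(wins: int, losses: int) -> float:
    """Fraction of graded bets won."""
    return wins / (wins + losses)

def breakeven_rate(american_odds: int = -110) -> float:
    """Implied win rate you must exceed to profit at the given American odds."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)
    # breakeven_rate(-110) -> ~0.5238

def flat_roi(wins: int, losses: int, american_odds: int = -110) -> float:
    """Return on total amount staked, betting 1 unit per game at uniform odds."""
    payout = 100 / -american_odds if american_odds < 0 else american_odds / 100
    profit = wins * payout - losses
    return profit / (wins + losses)
```

The useful comparison is win rate versus the breakeven rate at your actual prices: any sustained gap between the two is your theoretical edge.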
I also tracked closing line value (CLV), and the model beat the close by an average of 1.7 points, which is honestly even more encouraging than the raw win rate. That suggests it’s not just luck; it’s identifying real inefficiencies.
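CLV on totals can be tracked from just three fields per bet: the number you got, the closing number, and the side you took. A minimal sketch (the conventions here are my own, not necessarily how the author logged it):

```python
def clv_points(bet_total: float, closing_total: float, side: str) -> float:
    """Points of closing line value for one totals bet.
    Positive means the market moved toward your position.
    side: 'over' or 'under'."""
    if side == "over":
        return closing_total - bet_total   # close rose above your number
    return bet_total - closing_total       # close fell below your number

def average_clv(bets: list[tuple[float, float, str]]) -> float:
    """Mean CLV in points over a list of (bet_total, closing_total, side)."""
    return sum(clv_points(*b) for b in bets) / len(bets)
```

For example, betting over 224.5 on a game that closes 227 is +2.5 points of CLV; betting under 230 on a game that closes 228.5 is +1.5.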
Now, for context:
- Model 2 (Player Prop Overlays): Used usage rates vs. matchup trends. Decent accuracy but erratic ROI (+3.1%).
- Model 3 (Ref Tendency Bias): Built off officiating history and foul rate analysis. Fun, but too noisy. Basically breakeven.
Always curious if anyone else is modeling totals or working on similar ideas. Do you factor in altitude? Game tempo deltas? Bench usage on short rest? Let’s share notes. We all know no edge lasts forever—unless we keep sharpening it.