Prediction Methodology
A detailed look at the quantitative framework behind every Olympus Bets prediction: Monte Carlo simulation, Kelly Criterion bankroll management, Bayesian probability calibration, and adaptive self-learning systems.
Monte Carlo Simulation: The Foundation
At the core of every Olympus Bets prediction is a Monte Carlo simulation engine. Rather than producing a single point estimate for a game's outcome, Monte Carlo methods run thousands of independent game simulations, each with randomized inputs drawn from calibrated probability distributions. The result is a full distribution of possible outcomes, not a single number.
For every game on the schedule, the system runs a minimum of 10,000 iterations. Each iteration simulates the game from start to finish using a league-specific engine that models the actual mechanics of how that sport is played. The aggregated results across all iterations produce probability distributions for moneyline outcomes, point spreads, and totals.
This approach captures something that deterministic models miss: variance. Sports outcomes are inherently stochastic. A team that is a 60% favorite will still lose about 4 of every 10 games. Monte Carlo simulation explicitly models this uncertainty rather than pretending it does not exist. The width of the outcome distribution tells you how confident the model actually is, not just which side it favors.
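The production engines simulate each sport's actual mechanics, but the core Monte Carlo loop is the same everywhere. A minimal sketch (the ratings, home edge, and Gaussian margin model here are illustrative stand-ins, not the real engine):

```python
import random

def simulate_game(home_rating, away_rating, home_edge=2.5, sigma=12.0):
    """One simulated game: draw a final margin from a noisy rating gap.
    A real engine would play out possessions; this is a toy stand-in."""
    expected_margin = home_rating - away_rating + home_edge
    return random.gauss(expected_margin, sigma)  # home margin of victory

def run_monte_carlo(home_rating, away_rating, iterations=10_000):
    """Aggregate many simulated games into an outcome distribution."""
    margins = [simulate_game(home_rating, away_rating) for _ in range(iterations)]
    home_win_prob = sum(m > 0 for m in margins) / iterations
    return home_win_prob, margins

random.seed(42)  # reproducible for illustration
win_prob, margins = run_monte_carlo(home_rating=5.0, away_rating=2.0)
print(f"Home win probability: {win_prob:.1%}")
```

The `margins` list is the full outcome distribution: its spread, not just its mean, is what the quoted text means by "how confident the model actually is."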
Why 10,000+ Iterations Matter
The number of simulation iterations directly affects the precision of probability estimates. With 1,000 iterations, a true 55% probability could appear anywhere from 52% to 58% due to sampling noise. At 10,000 iterations, the standard error drops to roughly 0.5 percentage points. This precision matters because the difference between a 54% and a 56% probability can determine whether a pick has a genuine edge against the market or not. Low-iteration models produce noisy probabilities that lead to false edges and overbetting.
League-Specific Simulation Engines
Each sport has fundamentally different game mechanics, and the simulation engines reflect this. A basketball engine cannot be repurposed for hockey any more than a chess engine can play poker. Olympus Bets maintains dedicated simulation engines for each league:
- NBA — Possession-by-possession simulation modeling shot selection, contest probability, transition opportunities, free throw sequences, and rotation patterns. Uses real player efficiency data and accounts for pace, home court advantage, and injury-adjusted lineups.
- NHL — Zone-based simulation tracking neutral zone entries, shot generation from each zone (slot, point, perimeter), expected goals (xG) models calibrated against MoneyPuck data, goaltender save probability by shot type, and special teams deployment. Version 19.1 uses Bayesian shrinkage for small-sample goaltender adjustments.
- College Basketball (CBB) — Player-level five-on-five matchup simulation incorporating EvanMiya BPR ratings, tempo adjustment, defensive matchup variance, offensive rebounding, and conference-strength calibration. Regime-gated ATS buckets adaptively adjust spread thresholds.
- NFL — Play-by-play simulation with drive sequencing, turnover modeling, red zone efficiency, and CDF-based edge calculation. Accounts for weekly roster changes, injury impacts, and weather effects.
- Soccer — Play-by-play simulation using FBref expected goals (xG) data, isotonic probability calibration, score-state modeling (teams behave differently when leading vs trailing), and both-teams-to-score (BTTS) modeling.
- MLB — Pitcher-vs-batter matchup simulation with platoon splits, catcher framing adjustments, bullpen fatigue tracking, sprint speed impact on baserunning, and Bayesian ERA estimation for small-sample pitchers.
- Esports (LoL) — Glicko-2 rating system with five-layer simulation, team composition analysis, patch-aware meta adjustments, and market blend calibration for thin markets.
Kelly Criterion: Optimal Bet Sizing
Identifying an edge is only half the problem. The other half is determining how much to bet. The Kelly Criterion, developed by John Kelly at Bell Labs in 1956, provides a mathematically optimal answer: bet a fraction of your bankroll proportional to the size of your edge relative to the odds offered.
The formula is straightforward:
Kelly % = (bp - q) / b
Where b is the decimal odds minus 1, p is the model's estimated probability of winning, and q is the probability of losing (1 - p). The result is the fraction of your bankroll that maximizes long-term geometric growth.
In practice, full Kelly sizing produces significant variance. Olympus Bets uses a fractional Kelly approach, mapping the raw Kelly percentage to a unit-based system that controls for drawdown risk while preserving the mathematical relationship between edge size and bet size.
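The formula translates directly to code. The sketch below uses a simple 0.5 multiplier to illustrate fractional Kelly; the platform's actual sizing goes through the unit mapping described next, so the multiplier here is an assumption:

```python
def kelly_fraction(prob_win, decimal_odds):
    """Full Kelly: (b*p - q) / b, where b = decimal odds - 1."""
    b = decimal_odds - 1.0
    q = 1.0 - prob_win
    return max(0.0, (b * prob_win - q) / b)  # never bet a negative edge

def fractional_kelly(prob_win, decimal_odds, fraction=0.5):
    """Fractional Kelly damps variance; the 0.5 here is illustrative."""
    return kelly_fraction(prob_win, decimal_odds) * fraction

# A 58% win probability at decimal odds of 1.91 (about -110 American):
full = kelly_fraction(0.58, 1.91)
half = fractional_kelly(0.58, 1.91)
print(f"Full Kelly: {full:.2%}, half Kelly: {half:.2%}")
```

With these inputs, full Kelly is about 11.8% of bankroll, which is exactly why fractional sizing exists: full Kelly stakes are aggressive even for modest edges.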
Kelly Percentage to Units Mapping
| Kelly % | Units | Tier |
|---|---|---|
| 0 - 1% | 0.5u | Speculative |
| 1 - 3% | 1.0u | Standard |
| 3 - 6% | 1.5u | Confident |
| 6 - 10% | 2.0u | Strong |
| 10 - 15% | 2.5u | Very Strong |
| 15%+ | 3.0u | Maximum |
League-Specific Unit Caps
Different sports have different variance profiles. A 60% win probability estimated from an NFL season (17 games per team) carries more uncertainty than the same estimate from an NBA season (82 games per team). The system enforces league-specific maximum unit sizes to account for this:
| League | Max Units | Rationale |
|---|---|---|
| NBA / NFL | 3.0u | Deep markets, high liquidity |
| NHL | 2.5u | Goaltending variance, parity |
| CBB | 2.5u | Large spread variance in college |
| Soccer | 2.0u | Three-way market (draw risk) |
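The two tables compose naturally: look up the unit tier for the Kelly percentage, then clamp to the league cap. A minimal sketch (tier boundaries copied from the tables above; function and constant names are hypothetical):

```python
# (Kelly % upper bound, units, tier) -- from the mapping table above.
UNIT_TIERS = [
    (0.01, 0.5, "Speculative"),
    (0.03, 1.0, "Standard"),
    (0.06, 1.5, "Confident"),
    (0.10, 2.0, "Strong"),
    (0.15, 2.5, "Very Strong"),
    (float("inf"), 3.0, "Maximum"),
]

# From the league-cap table above.
LEAGUE_MAX_UNITS = {"NBA": 3.0, "NFL": 3.0, "NHL": 2.5, "CBB": 2.5, "Soccer": 2.0}

def units_for(kelly_pct, league):
    """Map a raw Kelly fraction to units, capped by the league's variance profile."""
    for upper, units, _tier in UNIT_TIERS:
        if kelly_pct < upper:
            return min(units, LEAGUE_MAX_UNITS[league])

print(units_for(0.12, "NBA"))     # Very Strong tier -> 2.5u
print(units_for(0.12, "Soccer"))  # same tier, capped at 2.0u
```

Note how the cap, not the tier, binds for the soccer pick: the same 12% Kelly signal produces different stakes in different leagues.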
Bayesian Kelly Shrinkage
Before Kelly sizing is applied, model probabilities undergo a Bayesian shrinkage step. The formula is: shrunk_prob = model_prob * 0.85 + 0.50 * 0.15. This pulls extreme probabilities toward 50%, reducing the risk of oversizing bets on overconfident model outputs. A model probability of 70% becomes 67% after shrinkage (0.70 × 0.85 + 0.50 × 0.15), which results in smaller, more conservative bet sizing. This single adjustment has been one of the most impactful improvements to long-term profitability.
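The shrinkage step is a one-line transform of the stated formula:

```python
def shrink_probability(model_prob, weight=0.85, prior=0.50):
    """Pull the model's probability toward a 50% prior (Bayesian shrinkage)."""
    return model_prob * weight + prior * (1 - weight)

print(f"{shrink_probability(0.70):.2f}")  # 0.67
print(f"{shrink_probability(0.50):.2f}")  # 0.50 -- the prior is a fixed point
```

A useful property: probabilities already near 50% pass through almost unchanged, so the adjustment mostly bites on the extreme outputs where overconfidence is costliest.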
Bayesian Probability Calibration
Raw simulation probabilities are systematically overconfident. When a Monte Carlo engine says a team has a 70% chance of winning, historical analysis shows they win closer to 64% of the time. This is a universal problem in sports modeling: models see the data they are given but cannot fully account for the unknown unknowns (unreported injuries, locker room issues, travel fatigue, referee assignments).
Bayesian calibration corrects this by using the platform's own historical prediction data as a prior. The system maintains a database of every prediction it has ever made alongside the actual outcome. This data is grouped into probability bins (50-55%, 55-60%, 60-65%, etc.) and the actual win rate within each bin is computed. The calibration function then maps raw model probabilities to historically observed frequencies.
This creates a self-correcting feedback loop. If the model has been overconfident in the 65-70% range for NHL games, future NHL probabilities in that range are adjusted downward. If the model has been well-calibrated for NBA spreads, those probabilities pass through with minimal adjustment. The calibration is league-specific and bet-type-specific, so corrections are applied with surgical precision rather than as a blanket adjustment.
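Binned calibration of this kind can be sketched in a few lines; the bin width, helper names, and toy history below are illustrative assumptions, not the production pipeline:

```python
from collections import defaultdict

def prob_bin(prob, width_pct=5):
    """Lower edge of the probability bin in percent, e.g. 0.71 -> 70."""
    return int(round(prob * 100)) // width_pct * width_pct

def build_calibration_table(history):
    """history: iterable of (raw_model_prob, won). Returns bin -> empirical win rate."""
    tallies = defaultdict(lambda: [0, 0])  # bin -> [wins, total]
    for prob, won in history:
        b = prob_bin(prob)
        tallies[b][0] += int(won)
        tallies[b][1] += 1
    return {b: wins / total for b, (wins, total) in tallies.items()}

def calibrate(raw_prob, table):
    """Map a raw probability to its bin's historical frequency, if known."""
    return table.get(prob_bin(raw_prob), raw_prob)  # pass through unseen bins

# Toy history: picks the model rated ~70% actually won 13 of 20 times.
history = [(0.71, True)] * 13 + [(0.72, False)] * 7
table = build_calibration_table(history)
print(calibrate(0.70, table))  # 0.65
```

In production this table would be keyed by league and bet type as well, which is what makes the corrections "surgical" rather than blanket adjustments.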
Calibration Update Cycle
Calibration data is updated daily after results are resolved. The system runs an automated resolution pipeline each morning that matches predictions to outcomes, updates the calibration tables, and recalculates the adjustment curves. This means the model is always incorporating its most recent performance data. A bad week immediately starts correcting the probabilities that produced those losses.
Real-Time Odds Integration
A model probability is only useful when compared to the market's implied probability. If the model says a team has a 58% chance of winning but the sportsbook is offering odds that imply a 60% probability, there is no edge despite the model being bullish. Edge exists only when the model's probability exceeds the market's implied probability by a meaningful margin.
Olympus Bets ingests live odds from major sportsbooks multiple times per day. Each pick's edge is calculated as the difference between the model's calibrated probability and the market's implied probability. Only picks that clear minimum edge thresholds are surfaced as recommendations. These thresholds are not static; they are set by the regime calibration system based on current market conditions and recent model performance.
Edge Calculation
For a moneyline bet, the edge is: Edge = Model Probability - Market Implied Probability. For a spread bet, the edge is calculated from the simulated margin distribution: what percentage of the 10,000+ simulations resulted in the team covering the spread, compared to the implied probability from the market odds (typically -110, implying ~52.4%).
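Converting American odds to implied probability and differencing against the model's calibrated probability can be sketched as:

```python
def implied_probability(american_odds):
    """Market implied win probability from American odds (vig included)."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

def edge(model_prob, american_odds):
    """Moneyline edge: calibrated model probability minus market implied."""
    return model_prob - implied_probability(american_odds)

print(f"{implied_probability(-110):.4f}")  # 0.5238
print(f"{edge(0.58, -110):+.4f}")          # +0.0562
```

Note that these implied probabilities include the book's vig, so summing both sides of a -110/-110 market gives about 104.8%, not 100%; that overround is part of what the edge must clear.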
The system also tracks closing line value (CLV): how the odds move between when the pick is published and when the game starts. Consistently beating the closing line is one of the strongest indicators that a model is identifying real edges rather than getting lucky.
Self-Learning Systems
Static models decay over time as markets adapt and team compositions change. Olympus Bets operates several self-learning subsystems that continuously recalibrate based on real performance data.
Regime Calibration
The regime calibrator analyzes rolling windows of model performance to detect shifts in accuracy. If spread picks in the NBA have been underperforming for the past two weeks, the system automatically tightens the edge threshold required for NBA spread picks to qualify. Conversely, if college basketball totals have been hitting at an elevated rate, the system may relax thresholds to capture more of that edge. This prevents the system from rigidly applying stale parameters during streaks in either direction.
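One simple way to implement such a regime gate is a rolling-window hit-rate check against the break-even rate. The window size, streak thresholds, and adjustment step below are illustrative assumptions, not the production parameters:

```python
def adjusted_edge_threshold(recent_results, base_threshold=0.04,
                            window=50, step=0.01):
    """recent_results: 1 for a win, 0 for a loss, newest last.
    Tighten the required edge when the rolling window underperforms the
    -110 break-even rate (~52.4%); relax it when the window runs hot."""
    window_results = recent_results[-window:]
    if not window_results:
        return base_threshold
    hit_rate = sum(window_results) / len(window_results)
    breakeven = 110 / 210  # ~0.524 at standard -110 juice
    if hit_rate < breakeven - 0.03:
        return base_threshold + step   # cold streak: demand a bigger edge
    if hit_rate > breakeven + 0.03:
        return base_threshold - step   # hot streak: loosen slightly
    return base_threshold
```

Keeping the adjustment small and symmetric is the point: the gate damps streaks in both directions instead of overreacting to either.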
Profitability Zone Analysis
The zone analyzer examines performance across 15 dimensions, including league, bet type, home/away, favorite/underdog, edge bucket, probability bucket, time of day, and day of week. Each combination of dimensions forms a "zone," and each zone is tagged as GREEN (profitable), YELLOW (neutral), or RED (unprofitable) based on statistical significance testing. RED zones are automatically excluded from premium recommendations. This surgical approach replaces blanket league-level blocking with granular sub-niche gating.
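The zone-tagging idea can be sketched with a one-sided z-test of a zone's hit rate against the -110 break-even rate; the production system's exact test and significance levels may differ:

```python
from math import sqrt

def tag_zone(wins, total, breakeven=110 / 210, z_crit=1.645):
    """Tag a zone GREEN/RED only when its hit rate differs significantly
    from break-even (one-sided z-test, 5% level); otherwise YELLOW."""
    if total == 0:
        return "YELLOW"  # no data, no verdict
    p_hat = wins / total
    se = sqrt(breakeven * (1 - breakeven) / total)
    z = (p_hat - breakeven) / se
    if z > z_crit:
        return "GREEN"
    if z < -z_crit:
        return "RED"
    return "YELLOW"

print(tag_zone(130, 200))  # 65% over 200 picks
print(tag_zone(90, 200))   # 45% over 200 picks
```

The significance requirement is what keeps small-sample zones YELLOW instead of prematurely GREEN or RED: the same 65% hit rate over 20 picks would not clear the test.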
Cyclical Intelligence
The cyclical intelligence engine tracks performance patterns across day-of-week, time-of-day, and other temporal dimensions. If Monday night NBA totals have been consistently overshooting, that pattern is incorporated into the confidence scoring. These cyclical adjustments are small (typically 2-5% modifications to confidence) but compound over hundreds of picks.
NHL Hybrid FOLLOW/FADE Strategy
The NHL module uses a hybrid signal system that determines whether to follow or fade the model's raw recommendations based on contextual factors. Away underdogs with very high model edge are faded (the recommendation is reversed), while home favorites with moderate edge are followed. This counterintuitive system was developed from historical analysis showing that the highest-edge NHL picks systematically underperformed, suggesting the model was overreacting to variance.
Data Sources
The quality of a model's output is bounded by the quality of its inputs. Olympus Bets uses exclusively verified, real-time data sources. No data is estimated, synthesized, or approximated. If a data point is unavailable, the system either uses a conservative fallback or skips the pick entirely.
- Odds — The Odds API (real-time sportsbook odds from major US books)
- Schedules and Scores — ESPN API (authoritative game times, scores, records)
- Player Statistics — League-specific APIs and data providers (NBA Stats, NHL stats, EvanMiya for CBB)
- Injuries — Multi-source aggregation (ESPN, RotoWire, team reports)
- Advanced Metrics — MoneyPuck (NHL xG/Corsi), FBref (Soccer xG), EvanMiya BPR (CBB), Statcast (MLB)
Data freshness is critical. The platform runs automated health checks every hour to verify that data files are current. If any data source becomes stale beyond its expected update interval, the system either triggers a refresh or flags the affected picks for manual review.
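A health check of this kind reduces to comparing each source's last-update timestamp against its expected interval. The intervals and source names below are illustrative, not the platform's actual configuration:

```python
import time

# Expected maximum age, in seconds, per data source (illustrative values).
MAX_AGE = {"odds": 3 * 3600, "injuries": 6 * 3600, "schedules": 24 * 3600}

def stale_sources(last_updated, now=None):
    """last_updated: source -> unix timestamp of its last refresh.
    Returns the sources whose data has aged past its expected interval."""
    now = time.time() if now is None else now
    return [src for src, ts in last_updated.items()
            if now - ts > MAX_AGE.get(src, 3600)]  # default: 1h for unknown sources

now = 1_700_000_000
print(stale_sources({"odds": now - 4 * 3600, "injuries": now - 3600}, now=now))
```

Anything this function returns would, per the text, either trigger a refresh or flag the affected picks for manual review.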
Quality Gates
Not every simulation output becomes a recommendation. Picks must pass through multiple quality gates before reaching the premium tier:
- Minimum edge threshold — The calibrated model probability must exceed the market implied probability by a league-specific minimum, typically 3-8 percentage points.
- Minimum simulation confidence — The win probability or cover probability must meet a minimum threshold (varies by league and bet type).
- Profitability zone check — The pick's zone (league + bet type + contextual factors) must not be tagged RED in the zone analyzer.
- Regime gate — The pick must pass the current regime calibrator's threshold, which adapts based on recent performance.
- Data freshness check — Underlying data (odds, injuries, player profiles) must be current within expected intervals.
- Game start filter — Games that have already started are automatically excluded, preventing stale pick publication.
These gates operate independently. A pick with a large edge will still be blocked if it falls in a RED profitability zone. A pick in a GREEN zone will still be blocked if it does not meet the minimum edge threshold. This layered approach ensures that only picks meeting multiple independent criteria reach the recommendation stage.
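The gates' independence falls out naturally if each gate is a standalone predicate and a pick must pass all of them. Field names and thresholds in this sketch are hypothetical:

```python
# Each gate is an independent predicate over a pick record.
def min_edge_gate(pick):
    return pick["edge"] >= pick.get("min_edge", 0.03)

def zone_gate(pick):
    return pick.get("zone") != "RED"

def freshness_gate(pick):
    return pick.get("data_fresh", False)

def game_start_gate(pick):
    return not pick.get("game_started", False)

GATES = [min_edge_gate, zone_gate, freshness_gate, game_start_gate]

def passes_all_gates(pick):
    """Any single gate failure blocks the pick, regardless of the others."""
    return all(gate(pick) for gate in GATES)

pick = {"edge": 0.06, "zone": "GREEN", "data_fresh": True, "game_started": False}
print(passes_all_gates(pick))                  # True
print(passes_all_gates({**pick, "zone": "RED"}))  # False: RED zone alone blocks it
```

Structuring the gates as a flat list of predicates also makes the layering easy to extend: adding the regime or confidence gate means appending one more function, with no changes to the pipeline.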