Forecast accuracy · public scorecard

How accurate is the model?

We rate every race, every day. To keep us honest, this page is automatically generated from held-out evaluations — races the model never saw during training. We do not describe how the model works on this page. We describe how well it performs.

Average miss
2.21 pt
median absolute error · margin
Right winner
90.8%
fraction of held-out races where the called winner won
High-confidence calls
100.0%
accuracy on races where we gave ≥80% confidence (n=53)
Calibration gap
5.3 pt
expected vs actual · lower is better
In short

On average, our predicted final margin is off by about 2 points. We call the right winner roughly 9 in 10 times overall, and 20 in 20 times when we say a race is likely or safe. When we say "60% chance Democrat wins", that race breaks D about 60% of the time — though we're a little overconfident on the very tightest races, and we're working on it.
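A rough sketch of how the headline metrics could be computed from held-out races. The race data, field layout, and probability-binning scheme here are illustrative assumptions, not the scorecard's actual pipeline:

```python
from collections import defaultdict
from statistics import median

# Hypothetical held-out records: (predicted_margin, actual_margin,
# predicted_win_probability). Positive margin = the called winner's side.
races = [
    (4.0, 6.5, 0.72),
    (-3.0, -1.2, 0.35),
    (1.5, -0.8, 0.58),
    (8.0, 10.1, 0.88),
]

# Average miss: median absolute error of the predicted margin.
avg_miss = median(abs(pred - actual) for pred, actual, _ in races)

# Right winner: does the sign of the predicted margin match the outcome?
right_winner = sum(
    (pred > 0) == (actual > 0) for pred, actual, _ in races
) / len(races)

def calibration_gap(probs, outcomes, n_bins=5):
    """Bucket races by predicted win probability, then compare the
    expected win rate in each bucket to the observed win rate."""
    bins = defaultdict(list)
    for p, won in zip(probs, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, won))
    gaps = []
    for bucket in bins.values():
        expected = sum(p for p, _ in bucket) / len(bucket)
        actual = sum(w for _, w in bucket) / len(bucket)
        gaps.append(abs(expected - actual))
    return sum(gaps) / len(gaps)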

Has it been consistent?

Per-cycle accuracy

Mean absolute error on each historical cycle's held-out races (no peeking).

[Bar chart] Mean absolute error (pt) by cycle: 2020 · 11.47 · 2022 · 7 · 2024 · 5.18
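The per-cycle numbers are just the mean absolute error grouped by cycle. A minimal sketch, with made-up records in an assumed (cycle, predicted_margin, actual_margin) layout:

```python
from collections import defaultdict

# Hypothetical held-out races: (cycle, predicted_margin, actual_margin).
held_out = [
    (2020, 3.0, 12.0), (2020, -5.0, 2.0),
    (2022, 1.0, 6.0),  (2022, -2.0, -4.0),
    (2024, 7.0, 9.0),  (2024, 0.5, -3.5),
]

# Group absolute margin errors by cycle, then average within each group.
by_cycle = defaultdict(list)
for cycle, pred, actual in held_out:
    by_cycle[cycle].append(abs(pred - actual))

mae = {cycle: sum(errs) / len(errs) for cycle, errs in sorted(by_cycle.items())}
```

Because each cycle's races are held out, this is an honest per-cycle score rather than a fit to data the model has already seen.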
Does it improve as election day approaches?

Accuracy by horizon

Mean absolute error (pt) by forecast horizon, in days before election day.

[Bar chart] 60–90 days · 8.03 · 30–60 days · 7.98 · 7–30 days · 7.94 · 0–7 days · 7.92
Performance by data quality

The model knows what it doesn't know

high quality
2.2 pt
average miss
90.8% called correctly · n=65
medium quality
4.8 pt
average miss
92.7% called correctly · n=1142