Reading AI Predictions

AI predictions are not magic, and they are not fraudulent; they are pattern recognition with documented failure modes. This hub explains what goes into a prediction: the feature inputs, how importance is assigned to them, what context the model can and cannot see, and the specific conditions under which you should question or override the output. It also addresses the elephant in the room: inflated accuracy claims. Our validated accuracy range is 66-83%. Anyone claiming 90%+ is selling something.

FAQ

  • Do explanations make predictions certain?

    No. Explanations improve transparency but do not remove uncertainty. Every prediction carries a confidence range.

  • When should I ignore the model?

    When high-impact context arrives after the model was last updated: late injury news, a starting lineup change, unusual weather. The failure modes article covers this in detail.

  • Why is 66-83% honest when 95% is not?

    Our range comes from held-out test data with documented sources of error. Claims of 95% are almost always backtests run on the same data the model was trained on, which inflates results.

  • Is this content sport-specific?

    The concepts apply across all sports. Sport-specific model limitations are documented on each sport's prediction page.
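
The held-out vs. training-data point in the FAQ can be demonstrated with a toy experiment. The sketch below uses a hypothetical nearest-neighbour "model" on synthetic noisy data (not this hub's actual models or data): a model that memorizes its training set scores perfectly when evaluated on that same set, while its accuracy on held-out data stays bounded by the noise in the labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic binary task: the label depends on two of five features,
    # plus substantial noise, so no model can be near-perfect on new data.
    X = rng.normal(size=(n, 5))
    y = ((X[:, 0] + X[:, 1] + rng.normal(scale=1.5, size=n)) > 0).astype(int)
    return X, y

X_train, y_train = make_data(700)
X_test, y_test = make_data(300)

def predict(X):
    # 1-nearest-neighbour "model": pure memorization of the training set.
    dists = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[dists.argmin(axis=1)]

# Evaluated on its own training data, the memorizer looks perfect:
# each training point's nearest neighbour is itself.
train_acc = (predict(X_train) == y_train).mean()

# Evaluated on held-out data, accuracy drops to what the noise allows.
test_acc = (predict(X_test) == y_test).mean()

print(f"training-set accuracy: {train_acc:.0%}")  # the inflated "95%+" number
print(f"held-out accuracy:     {test_acc:.0%}")   # the honest number
```

The gap between the two printed numbers is the inflation the FAQ describes: quoting the first number as "accuracy" is how implausible claims are manufactured.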