by pjt
Last Updated August 29, 2018 08:19 AM

I am trying to find the best way to test the accuracy of a predictive model for the outcome of a game between two random teams. The model outputs, for each game, the probability that team A will win (the "win chance"). I have gathered almost 3,500 samples pairing the predicted win chance with the actual outcome.

As a crude measure, I broke the win chances down into bins of 5% increments (or wider where the sample size was small). Within each bin I compared the average predicted win chance with the percentage of games actually won.

Here is how I have organized my data so far:

- "bracket" is the same as bin.
- "low-high" is the win-chance range for each bin, e.g. a 20% to 24% chance that team A will win.
- "games" is the number of data points (samples) in each bin.
- "win loss draw" are the possible outcomes for each sample; I'm mainly focused on the wins.
- "my win%" is the observed win rate of the samples in each bin.
- "predicted XVM" is the average predicted win chance of all data points within each bin.
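The binning procedure above can be sketched in code. This is a minimal illustration with synthetic data (the function name `calibration_table` and the simulated predictions are my own, not from the original data): each prediction is dropped into a 5% bin, then the bin's mean predicted win chance is compared with its observed win rate.

```python
import random

def calibration_table(predictions, outcomes, bin_width=0.05):
    """Group (win-chance, outcome) pairs into bins of `bin_width` and
    compare the mean predicted win chance with the observed win rate."""
    bins = {}
    n_bins = int(round(1 / bin_width))
    for p, won in zip(predictions, outcomes):
        # Bin index for this prediction, e.g. 0.22 -> bin 4 (0.20-0.25).
        idx = min(int(p / bin_width), n_bins - 1)
        bins.setdefault(idx, []).append((p, won))
    table = []
    for idx in sorted(bins):
        pairs = bins[idx]
        games = len(pairs)
        mean_pred = sum(p for p, _ in pairs) / games
        win_rate = sum(w for _, w in pairs) / games
        table.append((idx * bin_width, (idx + 1) * bin_width,
                      games, mean_pred, win_rate))
    return table

# Synthetic, well-calibrated data: each outcome is drawn with exactly
# the predicted win chance, so predicted and observed should agree.
random.seed(0)
preds = [random.random() for _ in range(3500)]
outs = [1 if random.random() < p else 0 for p in preds]
for lo, hi, games, mean_pred, win_rate in calibration_table(preds, outs):
    print(f"{lo:.2f}-{hi:.2f}  games={games:4d}  "
          f"predicted={mean_pred:.3f}  observed={win_rate:.3f}")
```

With real data the "observed" column plays the role of "my win%" and the "predicted" column the role of "predicted XVM".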

At first glance it looks quite accurate, but I would like a concrete method or value to show how accurate it is. I'm guessing that breaking it down into bins is almost like setting confidence (credible?) intervals. How else could I approach this?
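One way to make the "confidence interval" idea concrete is to put a binomial confidence interval around each bin's observed win rate and check whether the average predicted win chance falls inside it. A minimal sketch using the Wilson score interval (my choice of interval, one of several reasonable options):

```python
import math

def wilson_interval(wins, games, z=1.96):
    """Approximate 95% Wilson score interval for a bin's observed win rate."""
    if games == 0:
        return (0.0, 1.0)
    phat = wins / games
    denom = 1 + z * z / games
    centre = (phat + z * z / (2 * games)) / denom
    half = z * math.sqrt(phat * (1 - phat) / games
                         + z * z / (4 * games * games)) / denom
    return (centre - half, centre + half)

# Hypothetical bin: 300 games, 66 wins (22% observed win rate).
lo, hi = wilson_interval(66, 300)
print(f"observed 22.0%, 95% CI: {lo:.1%} to {hi:.1%}")
```

If the bin's average predicted win chance (e.g. 22.5%) lies inside that interval, the bin's observations are consistent with the model's predictions at that confidence level.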
