Update link to the functions that show how stars are calculated

- now links to the folder that contains the individual platform files and explains how to find the relevant code
- haven't checked that the relative link works, so it may need to be a direct link to https://github.com/quantified-uncertainty/metaforecast/tree/master/src/backend/platforms
Nikos Bosse 2022-05-22 11:02:07 +02:00 committed by GitHub
parent 40c6f57c11
commit 5c65cdf2f7


@@ -63,7 +63,7 @@ In general, if you want to integrate metaforecast into your service, we want to
## What are "stars" and how are they computed
-Star ratings—e.g. ★★★☆☆—are an indicator of the quality of an aggregate forecast for a question. These ratings currently try to reflect my own best judgment and the best judgment of forecasting experts I've asked, based on our collective experience forecasting on these platforms. Thus, stars have a strong subjective component which could be formalized and refined in the future. You can see the code used to decide how many stars to assign [here](./src/backend/utils/stars.js).
+Star ratings—e.g. ★★★☆☆—are an indicator of the quality of an aggregate forecast for a question. These ratings currently try to reflect my own best judgment and the best judgment of forecasting experts I've asked, based on our collective experience forecasting on these platforms. Thus, stars have a strong subjective component which could be formalized and refined in the future. You can see the code used to decide how many stars a forecast gets by looking at the `calculateStars()` function in each platform's file [here](./src/backend/platforms).
With regard to quality, I am most uncertain about Smarkets, Hypermind, Ladbrokes and WilliamHill, as I haven't used them as much. Also note that, whatever other redeeming features they might have, prediction markets rarely go above 95% or below 5%.
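
As a rough illustration only (not the repository's actual code), a per-platform `calculateStars()` is essentially a small heuristic that returns a 1–5 rating. The sketch below assumes hypothetical inputs such as `numForecasts` and `volume`; the real functions under `src/backend/platforms/` each encode their own subjective baseline and signals.

```typescript
// Hypothetical sketch: the real calculateStars() implementations live in
// src/backend/platforms/<platform>.ts and may use different inputs and thresholds.

interface QuestionData {
  numForecasts?: number; // assumed field name, for illustration
  volume?: number;       // assumed trading volume, for market-style platforms
}

// A per-platform star rating: a mostly hard-coded baseline reflecting
// subjective judgment of the platform, nudged by question-level data.
function calculateStars(data: QuestionData): number {
  let stars = 2; // assumed baseline for this illustrative platform

  // More forecasts or more volume -> a somewhat more trustworthy aggregate.
  if ((data.numForecasts ?? 0) > 100 || (data.volume ?? 0) > 1000) {
    stars = 3;
  }
  if ((data.numForecasts ?? 0) > 300 || (data.volume ?? 0) > 10000) {
    stars = 4;
  }

  // Clamp to the 1-5 star scale shown on the site.
  return Math.max(1, Math.min(5, stars));
}

console.log(calculateStars({ numForecasts: 250, volume: 500 })); // 3
```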