Fetch forecasts from prediction markets/forecasting platforms to make them searchable. Integrate these forecasts into other services. https://metaforecast.org/

What this is

Metaforecast is a search engine for probabilities from various prediction markets and forecasting platforms. Try searching "Trump", "China" or "Semiconductors".

This repository includes the source code for both the website and the library that fetches forecasts and keeps them up to date. We also aim to provide tooling to integrate metaforecast with other services.

How to run

1. Download this repository

$ git clone https://github.com/QURIresearch/metaforecast
$ cd metaforecast
$ npm install

2. Set up a database and environment variables

You'll need a PostgreSQL instance, either local (see https://www.postgresql.org/download/) or in the cloud (for example, you can spin one up on https://www.digitalocean.com/products/managed-databases-postgresql or https://supabase.com/).

Environment variables can be set in an .env file. You'll need to configure at least DIGITALOCEAN_POSTGRES.

See ./docs/configuration.md for details.
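For example, a minimal .env might contain just the database connection string mentioned above (the value below is a placeholder; substitute your own host and credentials):

```
DIGITALOCEAN_POSTGRES=postgres://user:password@localhost:5432/metaforecast
```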

3. Actually run

npm run cli starts a local CLI that presents a menu of choices. To skip the menu, pass the option name directly, e.g., npm run cli wildeford.

npm run next-dev starts a Next.js dev server with the website on http://localhost:3000.

4. Example: download the metaforecasts database

$ git clone https://github.com/QURIresearch/metaforecast
$ cd metaforecast
$ npm install
$ node src/backend/manual/manualDownload.js

Integrations

Metaforecast has been integrated into:

  • Twitter, using our @metaforecast bot
  • Global Guessing, which integrates our dashboards
  • Fletcher, a popular Discord bot. You can invoke metaforecast with !metaforecast search-term
  • Elicit, which uses GPT-3 to deliver vastly superior semantic search (as opposed to fuzzy word matching). If you have access to the Elicit IDE, you can use the action "Search Metaforecast database". Note that this integration is not being updated regularly.

We also provide a public database, which can be accessed with a script similar to this one, and we are open to integrating our Algolia search instance with other trusted services (in addition to Fletcher).
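Since the linked script isn't reproduced here, here is a rough sketch of one way to query metaforecast programmatically, via its GraphQL endpoint. The endpoint path, query name, and fields below are assumptions based on schema.graphql in this repo; check the live schema before relying on them.

```typescript
// Assumed endpoint for the public GraphQL API.
const ENDPOINT = "https://metaforecast.org/api/graphql";

// Build the POST body for a search query; the operation and field names
// are illustrative guesses, not a documented contract.
function buildSearchRequest(term: string) {
  return {
    query: `query Search($input: SearchInput!) {
      searchQuestions(input: $input) { id title }
    }`,
    variables: { input: { query: term, limit: 10 } },
  };
}

// Usage (Node 18+, which provides a global fetch):
async function searchMetaforecast(term: string) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSearchRequest(term)),
  });
  return (await res.json()).data;
}
```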

In general, if you want to integrate metaforecast into your service, we want to hear from you.

Code layout

  • frontend code is in src/pages/, src/web/ and in a few other places which are required by Next.js (e.g. root-level configs in postcss.config.js and tailwind.config.js)
  • various backend code is in src/backend/
  • fetching libraries for various platforms are in src/backend/platforms/
  • rudimentary documentation is in docs/

What are "stars" and how are they computed

Star ratings (e.g. ★★★☆☆) indicate the quality of an aggregate forecast for a question. These ratings currently reflect my own best judgment and that of forecasting experts I've asked, based on our collective experience forecasting on these platforms. Stars therefore have a strong subjective component, which could be formalized and refined in the future. You can see how many stars a forecast gets by looking at the calculateStars() function in each platform's file under src/backend/platforms/.
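For illustration only, here is a toy heuristic in the spirit of calculateStars(). The real per-platform implementations live under src/backend/platforms/ and differ in their inputs and thresholds; every baseline and cutoff below is invented.

```typescript
// Hypothetical inputs a rating heuristic might consider.
interface StarInputs {
  volume?: number;
  numForecasters?: number;
}

// Toy sketch: start from an invented per-platform baseline,
// bump for forecaster activity, and cap at five stars.
function calculateStarsSketch(platform: string, inputs: StarInputs): number {
  const baseline: Record<string, number> = {
    metaculus: 3,
    polymarket: 3,
    guesstimate: 1,
  };
  let stars = baseline[platform] ?? 2; // unknown platforms get a middling default
  if ((inputs.numForecasters ?? 0) > 100) stars += 1;
  return Math.min(stars, 5);
}
```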

Regarding quality, I am most uncertain about Smarkets, Hypermind, Ladbrokes and WilliamHill, as I haven't used them as much. Also note that, whatever their other redeeming features, prediction markets rarely go above 95% or below 5%.

Tech stack

Overall, the services which we use are:

  • Algolia for search
  • Netlify for website deployment
  • Heroku for background jobs, e.g. fetching new forecasts
  • Postgres on DigitalOcean for database

Various notes

  • Commits follow conventional commits
  • For Elicit and Metaculus, this library currently filters out questions with <10 predictions.
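The <10-predictions filter above can be sketched as follows. The Question shape and field name here are illustrative, not the repo's actual types:

```typescript
// Minimal illustrative question shape (hypothetical field names).
interface Question {
  title: string;
  numPredictions: number;
}

// Keep only questions with at least 10 predictions, mirroring the
// filtering behavior described above.
const keepWellTraded = (questions: Question[]): Question[] =>
  questions.filter((q) => q.numPredictions >= 10);
```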