From 106a5bb3a334817e40e3d1b08aefcbf75c545f5a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nu=C3=B1o=20Sempere?=
Date: Mon, 6 May 2019 16:08:14 +0200
Subject: [PATCH] Update Self-experimentation-calibration.md

---
 rat/Self-experimentation-calibration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rat/Self-experimentation-calibration.md b/rat/Self-experimentation-calibration.md
index 0f9c629..dceb1eb 100644
--- a/rat/Self-experimentation-calibration.md
+++ b/rat/Self-experimentation-calibration.md
@@ -231,4 +231,4 @@ If I were to redo this experiment, I'd:
 
 ## 4. Conclusion
 
-In conclusion, I am surprised that the dumb model beats the others most of the time, though I think this might be explained by the combination of not having that much data and of having a lot of variables: the random errors in my regression are large. I see that I am in general well calibrated (in the particular domain analyzed here) but with room for improvement when giving 1:5, 1:6, and 1:15 odds.
+In conclusion, I am surprised that the dumb model beats the others most of the time, though I think this might be explained by the combination of not having that much data and of having a lot of variables: the random errors in my regression are large. I see that I am in general well calibrated (in the particular domain under analysis) but with room for improvement when giving 1:5, 1:6, and 1:15 odds.