# squiggle.c
squiggle.c is a self-contained C99 library that provides functions for simple Monte Carlo estimation, inspired by [Squiggle](https://www.squiggle-language.com/).
## Motivation
### What am I trying to do here?
- I am trying to build a reliable alternative to the original Squiggle, one that works for me and addresses my frustrations with it.
- Some adjectives: [grug brain](https://grugbrain.dev/), [lindy](https://en.wikipedia.org/wiki/Lindy_effect), [suckless](https://suckless.org/)
- I am trying to make something that is simple enough that I and others can fully understand. squiggle.c is less than 700 lines of C, with a core of <230 lines. You, a somewhat technically sophisticated reader, could just read it and grasp its contents, and are encouraged to do so.
### Why C?
- Because it is fast
- Because I enjoy it
- Because C is honest
- Because it will last long
- Because it can be made faster if need be, e.g., with a multi-threading library like OpenMP, by implementing faster but more complex algorithms, or more simply, by inlining the sampling functions (adding an `inline` directive before their function declaration)
- **Because there are few abstractions between it and machine code** (C => assembly => machine code with gcc, or C => machine code, with tcc), leading to fewer errors beyond the programmer's control.
- Because it can fit in my head
- Because if you can implement something in C, you can implement it anywhere else
### Design choices
This code should aim to be correct, then simple, then fast.
- It should be correct. The user should be able to rely on it and not think about whether errors come from the library.
- Nonetheless, the user should understand the limitations of sampling-based methods. See the section on [Tests and the long tail of the lognormal](https://git.nunosempere.com/personal/squiggle.c/FOLK_WISDOM.md#tests-and-the-long-tail-of-the-lognormal) for a discussion of how sampling is bad at capturing some aspects of distributions with long tails.
- It should be clear, conceptually simple. Simple for me to implement, simple for others to understand.
- It should be fast. But when speed conflicts with simplicity, choose simplicity. For example, there might be several possible algorithms to sample a distribution, each of which is faster over part of the domain. In that case, it's conceptually simpler to just pick one algorithm, and pay the—normally small—performance penalty.
- In any case, though, the code should still be *way faster* than, say, Python.
Note that being terse, or avoiding verbosity, is a non-goal. This is in part because of the constraints that C imposes. But it also aids with clarity and conceptual simplicity, as the issue of correlated samples illustrates in the next section.
Caveats: Parallelism might hide monsters. The histogram function is pretty but was created with the aid of GPT, and so might have errors.
## Getting started
### Example upfront
Download squiggle.c, for instance:
```sh
$ rm -r -f squiggle_c
$ wget https://git.nunosempere.com/personal/squiggle.c/raw/branch/master/squiggle.c
$ wget https://git.nunosempere.com/personal/squiggle.c/raw/branch/master/squiggle.h
$ wget https://git.nunosempere.com/personal/squiggle.c/raw/branch/master/squiggle_more.c
$ wget https://git.nunosempere.com/personal/squiggle.c/raw/branch/master/squiggle_more.h
$ mkdir temp
$ mv squiggle* temp
$ mv temp squiggle_c
```
Write your model, for instance in a file named `samples.c`. The below is an overcomplicated example, because it has a long digression in the middle, which I think is funny. May change:
```C
#include "squiggle_c/squiggle.h"
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
double sample_fermi_logspace(uint64_t * seed)
{
// Replicate <https://arxiv.org/pdf/1806.02404.pdf>, and in particular the red line in page 11.
// You can see a simple version of this function in naive.c in this same folder
double log_rate_of_star_formation = sample_uniform(log(1), log(100), seed);
double log_fraction_of_stars_with_planets = sample_uniform(log(0.1), log(1), seed);
double log_number_of_habitable_planets_per_star_system = sample_uniform(log(0.1), log(1), seed);
double log_rate_of_life_formation_in_habitable_planets = sample_normal(1, 50, seed);
double log_fraction_of_habitable_planets_in_which_any_life_appears;
/*
Consider:
a = underlying normal
b = rate_of_life_formation_in_habitable_planets = exp(underlying normal) = exp(a)
c = 1 - exp(-b) = fraction_of_habitable_planets_in_which_any_life_appears
d = log(c)
Looking at the Taylor expansion for c = 1 - exp(-b), it's
    b - b^2/2 + b^3/6 - b^4/24, etc.
<https://www.wolframalpha.com/input?i=1-exp%28-x%29>
When b ~ 0 (as is often the case), this is close to b.
But now, if b ~ 0, c ~ b
and d = log(c) ~ log(b) = log(exp(a)) = a
Now, we could play around with estimating errors,
and indeed if we want b^2/2 = exp(a)^2/2 < 10^(-n), i.e., to have n decimal digits of precision,
we could compute this as e.g., a < (nlog(10) + log(2))/2
so for example if we want ten digits of precision, that's a < -11
Empirically, the two numbers as calculated in C do become really close around 11 or so,
and at 38 that calculation results in a -inf (so probably a floating point error or similar.)
So we should be using that formula for somewhere between -38 << a < -11
I chose -16 as a happy medium after playing around with
double invert(double x){
return log(1-exp(-exp(-x)));
}
for(int i=0; i<64; i++){
double j = i;
printf("for %lf, log(1-exp(-exp(-x))) is calculated as... %lf\n", j, invert(j));
}
and <https://www.wolframalpha.com/input?i=log%281-exp%28-exp%28-16%29%29%29>
*/
if (log_rate_of_life_formation_in_habitable_planets < -16) {
log_fraction_of_habitable_planets_in_which_any_life_appears = log_rate_of_life_formation_in_habitable_planets;
} else {
double rate_of_life_formation_in_habitable_planets = exp(log_rate_of_life_formation_in_habitable_planets);
double fraction_of_habitable_planets_in_which_any_life_appears = -expm1(-rate_of_life_formation_in_habitable_planets);
log_fraction_of_habitable_planets_in_which_any_life_appears = log(fraction_of_habitable_planets_in_which_any_life_appears);
}
double log_fraction_of_planets_with_life_in_which_intelligent_life_appears = sample_uniform(log(0.001), log(1), seed);
double log_fraction_of_intelligent_planets_which_are_detectable_as_such = sample_uniform(log(0.01), log(1), seed);
double log_longevity_of_detectable_civilizations = sample_uniform(log(100), log(10000000000), seed);
double log_n =
log_rate_of_star_formation +
log_fraction_of_stars_with_planets +
log_number_of_habitable_planets_per_star_system +
log_fraction_of_habitable_planets_in_which_any_life_appears +
log_fraction_of_planets_with_life_in_which_intelligent_life_appears +
log_fraction_of_intelligent_planets_which_are_detectable_as_such +
log_longevity_of_detectable_civilizations;
return log_n;
}
double sample_are_we_alone_logspace(uint64_t * seed)
{
double log_n = sample_fermi_logspace(seed);
return ((log_n > 0) ? 1 : 0);
// log_n > 0 => n > 1
}
int main()
{
// set randomness seed
uint64_t* seed = malloc(sizeof(uint64_t));
*seed = 1010101; // xorshift can't start with a seed of 0
// Note: come back to choice of seed.
double logspace_fermi_proportion = 0;
int n_samples = 1000 * 1000;
for (int i = 0; i < n_samples; i++) {
double result = sample_are_we_alone_logspace(seed);
logspace_fermi_proportion += result;
}
double p_not_alone = logspace_fermi_proportion / n_samples;
printf("Probability that we are not alone: %lf (%.lf%%)\n", p_not_alone, p_not_alone * 100);
free(seed);
}
```
Compile and run:
```sh
$ gcc -O3 samples.c ./squiggle_c/squiggle.c ./squiggle_c/squiggle_more.c -lm -fopenmp -o samples
$ ./samples
```
### Core strategy
The recommended strategy is to:
1. Define sampler functions, which take a seed, and return 1 sample
2. Compose those sampler functions to define your estimation model
3. Produce an array of samples from a sampler function
4. Get summary statistics for that array of samples.
### Examples
You can follow some example usage in the [examples/](examples/) folder. In [examples/core](examples/core/), we build up some functionality, starting from drawing one sample and finishing with a replication of [Dissolving the Fermi paradox](https://arxiv.org/abs/1806.02404). In [examples/more](examples/more), we present a few more complicated examples, like finding confidence intervals, a model of nuclear war, an estimate of how much exercise to do to lose 10kg, or an example using parallelism.
## Guarantees
The motte:
- I offer no guarantees about stability, correctness, performance, etc. I might, for instance, abandon the version in C and rewrite it in Zig, Nim, Rust, or Go.
- This project mostly exists for my own usage & for my own amusement.
- Caution! Think carefully before using this project for anything important.
- If you wanted to pay me to provide some stability or correctness guarantees, or to tweak this library for your own usage, or to teach you how to use it, you could do so [here](https://nunosempere.com/consulting).
- I am conflicted about parallelism. It *does* add more complexity, complexity that you can be bitten by if you are not careful and don't understand it. And this conflicts with the initial grug-brain motivation. At the same time, it is clever, and it is nice, and I like it a lot.
The bailey:
2023-08-19 17:22:49 +00:00
2024-02-01 19:24:44 +00:00
- You can vendor the code, i.e., save it as a dependency together with your other files. This renders you immune to any changes I may make.
- I've been hacking at this project for a while now, and I think I have a good grasp of its correctness and limitations. I've tried Nim and Zig, and I prefer C so far.
- I think the core interface is not likely to change much, though I've recently changed the interface for parallelism and for getting confidence intervals.
- I am using this code for a few important consulting projects, and I trust myself to operate it correctly.
## Licensing
This project is released under the MIT license, a permissive open-source license. You can see it in the LICENSE.txt file.
## squiggle extra functions
### Division between core functions and squiggle_more expansions
This library differentiates between core functions, which are pretty tightly scoped, and expansions and convenience functions, which are more meandering. Expansions are in `squiggle_more.c` and `squiggle_more.h`. To use them, take care to link them:
```C
// In your C source file
#include "squiggle_more.h"
```
```sh
# When compiling:
gcc -std=c99 -Wall -O3 example.c squiggle.c squiggle_more.c -lm -o ./example
```
#### Extra: summary statistics
`squiggle_more.c` has some helper functions to get confidence intervals. They are in `squiggle_more.c` because I'm still mulling over what their shape should be, and because until recently they were pretty limited and sub-optimal. But recently, I've put a bunch of effort into being able to get the confidence interval of an array of samples in O(number of samples), and into making confidence interval functions nicer and more general. So I might promote them to the main `squiggle.c` file.
The relevant functions are
```C
typedef struct ci_t {
double low;
double high;
} ci;
double array_get_median(double xs[], int n);
ci array_get_ci(ci interval, double* xs, int n);
ci array_get_90_ci(double xs[], int n);
void array_print_stats(double xs[], int n);
void array_print_histogram(double* xs, int n_samples, int n_bins);
```
#### Extra: parallelism
I provide some functions to draw samples in parallel. For "normal" squiggle.c models, where you define one model and then draw samples from it once at the end, they should be fine.
But for more complicated use cases, my recommendation would be to not use parallelism unless you know what you are doing, because of intricacies around setting seeds. Some gotchas and exercises for the reader:
- If you run the `sampler_parallel` function twice, you will get the same result. Why?
- If you run the `sampler_parallel` function on two different inputs, their outputs will be correlated. E.g., if you run two lognormals, indices which have higher samples in one will tend to have higher samples in the other one. Why?
- For a small amount of samples, if you run the `sampler_parallel` function, you will get better spread out random numbers than if you run things serially. Why?
That said, I found adding parallelism to be an interesting and engaging task. Most recently, I even optimized the code to ensure that two threads weren't accessing the same cache line at the same time, and it was very satisfying to see a 30% improvement as a result.
#### Extra: Algebraic manipulations
`squiggle_more.c` has some functions to do some simple algebra manipulations: sums of normals and products of lognormals. You can see some example usage [here](examples/more/07_algebra/example.c) and [here](examples/more/08_algebra_and_conversion/example.c).
#### Extra: cdf auxiliary functions
I provide some auxiliary functions that take a cdf, and return a sample from the distribution produced by that cdf. This might make it easier to program models, at the cost of a 20x to 60x slowdown in the parts of the code that use it.
I don't have any immediate plans to delete these functions, but I probably will eventually, since they are too slow/inefficient for my taste.
#### Extra: Error propagation vs exiting on error
The process of taking a cdf and returning a sample might fail: e.g., it relies on a Newton method, which might fail to converge because of cdf artifacts. The cdf itself might also fail, e.g., if a distribution only accepts a range of parameters, but is fed parameters outside that range.
This library provides two approaches:
1. Print the line and function in which the error occurred, then exit on error
2. In situations where there might be an error, return a struct containing either the correct value or an error message:
```C
struct box {
int empty;
double content;
char* error_msg;
};
```
The first approach produces terser programs but might not scale. The second approach seems like it could lead to more robust programs, but is more verbose.
Behaviour on error can be toggled by the `EXIT_ON_ERROR` variable. This library also provides a convenient macro, `PROCESS_ERROR`, to make error handling in either case much terser—see the usage in example 4 in the examples/ folder.
Overall, I'd describe the error handling capabilities of this library as pretty rudimentary. For example, this program might fail in surprising ways if you ask for a lognormal with negative standard deviation, because I haven't added error checking for that case yet.