Increase clarity of README and fix typos

NunoSempere 2023-12-11 11:08:16 +00:00
parent c480565051
commit 95cedff3ab


@@ -61,7 +61,7 @@ The motte:
 The bailey:
 - I've been hacking at this project for a while now, and I think I have a good grasp of its correctness and limitations. I've tried Nim and Zig, and I prefer C so far.
-- I think the core interface is not likely to change much stable, though I've been changing the interface for parallelism and for getting confidence intervals.
+- I think the core interface is not likely to change much, though I've recently changed the interface for parallelism and for getting confidence intervals.
 - I am using this code for a few important consulting projects, and I trust myself to operate it correctly.
 This project is released under the MIT license, a permissive open-source license. You can see it in the LICENSE.txt file.
@@ -103,7 +103,7 @@ c_samples = sq.sample(c, 10)
 print(c_samples)
 ```
-Should `c` be equal to `2`? or should it be equal to 2 times the expected distribution of the ratio of two independent draws from a (`2 * a/a`, as it were)?
+Should `c` be equal to `2`? or should it be equal to 2 times the expected distribution of the ratio of two independent draws from a (`2 * a/a`, as it were)? You don't know, because you are not operating on samples, you are operating on magical objects whose internals are hidden from you.
 In squiggle.c, this ambiguity doesn't exist, at the cost of much greater overhead & verbosity:
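The README's own squiggle.c example for this point sits outside the hunk above. Purely as a rough, hypothetical sketch of why explicit samples remove the ambiguity (not code from the repository; `sample_a` is a stand-in for whatever sampler defines `a`):

```C
#include <stdint.h>

double sample_a(uint64_t* seed); /* hypothetical stand-in sampler for the distribution `a` */

/* With explicit samples, *you* decide what c = 2 * a/a means. */
void fill_c(double* cs, int n, uint64_t* seed)
{
    for (int i = 0; i < n; i++) {
        double a1 = sample_a(seed);
        cs[i] = 2 * (a1 / a1); /* same draw in numerator and denominator: c is always exactly 2 */

        /* Alternative reading: two independent draws, so c has a spread around 2:
           double a2 = sample_a(seed);
           cs[i] = 2 * (a1 / a2); */
    }
}
```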
@@ -175,9 +175,9 @@ Hint: See examples/more/13_parallelize_min
 ### Note on sampling strategies
-Right now, I am drawing samples from a random number generator. It requires some finesse, particularly when using parallelism. But it works fine.
+Right now, I am drawing samples using a random number generator. It requires some finesse, particularly when using parallelism. But it works fine.
-But..., what if we could do something more elegant, more ingenious. In particular, what if instead of drawing samples, we had a mesh of equally spaced points in the range of floats. Then we could, for a given number of samples, better estimate the, say, mean of the distribution we are trying to model...
+But..., what if we could do something more elegant, more ingenious? In particular, what if instead of drawing samples, we had a mesh of equally spaced points in the range of floats? Then we could, for a given number of samples, better estimate the, say, mean of the distribution we are trying to model...
 The problem with that is that if we have some code like:
@@ -195,7 +195,7 @@ Then this doesn't work, because the values of a and b will be correlated: when a
 ```C
 double* model(int n_samples){
-    double* xs = malloc(n_samples);
+    double* xs = malloc((size_t)n_samples * sizeof(double));
     for(int i_mesh=0; i_mesh < sqrt(n_samples); i_mesh++){
        for(int j_mesh=0; j_mesh < sqrt(n_samples); j_mesh++){
            double a = sample_to(1, 10, i_mesh);
@@ -207,7 +207,7 @@ double* model(int n_samples){
 ```
-But that requires us to encode the shape of the model into the sampling function. It leads to an ugly nesting of for loops. It is a more complex approach. It is not [grug-brained](https://grugbrain.dev/). So every now and then I have to remind myself that this is not the way.
+But that requires us to encode the shape of the model into the sampling function. It leads to an ugly nesting of for loops. It is a more complex approach. It is not [grug-brained](https://grugbrain.dev/). So every now and then I have to remember that this is not the way.
 ### Tests and the long tail of the lognormal
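The "doesn't work" case the `@@ -195` hunk header refers to (the values of a and b becoming correlated) is not itself shown in the diff. Assuming a mesh-indexed `sample_to(low, high, index)` like the one used in the snippet above, a purely hypothetical sketch of that naive version might look like:

```C
/* Hypothetical sketch, not code from the repository; assumes <stdlib.h>
   and the mesh-indexed sample_to from the snippet above. A flat mesh walk
   reuses the same index for a and b, so they move in lockstep. */
double* model_naive(int n_samples){
    double* xs = malloc((size_t)n_samples * sizeof(double));
    for(int i_mesh = 0; i_mesh < n_samples; i_mesh++){
        double a = sample_to(1, 10, i_mesh); /* low mesh point => low a */
        double b = sample_to(1, 10, i_mesh); /* same mesh point => low b as well */
        xs[i_mesh] = a * b;                  /* a and b end up perfectly correlated */
    }
    return xs;
}
```

The nested i_mesh/j_mesh loops in the snippet above avoid this by sweeping a and b independently, which is exactly the "encode the shape of the model into the sampling function" cost the paragraph complains about.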
@@ -367,7 +367,7 @@ Overall, I'd describe the error handling capabilities of this library as pretty
 ### Other gotchas
-- Even though the C standard is ambiguous about this, this code assumes that doubles are 64 bit precision (otherwise the xorshift should be different).
+- Even though the C standard is ambiguous about this, this code assumes that doubles are 64 bit precision (otherwise the xorshift code should be different).
 ## Related projects
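The 64-bit-doubles assumption in the last hunk can be checked at compile time. A minimal sketch in standard C11, as one possible way to do it (not necessarily what the repository does):

```C
#include <float.h>
#include <limits.h>

/* Fail the build if doubles are not IEEE-754 binary64-sized, since the
   xorshift-to-double conversion relies on 64-bit doubles. */
_Static_assert(sizeof(double) * CHAR_BIT == 64, "this code assumes 64-bit doubles");
_Static_assert(DBL_MANT_DIG == 53, "this code assumes IEEE-754 binary64 doubles");
```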