Compare commits

...

25 Commits

SHA1 Message Date
829781b8a7 go: savepoint before deleting a few comments 2024-02-17 01:33:21 +01:00
bb1b21bbbb go: abstract sample_parallel into own function 2024-02-17 01:11:43 +01:00
b358c5b16a go: continue improving number of goroutines 2024-02-16 15:19:26 +01:00
aa3b406473 use different seeds for different threads 2024-02-16 15:13:21 +01:00
7c907f173d go: create type alias 2024-02-16 15:03:11 +01:00
76a73f5d13 go: add goroutines usage. But randomness still to be fixed 2024-02-16 14:48:39 +01:00
06438c522d go: add slices manually 2024-02-16 14:40:54 +01:00
14e298c3c9 go: remove prints 2024-02-16 14:17:28 +01:00
5029f67429 go: finish debugging weights code. 2024-02-16 14:15:48 +01:00
d3cb97684a go: add printfs so as to figure out weights bug 2024-02-16 14:10:10 +01:00
8ebe9487a5 go: continue working on mixture code 2024-02-16 13:58:35 +01:00
6417e0aecc add initial go mixture implementation 2024-02-16 13:52:28 +01:00
1f4eb1fec4 wrangle mixture weights in go 2024-02-16 13:43:55 +01:00
fa0065c96e wrangle go types 2024-02-16 13:43:29 +01:00
4544adb3d0 wrangle mixture syntax 2024-02-16 10:10:16 +01:00
651ade8b47 build, check initial times for go 2024-02-16 00:57:22 +01:00
bfb5c75070 add sample_to 2024-02-16 00:42:39 +01:00
c9f6e964ee continue defining simple go API 2024-02-16 00:40:02 +01:00
934c84e195 figure out return values & global var 2024-02-16 00:25:36 +01:00
5a36bec0ba initialize go program 2024-02-16 00:19:09 +01:00
1903a09e97 tweak squiggle.c makefile 2024-02-11 19:45:30 +01:00
841e4eda90 add -march=native flag to C 2024-02-11 19:43:48 +01:00
3fb6eb0c0e update squiggle version 2024-02-11 19:43:28 +01:00
54bd358f7e update time with -march=native flag 2024-02-11 19:24:00 +01:00
dd7c42d952 slight squiggle.c tweaks 2024-01-29 18:37:51 +01:00
12 changed files with 495 additions and 315 deletions

View File

@@ -25,6 +25,7 @@ DEBUG= #'-g'
 STANDARD=-std=c99
 WARNINGS=-Wall
 OPTIMIZED=-O3 #-O3 actually gives better performance than -Ofast, at least for this version
+LOCAL=-march=native
 OPENMP=-fopenmp

 ## Formatter
@@ -33,10 +34,10 @@ FORMATTER=clang-format -i -style=$(STYLE_BLUEPRINT)
 ## make build
 build: $(SRC)
-	$(CC) $(OPTIMIZED) $(DEBUG) $(SRC) $(OPENMP) $(MATH) -o $(OUTPUT)
+	$(CC) $(OPTIMIZED) $(DEBUG) $(SRC) $(LOCAL) $(OPENMP) $(MATH) -o $(OUTPUT)

 static:
-	$(CC) $(OPTIMIZED) $(DEBUG) $(SRC) $(OPENMP) $(MATH) -o $(OUTPUT)
+	$(CC) $(OPTIMIZED) $(DEBUG) $(SRC) $(LOCAL) $(OPENMP) $(MATH) -o $(OUTPUT)

 format: $(SRC)
 	$(FORMATTER) $(SRC)

View File

@@ -24,17 +24,17 @@ The name of this repository is a pun on two meanings of "time to": "how much tim
 | Language                    | Time      | Lines of code |
 |-----------------------------|-----------|---------------|
-| C                           | 5.6ms     | 252           |
+| C                           | 6.20ms    | 252           |
-| squiggle.c                  | 8.2ms     | 29*           |
+| squiggle.c                  | 7.20ms    | 29*           |
-| Nim                         | 40.8ms    | 84            |
+| Nim                         | 41.10ms   | 84            |
-| Lua (LuaJIT)                | 69.9ms    | 82            |
+| Lua (LuaJIT)                | 68.80ms   | 82            |
-| OCaml (flambda)             | 187.9ms   | 123           |
+| OCaml (flambda)             | 185.50ms  | 123           |
-| Squiggle (bun)              | 0.387s    | 14*           |
+| Squiggle (bun)              | 384.00ms  | 14*           |
-| Javascript (node)           | 0.445s    | 69            |
+| Javascript (node)           | 0.423s    | 69            |
-| SquigglePy (v0.27)          | 1.507s    | 18*           |
+| SquigglePy (v0.27)          | 1.542s    | 18*           |
-| R (3.6.1)                   | 4.508s    | 49            |
+| R (3.6.1)                   | 4.494s    | 49            |
-| Python 3.9                  | 11.879s   | 56            |
+| Python 3.9                  | 11.909s   | 56            |
-| Gavin Howard's bc           | 15.960s   | 101           |
+| Gavin Howard's bc           | 16.170s   | 101           |

 Time measurements taken with the [time](https://man7.org/linux/man-pages/man1/time.1.html) tool, using 1M samples. But different implementations use different algorithms and, occasionally, different time measuring methodologies (for the C, Nim and Lua implementations, I run the program 100 times and take the mean). Their speed was also measured under different loads in my machine. So I think that these time estimates are accurate within maybe ~2x or so.

16
go/makefile Normal file
View File

@@ -0,0 +1,16 @@
dev:
go run squiggle.go
build:
go build squiggle.go
build-complex:
go build -ldflags="-s -w" squiggle.go
# https://stackoverflow.com/questions/45003259/passing-an-optimization-flag-to-a-go-compiler
run:
./squiggle
time-linux:
@echo "Running 100x and taking avg time: ./squiggle"
@t=$$(/usr/bin/time -f "%e" -p bash -c 'for i in {0..100}; do ./squiggle; done' 2>&1 >/dev/null | grep real | awk '{print $$2}' ); echo "scale=2; 1000 * $$t / 100" | bc | sed "s|^|Time using 16 threads: |" | sed 's|$$|ms|' && echo

8
go/notes.md Normal file
View File

@@ -0,0 +1,8 @@
- [x] Hello world program
- [x] Look into randomness sources in go
- rand/v2 api: <https://pkg.go.dev/math/rand/v2>
- [x] Test with a million samples of a simple lognormal, just to get a sense of speed
- [x] Add mixture distribution
- [x] Anonymous functions for nested: https://stackoverflow.com/questions/74523441/nested-functions-in-o
- [ ] Look into go routines for filling up an array.
- Mhh, it's different from threads.

BIN
go/squiggle Executable file

Binary file not shown.

147
go/squiggle.go Normal file
View File

@@ -0,0 +1,147 @@
package main
import "fmt"
import "math"
import "sync"
import rand "math/rand/v2"
type src = *rand.Rand
type func64 = func(src) float64
// https://pkg.go.dev/math/rand/v2
func sample_unit_uniform(r src) float64 {
return r.Float64()
}
func sample_unit_normal(r src) float64 {
return r.NormFloat64()
}
func sample_uniform(start float64, end float64, r src) float64 {
return sample_unit_uniform(r)*(end-start) + start
}
func sample_normal(mean float64, sigma float64, r src) float64 {
return mean + sample_unit_normal(r)*sigma
}
func sample_lognormal(logmean float64, logstd float64, r src) float64 {
return (math.Exp(sample_normal(logmean, logstd, r)))
}
func sample_normal_from_90_ci(low float64, high float64, r src) float64 {
var normal90 float64 = 1.6448536269514727
var mean float64 = (high + low) / 2.0
var std float64 = (high - low) / (2.0 * normal90)
return sample_normal(mean, std, r)
}
func sample_to(low float64, high float64, r src) float64 {
// Given a (positive) 90% confidence interval,
// returns a sample from a lognormal with a matching 90% c.i.
// Key idea: If we want a lognormal with 90% confidence interval [a, b]
// we need only get a normal with 90% confidence interval [log(a), log(b)].
// Then see code for sample_normal_from_90_ci
var loglow float64 = math.Log(low)
var loghigh float64 = math.Log(high)
return math.Exp(sample_normal_from_90_ci(loglow, loghigh, r))
}
func sample_mixture(fs []func64, weights []float64, r src) float64 {
// fmt.Println("weights initially: ", weights)
var sum_weights float64 = 0
for _, weight := range weights {
sum_weights += weight
}
var total float64 = 0
var cumsummed_normalized_weights = append([]float64(nil), weights...)
for i, weight := range weights {
total += weight / sum_weights
cumsummed_normalized_weights[i] = total
}
var result float64
var flag int = 0
var p float64 = r.Float64()
for i, cnw := range cumsummed_normalized_weights {
if p < cnw {
result = fs[i](r)
flag = 1
break
}
}
// fmt.Println(cumsummed_normalized_weights)
if flag == 0 {
result = fs[len(fs)-1](r)
}
return result
}
func slice_fill(xs []float64, fs func64, r src) {
for i := range xs {
xs[i] = fs(r)
}
}
func sample_parallel(f func64, n_samples int) []float64 {
var num_threads = 16
var xs = make([]float64, n_samples)
var wg sync.WaitGroup
var h = n_samples / num_threads
wg.Add(num_threads)
for i := range num_threads {
var xs_i = xs[i*h : (i+1)*h]
go func(f func64) {
defer wg.Done()
var r = rand.New(rand.NewPCG(uint64(i), uint64(i+1)))
for i := range xs_i {
xs_i[i] = f(r)
}
}(f)
}
wg.Wait()
return xs
}
func main() {
var p_a float64 = 0.8
var p_b float64 = 0.5
var p_c float64 = p_a * p_b
ws := [4](float64){1 - p_c, p_c / 2, p_c / 4, p_c / 4}
sample_0 := func(r src) float64 { return 0 }
sample_1 := func(r src) float64 { return 1 }
sample_few := func(r src) float64 { return sample_to(1, 3, r) }
sample_many := func(r src) float64 { return sample_to(2, 10, r) }
fs := [4](func64){sample_0, sample_1, sample_few, sample_many}
model := func(r src) float64 { return sample_mixture(fs[0:], ws[0:], r) }
n_samples := 1_000_000
xs := sample_parallel(model, n_samples)
var avg float64 = 0
for _, x := range xs {
avg += x
}
avg = avg / float64(n_samples)
fmt.Printf("Average: %v\n", avg)
/*
n_samples := 1_000_000
var r = rand.New(rand.NewPCG(uint64(1), uint64(2)))
var avg float64 = 0
for i := 0; i < n_samples; i++ {
avg += sample_mixture(fs[0:], ws[0:], r)
}
avg = avg / float64(n_samples)
fmt.Printf("Average: %v\n", avg)
*/
}

View File

@@ -1,7 +1,8 @@
 OUTPUT=./samples
+CC=gcc

 build:
-	gcc -O3 samples.c ./squiggle_c/squiggle.c ./squiggle_c/squiggle_more.c -lm -fopenmp -o $(OUTPUT)
+	$(CC) -O3 -march=native samples.c ./squiggle_c/squiggle.c ./squiggle_c/squiggle_more.c -lm -fopenmp -o $(OUTPUT)

 install:
 	rm -r squiggle_c

Binary file not shown.

View File

@@ -3,7 +3,7 @@
 #include <stdio.h>
 #include <stdlib.h>

-int main()
+double sampler_result(uint64_t * seed)
 {
     double p_a = 0.8;
     double p_b = 0.5;
@@ -17,11 +17,12 @@ int main()
     int n_dists = 4;
     double weights[] = { 1 - p_c, p_c / 2, p_c / 4, p_c / 4 };
     double (*samplers[])(uint64_t*) = { sample_0, sample_1, sample_few, sample_many };
-    double sampler_result(uint64_t * seed)
-    {
     return sample_mixture(samplers, weights, n_dists, seed);
 }
+
+int main()
+{
     int n_samples = 1000 * 1000, n_threads = 16;
     double* results = malloc((size_t)n_samples * sizeof(double));
     sampler_parallel(sampler_result, results, n_threads, n_samples);

View File

@@ -8,17 +8,20 @@
 #include <stdlib.h>
 #include <string.h> // memcpy

-/* Parallel sampler */
+/* Cache optimizations */
 #define CACHE_LINE_SIZE 64
+// getconf LEVEL1_DCACHE_LINESIZE
+// <https://stackoverflow.com/questions/794632/programmatically-get-the-cache-line-size>
 typedef struct seed_cache_box_t {
     uint64_t seed;
-    char padding[CACHE_LINE_SIZE - sizeof(uint64_t*)];
+    char padding[CACHE_LINE_SIZE - sizeof(uint64_t)];
+    // Cache line size is 64 *bytes*, uint64_t is 64 *bits* (8 bytes). Different units!
 } seed_cache_box;
 // This avoids "false sharing", i.e., different threads competing for the same cache line
-// It's possible dealing with this shaves ~2ms
-// However, it's possible it doesn't, since pointers aren't changed, just their contents (and the location of their contents doesn't necessarily have to be close, since they are malloc'ed sepately)
-// Still, I thought it was interesting
+// Dealing with this shaves 4ms from a 12ms process, or a third of runtime
+// <http://www.nic.uoregon.edu/~khuck/ts/acumem-report/manual_html/ch06s07.html>

+/* Parallel sampler */
 void sampler_parallel(double (*sampler)(uint64_t* seed), double* results, int n_threads, int n_samples)
 {
@@ -41,13 +44,13 @@ void sampler_parallel(double (*sampler)(uint64_t* seed), double* results, int n_
     // uint64_t** seeds = malloc((size_t)n_threads * sizeof(uint64_t*));
     seed_cache_box* cache_box = (seed_cache_box*)malloc(sizeof(seed_cache_box) * (size_t)n_threads);
+    // seed_cache_box cache_box[n_threads]; // we could use the C stack. On normal linux machines, it's 8MB ($ ulimit -s). However, it doesn't quite feel right.
     srand(1);
     for (int i = 0; i < n_threads; i++) {
         // Constraints:
         // - xorshift can't start with 0
         // - the seeds should be reasonably separated and not correlated
         cache_box[i].seed = (uint64_t)rand() * (UINT64_MAX / RAND_MAX);
-        // printf("#%ld: %lu\n",i, *seeds[i]);

         // Other initializations tried:
         // *seeds[i] = 1 + i;
@@ -56,28 +59,53 @@ void sampler_parallel(double (*sampler)(uint64_t* seed), double* results, int n_
     }

     int i;
-#pragma omp parallel private(i, quotient)
+#pragma omp parallel private(i)
     {
 #pragma omp for
         for (i = 0; i < n_threads; i++) {
-            int quotient = n_samples / n_threads;
+            // It's possible I don't need the for, and could instead call omp
+            // in some different way and get the thread number with omp_get_thread_num()
             int lower_bound_inclusive = i * quotient;
             int upper_bound_not_inclusive = ((i + 1) * quotient); // note the < in the for loop below,
             for (int j = lower_bound_inclusive; j < upper_bound_not_inclusive; j++) {
                 results[j] = sampler(&(cache_box[i].seed));
-                // Could also result in inefficient cache stuff, but hopefully not too often
+                /*
+                t starts at 0 and ends at T
+                at t=0,
+                thread i accesses: results[i*quotient +0],
+                thread i+1 accesses: results[(i+1)*quotient +0]
+                at t=T
+                thread i accesses: results[(i+1)*quotient -1]
+                thread i+1 accesses: results[(i+2)*quotient -1]
+                The results[j] that are directly adjacent are
+                results[(i+1)*quotient -1] (accessed by thread i at time T)
+                results[(i+1)*quotient +0] (accessed by thread i+1 at time 0)
+                and these are themselves adjacent to
+                results[(i+1)*quotient -2] (accessed by thread i at time T-1)
+                results[(i+1)*quotient +1] (accessed by thread i+1 at time 2)
+                If T is large enough, which it is, two threads won't access the same
+                cache line at the same time.
+                Pictorially:
+                at t=0 ....i.........I.........
+                at t=T .............i.........I
+                and the two never overlap
+                Note that results[j] is a double; a double has 8 bytes (64 bits),
+                so 8 doubles fill a cache line of 64 bytes.
+                So we specifically won't get problems as long as n_samples/n_threads > 8
+                n_threads is normally 16, so n_samples > 128
+                Note also that this is only a problem in terms of speed; if n_samples < 128
+                the results are still computed, it'll just be slower
+                */
             }
         }
     }
     for (int j = divisor_multiple; j < n_samples; j++) {
         results[j] = sampler(&(cache_box[0].seed));
-        // we can just reuse a seed, this isn't problematic because we are not doing multithreading
+        // we can just reuse a seed,
+        // this isn't problematic because we've now stopped doing multithreading
     }
+    /*
+    for (int i = 0; i < n_threads; i++) {
+        free(cache_box[i].seed);
+    }
+    */
     free(cache_box);
 }
@@ -88,7 +116,7 @@ typedef struct ci_t {
     double high;
 } ci;

-static void swp(int i, int j, double xs[])
+inline static void swp(int i, int j, double xs[])
 {
     double tmp = xs[i];
     xs[i] = xs[j];
@@ -161,18 +189,222 @@ ci array_get_90_ci(double xs[], int n)
     return array_get_ci((ci) { .low = 0.05, .high = 0.95 }, xs, n);
 }

-ci sampler_get_ci(ci interval, double (*sampler)(uint64_t*), int n, uint64_t* seed)
+double array_get_median(double xs[], int n)
 {
-    UNUSED(seed); // don't want to use it right now, but want to preserve ability to do so (e.g., remove parallelism from internals). Also nicer for consistency.
-    double* xs = malloc((size_t)n * sizeof(double));
-    sampler_parallel(sampler, xs, 16, n);
-    ci result = array_get_ci(interval, xs, n);
-    free(xs);
-    return result;
+    int median_k = (int)floor(0.5 * n);
+    return quickselect(median_k, xs, n);
 }

-ci sampler_get_90_ci(double (*sampler)(uint64_t*), int n, uint64_t* seed)
+/* array print: potentially useful for debugging */
+void array_print(double xs[], int n)
 {
-    return sampler_get_ci((ci) { .low = 0.05, .high = 0.95 }, sampler, n, seed);
+    printf("[");
+    for (int i = 0; i < n - 1; i++) {
+        printf("%f, ", xs[i]);
+    }
+    printf("%f", xs[n - 1]);
+    printf("]\n");
 }
void array_print_stats(double xs[], int n)
{
ci ci_90 = array_get_ci((ci) { .low = 0.05, .high = 0.95 }, xs, n);
ci ci_80 = array_get_ci((ci) { .low = 0.1, .high = 0.9 }, xs, n);
ci ci_50 = array_get_ci((ci) { .low = 0.25, .high = 0.75 }, xs, n);
double median = array_get_median(xs, n);
double mean = array_mean(xs, n);
double std = array_std(xs, n);
printf("| Statistic | Value |\n"
"| --- | --- |\n"
"| Mean | %lf |\n"
"| Median | %lf |\n"
"| Std | %lf |\n"
"| 90%% confidence interval | %lf to %lf |\n"
"| 80%% confidence interval | %lf to %lf |\n"
"| 50%% confidence interval | %lf to %lf |\n",
mean, median, std, ci_90.low, ci_90.high, ci_80.low, ci_80.high, ci_50.low, ci_50.high);
}
void array_print_histogram(double* xs, int n_samples, int n_bins)
{
// Interface inspired by <https://github.com/red-data-tools/YouPlot>
if (n_bins <= 1) {
fprintf(stderr, "Number of bins must be greater than 1.\n");
return;
} else if (n_samples <= 1) {
fprintf(stderr, "Number of samples must be higher than 1.\n");
return;
}
int* bins = (int*)calloc((size_t)n_bins, sizeof(int));
if (bins == NULL) {
fprintf(stderr, "Memory allocation for bins failed.\n");
return;
}
// Find the minimum and maximum values from the samples
double min_value = xs[0], max_value = xs[0];
for (int i = 0; i < n_samples; i++) {
if (xs[i] < min_value) {
min_value = xs[i];
}
if (xs[i] > max_value) {
max_value = xs[i];
}
}
// Avoid division by zero for a single unique value
if (min_value == max_value) {
max_value++;
}
// Calculate bin width
double bin_width = (max_value - min_value) / n_bins;
// Fill the bins with sample counts
for (int i = 0; i < n_samples; i++) {
int bin_index = (int)((xs[i] - min_value) / bin_width);
if (bin_index == n_bins) {
bin_index--; // Last bin includes max_value
}
bins[bin_index]++;
}
// Calculate the scaling factor based on the maximum bin count
int max_bin_count = 0;
for (int i = 0; i < n_bins; i++) {
if (bins[i] > max_bin_count) {
max_bin_count = bins[i];
}
}
const int MAX_WIDTH = 50; // Adjust this to your terminal width
double scale = max_bin_count > MAX_WIDTH ? (double)MAX_WIDTH / max_bin_count : 1.0;
// Print the histogram
for (int i = 0; i < n_bins; i++) {
double bin_start = min_value + i * bin_width;
double bin_end = bin_start + bin_width;
int decimalPlaces = 1;
if ((0 < bin_width) && (bin_width < 1)) {
int magnitude = (int)floor(log10(bin_width));
decimalPlaces = -magnitude;
decimalPlaces = decimalPlaces > 10 ? 10 : decimalPlaces;
}
printf("[%*.*f, %*.*f", 4 + decimalPlaces, decimalPlaces, bin_start, 4 + decimalPlaces, decimalPlaces, bin_end);
char interval_delimiter = ')';
if (i == (n_bins - 1)) {
interval_delimiter = ']'; // last bucket is inclusive
}
printf("%c: ", interval_delimiter);
int marks = (int)(bins[i] * scale);
for (int j = 0; j < marks; j++) {
printf("█");
}
printf(" %d\n", bins[i]);
}
// Free the allocated memory for bins
free(bins);
}
void array_print_90_ci_histogram(double* xs, int n_samples, int n_bins)
{
// Code duplicated from previous function
// I'll consider simplifying it at some future point
// Possible ideas:
// - having only one function that takes any confidence interval?
// - having a utility function that is called by both functions?
ci ci_90 = array_get_90_ci(xs, n_samples);
if (n_bins <= 1) {
fprintf(stderr, "Number of bins must be greater than 1.\n");
return;
} else if (n_samples <= 10) {
fprintf(stderr, "Number of samples must be higher than 10.\n");
return;
}
int* bins = (int*)calloc((size_t)n_bins, sizeof(int));
if (bins == NULL) {
fprintf(stderr, "Memory allocation for bins failed.\n");
return;
}
double min_value = ci_90.low, max_value = ci_90.high;
// Avoid division by zero for a single unique value
if (min_value == max_value) {
max_value++;
}
double bin_width = (max_value - min_value) / n_bins;
// Fill the bins with sample counts
int below_min = 0, above_max = 0;
for (int i = 0; i < n_samples; i++) {
if (xs[i] < min_value) {
below_min++;
} else if (xs[i] > max_value) {
above_max++;
} else {
int bin_index = (int)((xs[i] - min_value) / bin_width);
if (bin_index == n_bins) {
bin_index--; // Last bin includes max_value
}
bins[bin_index]++;
}
}
// Calculate the scaling factor based on the maximum bin count
int max_bin_count = 0;
for (int i = 0; i < n_bins; i++) {
if (bins[i] > max_bin_count) {
max_bin_count = bins[i];
}
}
const int MAX_WIDTH = 40; // Adjust this to your terminal width
double scale = max_bin_count > MAX_WIDTH ? (double)MAX_WIDTH / max_bin_count : 1.0;
// Print the histogram
int decimalPlaces = 1;
if ((0 < bin_width) && (bin_width < 1)) {
int magnitude = (int)floor(log10(bin_width));
decimalPlaces = -magnitude;
decimalPlaces = decimalPlaces > 10 ? 10 : decimalPlaces;
}
printf("(%*s, %*.*f): ", 6 + decimalPlaces, "-∞", 4 + decimalPlaces, decimalPlaces, min_value);
int marks_below_min = (int)(below_min * scale);
for (int j = 0; j < marks_below_min; j++) {
printf("█");
}
printf(" %d\n", below_min);
for (int i = 0; i < n_bins; i++) {
double bin_start = min_value + i * bin_width;
double bin_end = bin_start + bin_width;
printf("[%*.*f, %*.*f", 4 + decimalPlaces, decimalPlaces, bin_start, 4 + decimalPlaces, decimalPlaces, bin_end);
char interval_delimiter = ')';
if (i == (n_bins - 1)) {
interval_delimiter = ']'; // last bucket is inclusive
}
printf("%c: ", interval_delimiter);
int marks = (int)(bins[i] * scale);
for (int j = 0; j < marks; j++) {
printf("█");
}
printf(" %d\n", bins[i]);
}
printf("(%*.*f, %*s): ", 4 + decimalPlaces, decimalPlaces, max_value, 6 + decimalPlaces, "+∞");
int marks_above_max = (int)(above_max * scale);
for (int j = 0; j < marks_above_max; j++) {
printf("█");
}
printf(" %d\n", above_max);
// Free the allocated memory for bins
free(bins);
}
/* Algebra manipulations */
@@ -225,216 +457,3 @@ ci convert_lognormal_params_to_ci(lognormal_params y)
     ci result = { .low = exp(loglow), .high = exp(loghigh) };
     return result;
 }
/* Scaffolding to handle errors */
// We will sample from an arbitrary cdf
// and that operation might fail
// so we build some scaffolding here
#define MAX_ERROR_LENGTH 500
#define EXIT_ON_ERROR 0
#define PROCESS_ERROR(error_msg) process_error(error_msg, EXIT_ON_ERROR, __FILE__, __LINE__)
typedef struct box_t {
int empty;
double content;
char* error_msg;
} box;
box process_error(const char* error_msg, int should_exit, char* file, int line)
{
if (should_exit) {
printf("%s, @, in %s (%d)", error_msg, file, line);
exit(1);
} else {
char error_msg[MAX_ERROR_LENGTH];
snprintf(error_msg, MAX_ERROR_LENGTH, "@, in %s (%d)", file, line); // NOLINT: We are being carefull here by considering MAX_ERROR_LENGTH explicitly.
box error = { .empty = 1, .error_msg = error_msg };
return error;
}
}
/* Invert an arbitrary cdf at a point */
// Version #1:
// - input: (cdf: double => double, p)
// - output: Box(number|error)
box inverse_cdf_double(double cdf(double), double p)
{
// given a cdf: [-Inf, Inf] => [0,1]
// returns a box with either
// x such that cdf(x) = p
// or an error
// if EXIT_ON_ERROR is set to 1, it exits instead of providing an error
double low = -1.0;
double high = 1.0;
// 1. Make sure that cdf(low) < p < cdf(high)
int interval_found = 0;
while ((!interval_found) && (low > -DBL_MAX / 4) && (high < DBL_MAX / 4)) {
// for floats, use FLT_MAX instead
// Note that this approach is overkill
// but it's also the *correct* thing to do.
int low_condition = (cdf(low) < p);
int high_condition = (p < cdf(high));
if (low_condition && high_condition) {
interval_found = 1;
} else if (!low_condition) {
low = low * 2;
} else if (!high_condition) {
high = high * 2;
}
}
if (!interval_found) {
return PROCESS_ERROR("Interval containing the target value not found, in function inverse_cdf");
} else {
int convergence_condition = 0;
int count = 0;
while (!convergence_condition && (count < (INT_MAX / 2))) {
double mid = (high + low) / 2;
int mid_not_new = (mid == low) || (mid == high);
// double width = high - low;
// if ((width < 1e-8) || mid_not_new){
if (mid_not_new) {
convergence_condition = 1;
} else {
double mid_sign = cdf(mid) - p;
if (mid_sign < 0) {
low = mid;
} else if (mid_sign > 0) {
high = mid;
} else if (mid_sign == 0) {
low = mid;
high = mid;
}
}
}
if (convergence_condition) {
box result = { .empty = 0, .content = low };
return result;
} else {
return PROCESS_ERROR("Search process did not converge, in function inverse_cdf");
}
}
}
// Version #2:
// - input: (cdf: double => Box(number|error), p)
// - output: Box(number|error)
box inverse_cdf_box(box cdf_box(double), double p)
{
// given a cdf: [-Inf, Inf] => Box([0,1])
// returns a box with either
// x such that cdf(x) = p
// or an error
// if EXIT_ON_ERROR is set to 1, it exits instead of providing an error
double low = -1.0;
double high = 1.0;
// 1. Make sure that cdf(low) < p < cdf(high)
int interval_found = 0;
while ((!interval_found) && (low > -DBL_MAX / 4) && (high < DBL_MAX / 4)) {
// for floats, use FLT_MAX instead
// Note that this approach is overkill
// but it's also the *correct* thing to do.
box cdf_low = cdf_box(low);
if (cdf_low.empty) {
return PROCESS_ERROR(cdf_low.error_msg);
}
box cdf_high = cdf_box(high);
if (cdf_high.empty) {
return PROCESS_ERROR(cdf_low.error_msg);
}
int low_condition = (cdf_low.content < p);
int high_condition = (p < cdf_high.content);
if (low_condition && high_condition) {
interval_found = 1;
} else if (!low_condition) {
low = low * 2;
} else if (!high_condition) {
high = high * 2;
}
}
if (!interval_found) {
return PROCESS_ERROR("Interval containing the target value not found, in function inverse_cdf");
} else {
int convergence_condition = 0;
int count = 0;
while (!convergence_condition && (count < (INT_MAX / 2))) {
double mid = (high + low) / 2;
int mid_not_new = (mid == low) || (mid == high);
// double width = high - low;
if (mid_not_new) {
// if ((width < 1e-8) || mid_not_new){
convergence_condition = 1;
} else {
box cdf_mid = cdf_box(mid);
if (cdf_mid.empty) {
return PROCESS_ERROR(cdf_mid.error_msg);
}
double mid_sign = cdf_mid.content - p;
if (mid_sign < 0) {
low = mid;
} else if (mid_sign > 0) {
high = mid;
} else if (mid_sign == 0) {
low = mid;
high = mid;
}
}
}
if (convergence_condition) {
box result = { .empty = 0, .content = low };
return result;
} else {
return PROCESS_ERROR("Search process did not converge, in function inverse_cdf");
}
}
}
/* Sample from an arbitrary cdf */
// Before: invert an arbitrary cdf at a point
// Now: from an arbitrary cdf, get a sample
box sampler_cdf_box(box cdf(double), uint64_t* seed)
{
double p = sample_unit_uniform(seed);
box result = inverse_cdf_box(cdf, p);
return result;
}
box sampler_cdf_double(double cdf(double), uint64_t* seed)
{
double p = sample_unit_uniform(seed);
box result = inverse_cdf_double(cdf, p);
return result;
}
double sampler_cdf_danger(box cdf(double), uint64_t* seed)
{
double p = sample_unit_uniform(seed);
box result = inverse_cdf_box(cdf, p);
if (result.empty) {
exit(1);
} else {
return result.content;
}
}
/* array print: potentially useful for debugging */
void array_print(double xs[], int n)
{
printf("[");
for (int i = 0; i < n - 1; i++) {
printf("%f, ", xs[i]);
}
printf("%f", xs[n - 1]);
printf("]\n");
}

View File

@@ -4,15 +4,18 @@
 /* Parallel sampling */
 void sampler_parallel(double (*sampler)(uint64_t* seed), double* results, int n_threads, int n_samples);

-/* Get 90% confidence interval */
+/* Stats */
+double array_get_median(double xs[], int n);
 typedef struct ci_t {
     double low;
     double high;
 } ci;
 ci array_get_ci(ci interval, double* xs, int n);
 ci array_get_90_ci(double xs[], int n);
-ci sampler_get_ci(ci interval, double (*sampler)(uint64_t*), int n, uint64_t* seed);
-ci sampler_get_90_ci(double (*sampler)(uint64_t*), int n, uint64_t* seed);
+void array_print_stats(double xs[], int n);
+void array_print_histogram(double* xs, int n_samples, int n_bins);
+void array_print_90_ci_histogram(double* xs, int n, int n_bins);

 /* Algebra manipulations */
@@ -31,24 +34,9 @@ lognormal_params algebra_product_lognormals(lognormal_params a, lognormal_params
 lognormal_params convert_ci_to_lognormal_params(ci x);
 ci convert_lognormal_params_to_ci(lognormal_params y);

-/* Error handling */
-typedef struct box_t {
-    int empty;
-    double content;
-    char* error_msg;
-} box;
-#define MAX_ERROR_LENGTH 500
-#define EXIT_ON_ERROR 0
-#define PROCESS_ERROR(error_msg) process_error(error_msg, EXIT_ON_ERROR, __FILE__, __LINE__)
-box process_error(const char* error_msg, int should_exit, char* file, int line);
+/* Utilities */
 void array_print(double* array, int length);
-
-/* Inverse cdf */
-box inverse_cdf_double(double cdf(double), double p);
-box inverse_cdf_box(box cdf_box(double), double p);
-
-/* Samplers from cdf */
-box sampler_cdf_double(double cdf(double), uint64_t* seed);
-box sampler_cdf_box(box cdf(double), uint64_t* seed);
+#define THOUSAND 1000
+#define MILLION 1000000

 #endif

View File

@@ -1,39 +1,39 @@
 # bc
 time ghbc -l squiggle.bc estimate.bc
-.8907201178102747
+.8872657001481914
-real 0m15.960s
+real 0m16.170s
-user 0m15.948s
+user 0m16.115s
-sys 0m0.000s
+sys 0m0.008s

 # C
 Running 100x and taking avg time: OMP_NUM_THREADS=16 out/samples
-Time using 16 threads: 5.60ms
+Time using 16 threads: 6.20ms

 # js (bun)
-0.8867426270252042
+0.8861715640546732
-real 0m0.551s
+real 0m0.562s
-user 0m0.527s
+user 0m0.540s
-sys 0m0.055s
+sys 0m0.074s

 # js (node)
-0.8878977218582866
+0.8863245179136781
-real 0m0.445s
+real 0m0.423s
-user 0m0.523s
+user 0m0.509s
-sys 0m0.060s
+sys 0m0.077s

 # lua (luajit)
 Requires /bin/time, found on GNU/Linux systems
 Running 100x and taking avg time of: luajit samples.lua
-Time: 69.90ms
+Time: 68.80ms
@@ -41,7 +41,7 @@ Time: 69.90ms
 Requires /bin/time, found on GNU/Linux systems
 Running 100x and taking avg time of:
-Time: 40.80ms
+Time: 41.10ms
@@ -49,48 +49,47 @@ Time: 40.80ms
 Requires /bin/time, found on GNU/Linux systems
 Running 100x and taking avg time of:
-Time: 187.90ms
+Time: 185.50ms

 # Python (3.9)
 0.8887373869178242
-real 0m11.879s
+real 0m11.909s
-user 0m12.129s
+user 0m12.149s
-sys 0m1.055s
+sys 0m1.145s

 # R (3.6.1)
-[1] 0.8899922
+[1] 0.8862725
-real 0m4.508s
+real 0m4.494s
-user 0m4.476s
+user 0m4.465s
-sys 0m0.028s
+sys 0m0.025s

 # Squiggle (0.8.6)
 Requires /bin/time, found on GNU/Linux systems
 Running 100x and taking avg time of:
-Time: 386.80ms
+Time: 384.00ms

 # SquigglePy (0.27)
 time python3.9 samples.py
 (tqdm progress bars omitted)
-0.8879525229675179
+0.8876134007583529
-real 0m1.507s
+real 0m1.542s
-user 0m1.969s
+user 0m1.989s
-sys 0m2.201s
+sys 0m2.226s

 # squiggle.c
 Running 100x and taking avg time: OMP_NUM_THREADS=16 ./samples
-Time using 16 threads: 12.70ms
+Time using 16 threads: 7.20ms