dvariancetk

Calculate the variance of a double-precision floating-point strided array using a one-pass textbook algorithm.

The population variance of a finite population of size N is given by

\sigma^2 = \frac{1}{N} \sum_{i=1}^{N} ( x_i - \mu )^2

where the population mean is given by

\mu = \frac{1}{N} \sum_{i=1}^{N} x_i

After rearranging terms, the population variance can be equivalently expressed as

\sigma^2 = \frac{1}{N} \biggl( \sum_{i=1}^{N} x_i^2 - \frac{1}{N} \biggl( \sum_{i=1}^{N} x_i \biggr)^2 \biggr)

Often in the analysis of data, the true population variance is not known a priori and must be estimated from a sample drawn from the population distribution. Applying the population variance formula directly to the sample yields a biased estimate. To compute an unbiased sample variance for a sample of size n,

s^2 = \frac{1}{n-1} \sum_{i=1}^{n} ( x_i - \bar{x} )^2

where the sample mean is given by

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i

Similar to the population variance, after rearranging terms, the unbiased sample variance can be equivalently expressed as

s^2 = \frac{1}{n-1} \biggl( \sum_{i=1}^{n} x_i^2 - \frac{1}{n} \biggl( \sum_{i=1}^{n} x_i \biggr)^2 \biggr)

The use of the term n-1 is commonly referred to as Bessel's correction. Note, however, that applying Bessel's correction can increase the mean squared error between the sample variance and population variance. Depending on the characteristics of the population distribution, other correction factors (e.g., n-1.5, n+1, etc.) can yield better estimators.
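
To make the preceding formulas concrete, the following is a minimal sketch of a one-pass textbook computation in plain JavaScript. It is illustrative only; the package's implementation differs in details such as strided access and typed array handling.

function varianceTextbook( x, correction ) {
    // Accumulate the sum and the sum of squares in a single pass:
    var sum = 0.0;
    var sos = 0.0;
    var n = x.length;
    var i;
    if ( n - correction <= 0 ) {
        return NaN;
    }
    for ( i = 0; i < n; i++ ) {
        sum += x[ i ];
        sos += x[ i ] * x[ i ];
    }
    // One-pass textbook formula: ( Σ x² - (Σ x)²/n ) / ( n - correction ):
    return ( sos - ( (sum*sum)/n ) ) / ( n - correction );
}

var v = varianceTextbook( [ 1.0, -2.0, 2.0 ], 1 );
// returns ~4.3333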

Usage

var dvariancetk = require( '@stdlib/stats/base/dvariancetk' );

dvariancetk( N, correction, x, stride )

Computes the variance of a double-precision floating-point strided array x using a one-pass textbook algorithm.

var Float64Array = require( '@stdlib/array/float64' );

var x = new Float64Array( [ 1.0, -2.0, 2.0 ] );
var N = x.length;

var v = dvariancetk( N, 1, x, 1 );
// returns ~4.3333

The function has the following parameters:

  • N: number of indexed elements.
  • correction: degrees of freedom adjustment. Setting this parameter to a value other than 0 has the effect of adjusting the divisor during the calculation of the variance according to N-c where c corresponds to the provided degrees of freedom adjustment. When computing the variance of a population, setting this parameter to 0 is the standard choice (i.e., the provided array contains data constituting an entire population). When computing the unbiased sample variance, setting this parameter to 1 is the standard choice (i.e., the provided array contains data sampled from a larger population; this is commonly referred to as Bessel's correction). See the example following this list.
  • x: input Float64Array.
  • stride: index increment for x.
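
To make the effect of the correction parameter concrete, the following computes both the population variance (correction = 0) and the unbiased sample variance (correction = 1) for the same data; the commented return values follow from the formulas above.

var Float64Array = require( '@stdlib/array/float64' );

var x = new Float64Array( [ 1.0, -2.0, 2.0 ] );

// Population variance (correction = 0):
var v = dvariancetk( x.length, 0, x, 1 );
// returns ~2.8889

// Unbiased sample variance (correction = 1; Bessel's correction):
v = dvariancetk( x.length, 1, x, 1 );
// returns ~4.3333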

The N and stride parameters determine which elements in x are accessed at runtime. For example, to compute the variance of every other element in x,

var Float64Array = require( '@stdlib/array/float64' );
var floor = require( '@stdlib/math/base/special/floor' );

var x = new Float64Array( [ 1.0, 2.0, 2.0, -7.0, -2.0, 3.0, 4.0, 2.0 ] );
var N = floor( x.length / 2 );

var v = dvariancetk( N, 1, x, 2 );
// returns 6.25

Note that indexing is relative to the first index. To introduce an offset, use typed array views.

var Float64Array = require( '@stdlib/array/float64' );
var floor = require( '@stdlib/math/base/special/floor' );

var x0 = new Float64Array( [ 2.0, 1.0, 2.0, -2.0, -2.0, 2.0, 3.0, 4.0 ] );
var x1 = new Float64Array( x0.buffer, x0.BYTES_PER_ELEMENT*1 ); // start at 2nd element

var N = floor( x0.length / 2 );

var v = dvariancetk( N, 1, x1, 2 );
// returns 6.25

dvariancetk.ndarray( N, correction, x, stride, offset )

Computes the variance of a double-precision floating-point strided array using a one-pass textbook algorithm and alternative indexing semantics.

var Float64Array = require( '@stdlib/array/float64' );

var x = new Float64Array( [ 1.0, -2.0, 2.0 ] );
var N = x.length;

var v = dvariancetk.ndarray( N, 1, x, 1, 0 );
// returns ~4.3333

The function has the following additional parameters:

  • offset: starting index for x.

While typed array views mandate a view offset based on the underlying buffer, the offset parameter supports indexing semantics based on a starting index. For example, to calculate the variance for every other value in x starting from the second value

var Float64Array = require( '@stdlib/array/float64' );
var floor = require( '@stdlib/math/base/special/floor' );

var x = new Float64Array( [ 2.0, 1.0, 2.0, -2.0, -2.0, 2.0, 3.0, 4.0 ] );
var N = floor( x.length / 2 );

var v = dvariancetk.ndarray( N, 1, x, 2, 1 );
// returns 6.25

Notes

  • If N <= 0, both functions return NaN.
  • If N - c is less than or equal to 0 (where c corresponds to the provided degrees of freedom adjustment), both functions return NaN (see the example following these notes).
  • Some caution should be exercised when using the one-pass textbook algorithm. Literature overwhelmingly discourages the algorithm's use for two reasons: 1) the lack of safeguards against underflow and overflow and 2) the risk of catastrophic cancellation when subtracting the two sums if the sums are large and the variance small. These concerns have merit; however, the one-pass textbook algorithm should not be dismissed outright. For data distributions with a moderately large standard deviation to mean ratio (i.e., coefficient of variation), the one-pass textbook algorithm may be acceptable, especially when performance is paramount and some precision loss is acceptable (including a risk of returning a negative variance due to floating-point rounding errors!). In short, no single "best" algorithm for computing the variance exists. The "best" algorithm depends on the underlying data distribution, your performance requirements, and your minimum precision requirements. When evaluating which algorithm to use, consider the relative pros and cons, and choose the algorithm which best serves your needs.
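
Both NaN cases above can be illustrated with the documented interface. For example, using the same data as earlier:

var Float64Array = require( '@stdlib/array/float64' );
var dvariancetk = require( '@stdlib/stats/base/dvariancetk' );

var x = new Float64Array( [ 1.0, -2.0, 2.0 ] );

// N <= 0:
var v = dvariancetk( 0, 1, x, 1 );
// returns NaN

// N - c <= 0 (here, N = 1 and c = 1):
v = dvariancetk( 1, 1, x, 1 );
// returns NaN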

Examples

var randu = require( '@stdlib/random/base/randu' );
var round = require( '@stdlib/math/base/special/round' );
var Float64Array = require( '@stdlib/array/float64' );
var dvariancetk = require( '@stdlib/stats/base/dvariancetk' );

var x;
var i;

x = new Float64Array( 10 );
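// Fill the array with rounded pseudorandom values: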
for ( i = 0; i < x.length; i++ ) {
    x[ i ] = round( (randu()*100.0) - 50.0 );
}
console.log( x );

var v = dvariancetk( x.length, 1, x, 1 );
console.log( v );

References

  • Ling, Robert F. 1974. "Comparison of Several Algorithms for Computing Sample Means and Variances." Journal of the American Statistical Association 69 (348). American Statistical Association, Taylor & Francis, Ltd.: 859–66. doi:10.2307/2286154.