Introduction
This page describes some new pseudorandom number generators (PRNGs) we (David Blackman and I) have been working on recently, and a shootout comparing them with other generators. Details about the generators can be found in our paper. Information about my previous xorshift-based generators can be found here, but they have been entirely superseded by the new ones, which are faster and better. As part of our study, we developed a very strong test for Hamming-weight dependencies that gave a number of surprising results.
64-bit Generators
xoshiro256++/xoshiro256** (XOR/shift/rotate) are our all-purpose generators (not cryptographically secure generators, though, like all PRNGs on these pages). They have excellent (sub-ns) speed, a state space (256 bits) that is large enough for any parallel application, and they pass all tests we are aware of. See the paper for a discussion of their differences.
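As a concrete illustration of the XOR/shift/rotate structure, here is the core next() step of xoshiro256++, following the public-domain reference implementation (the state s must be seeded so that it is not everywhere zero):

```c
#include <stdint.h>

static uint64_t s[4]; /* the 256-bit state; must not be everywhere zero */

static inline uint64_t rotl(const uint64_t x, int k) {
	return (x << k) | (x >> (64 - k));
}

uint64_t next(void) {
	const uint64_t result = rotl(s[0] + s[3], 23) + s[0]; /* the ++ scrambler */
	const uint64_t t = s[1] << 17;

	/* the linear engine */
	s[2] ^= s[0];
	s[3] ^= s[1];
	s[1] ^= s[2];
	s[0] ^= s[3];
	s[2] ^= t;
	s[3] = rotl(s[3], 45);

	return result;
}
```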
If, however, one has to generate only 64-bit floating-point numbers (by extracting the upper 53 bits), xoshiro256+ is a slightly (≈15%) faster generator with analogous statistical properties. For general usage, one has to consider that its lowest bits have low linear complexity and will fail linearity tests; however, low linear complexity of the lowest bits can have hardly any impact in practice, and certainly has no impact at all if you generate floating-point numbers using the upper bits (we computed a precise estimate of the linear complexity of the lowest bits).
If you are tight on space, xoroshiro128++/xoroshiro128** (XOR/rotate/shift/rotate) and xoroshiro128+ have the same speed and use half of the space; the same comments apply. They are suitable only for low-scale parallel applications; moreover, xoroshiro128+ exhibits a mild dependency in Hamming weights that generates a failure after 5 TB of output in our test. We believe this slight bias cannot affect any application.
Finally, if for any reason (which reason?) you need more state, we provide in the same vein xoshiro512++ / xoshiro512** / xoshiro512+ and xoroshiro1024++ / xoroshiro1024** / xoroshiro1024* (see the paper).
All generators, being based on linear recurrences, provide jump functions that make it possible to simulate any number of calls to the next-state function in constant time, once a suitable jump polynomial has been computed. We provide ready-made jump functions for a number of calls equal to the square root of the period, to make it easy to generate non-overlapping sequences for parallel computations, and equal to the cube of the fourth root of the period, to make it possible to generate independent sequences on different parallel processors.
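For instance, a minimal sketch of how the jump functions can be used to hand out non-overlapping subsequences to parallel threads (it assumes the global state s[] and the ready-made jump() function of a reference implementation, which advances the generator by the square root of the period):

```c
#include <stdint.h>
#include <string.h>

extern uint64_t s[4];   /* global state of, e.g., xoshiro256++ */
extern void jump(void); /* ready-made jump function from the reference code */

/* Fill streams[0..n-1] with starting states 2^128 steps apart. */
void make_streams(uint64_t streams[][4], int n) {
	for (int i = 0; i < n; i++) {
		memcpy(streams[i], s, sizeof streams[i]); /* state for stream i */
		jump(); /* advance the global state to the start of the next stream */
	}
}
```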
We suggest using SplitMix64 to initialize the state of our generators starting from a 64-bit seed, as research has shown that initialization must be performed with a generator radically different in nature from the one initialized to avoid correlation on similar seeds.
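For example, a minimal sketch of seeding a 256-bit state from a 64-bit seed (the SplitMix64 step below follows the public-domain reference code):

```c
#include <stdint.h>

/* SplitMix64: used here only to expand the 64-bit seed into the state. */
static uint64_t splitmix64(uint64_t *x) {
	uint64_t z = (*x += 0x9e3779b97f4a7c15);
	z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9;
	z = (z ^ (z >> 27)) * 0x94d049bb133111eb;
	return z ^ (z >> 31);
}

/* Fill the four 64-bit state words of, e.g., xoshiro256++. */
void seed_state(uint64_t seed, uint64_t s[4]) {
	for (int i = 0; i < 4; i++) s[i] = splitmix64(&seed);
}
```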
32-bit Generators
xoshiro128++/xoshiro128** are our 32-bit all-purpose generators, whereas xoshiro128+ is for floating-point generation. They are the 32-bit counterparts of xoshiro256++, xoshiro256** and xoshiro256+, so similar comments apply.
Their state is too small for large-scale parallelism: their intended usage is inside embedded hardware or GPUs. For an even smaller scale, you can use xoroshiro64** and xoroshiro64*. We do not believe that, at this point in time, a 32-bit generator with a larger state can be of any use (but there are 32-bit xoroshiro generators of much larger size).
All 32-bit generators pass all tests we are aware of, with the exception of linearity tests (binary rank and linear complexity) for xoshiro128+ and xoroshiro64*: in this case, due to the smaller number of output bits, the low linear complexity of the lowest bits is sufficient to trigger BigCrush tests when the output is bit-reversed. Analogously to the 64-bit case, generating 32-bit floating-point numbers using the upper bits will not use any of the bits with low linear complexity.
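For instance, a sketch of the standard conversion (analogous to the 64-bit one shown later), which keeps only the 24 upper bits of a uniform 32-bit x, exactly filling a float's significand:

```c
/* Upper 24 bits of a uniform 32-bit x, scaled to [0..1). */
(x >> 8) * 0x1.0p-24f
```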
16-bit Generators
We do not suggest any particular 16-bit generator, but it is possible to design relatively good ones using our techniques. For example, Parallax has embedded multiple 16-bit xoroshiro32++ generators in their Propeller 2 microcontroller.
Congruential Generators
In case you are interested in 64-bit PRNGs based on congruential arithmetic, I provide three instances of Marsaglia's Multiply-With-Carry generators, MWC128, MWC192, and MWC256, for which I computed good constants. They are some of the fastest generators available, but they need 128-bit operations. Stronger theoretical guarantees are provided by the generalized multiply-with-carry generators defined by Goresky and Klapper: also in this case I provide two instances, GMWC128 and GMWC256, for which I computed good constants. These generators, however, are about twice as slow as MWC generators.
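To illustrate the structure only (the multiplier below is a placeholder, not one of the constants I computed), a 128-bit multiply-with-carry step keeps a 64-bit value and a 64-bit carry, and updates both with a single 128-bit product:

```c
#include <stdint.h>

#define MWC_A UINT64_C(0xfff0000000000000) /* illustrative placeholder, not a good constant */

static uint64_t x, c; /* state: current value and carry */

uint64_t next(void) {
	const __uint128_t t = (__uint128_t)MWC_A * x + c; /* the 128-bit operation */
	x = (uint64_t)t;         /* low 64 bits: new value (and output) */
	c = (uint64_t)(t >> 64); /* high 64 bits: new carry */
	return x;
}
```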
JavaScript
xorshift128+ is presently used in the JavaScript engines of Chrome, Node.js, Firefox, Safari and Microsoft Edge.
Rust
The SmallRng from the rand crate is xoshiro256++ or xoshiro128++, depending on the platform.
java.util.random
I worked with Guy Steele on the new family of PRNGs available in Java 17. The family, called LXM, uses new, better tables of multipliers for LCGs with power-of-two moduli. Moreover, java.util.random contains ready-to-use implementations of xoroshiro128++ and xoshiro256++.
.NET
In version 6, Microsoft's .NET framework has adopted xoshiro256** and xoshiro128** as default PRNGs.
Erlang
The parallel functional language Erlang implements several variants of xorshift/xoroshiro-based generators, adapted in collaboration with Raimo Niskanen for Erlang's 58/59-bit arithmetic.
GNU FORTRAN
GNU's implementation of the FORTRAN language uses xoshiro256** as default PRNG.
Julia
The Julia programming language uses xoshiro256++ as default PRNG.
Lua
The scripting language Lua uses xoshiro256** as default PRNG.
IoT
The IoT operating systems Mbed and Zephyr use xoroshiro128+ as default PRNG.
A PRNG Shootout
I provide here a shootout of a few recent 64-bit PRNGs that are quite widely used. The purpose is that of providing a consistent, reproducible assessment of two properties of the generators: speed and quality. The code used to perform the tests and all the output from statistical test suites is available for download.
Speed
The speed reported in this page is the time required to emit 64 random bits, and the number of clock cycles required to generate a byte (thanks to the PAPI library). If a generator is 32-bit in nature, I glue two consecutive outputs. Note that I do not report results using GPUs or SSE instructions, with an exception for the very common SFMT: for that to be meaningful, I should have implementations for all generators. Otherwise, with suitable hardware support I could just use AES in counter mode and get 64 secure bits in 0.56 ns (or just use Randen). The tests were performed on a 12th Gen Intel® Core™ i7-12700KF @3.60GHz using gcc 12.2.1.
A few caveats:
- There is some looping overhead, but subtracting it from the timings is not going to be particularly meaningful due to instruction rescheduling, etc.
- Relative speed might be different on different CPUs and in different scenarios.
- I do not use -march=native, which can improve the timing of some generators by vectorization or special instructions, because those improvements might not be possible when the generator is embedded in user code.
- Code has been compiled using gcc's -fno-unroll-loops option. This option is essential to get a sensible result: without it, the compiler may perform different loop unrolling depending on the generator. Previously I was also using -fno-move-loop-invariants, which was essential in not giving generators using several large constants an advantage by preventing the compiler from loading them into registers. However, as of gcc 12.2.1 the compiler loads the constants into registers anyway, so the option is no longer used. Timings with clang at the time of this writing are very close to those obtained with gcc. If you find timings that are significantly better than those shown here on comparable hardware, they are likely to be due to compiler artifacts (e.g., vectorization).
- Timings are taken running a generator billions of times in a loop; but this is not the way you use generators. Register allocation might be very different when the generator is embedded in an application, leading to constants being reloaded or part of the state space being written to main memory at each iteration. These costs do not appear in the benchmarks below.
To ease replicability, I distribute a harness performing the measurement. You just have to define a next() function and include the harness. But the only realistic suggestion is to try different generators in your application and see what happens.
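In outline, a benchmark unit looks like the following sketch (here next() is SplitMix64, just as an example; the harness file name is illustrative):

```c
#include <stdint.h>

static uint64_t x; /* state of the generator under test */

uint64_t next(void) { /* the function the harness will time */
	uint64_t z = (x += 0x9e3779b97f4a7c15);
	z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9;
	z = (z ^ (z >> 27)) * 0x94d049bb133111eb;
	return z ^ (z >> 31);
}

/* #include "harness.c" -- illustrative name; the measurement harness goes here */
```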
Quality
This is probably the most elusive property of a PRNG. Here quality is measured using the powerful BigCrush suite of tests. BigCrush is part of TestU01, a monumental framework for testing PRNGs developed by Pierre L'Ecuyer and Richard Simard (“TestU01: A C library for empirical testing of random number generators”, ACM Trans. Math. Softw. 33(4), Article 22, 2007).
I run BigCrush starting from 100 equispaced points of the state space of the generator and collect failures—tests in which the p-value statistic is outside the interval [0.001..0.999]. A failure is systematic if it happens at all points.
Note that TestU01 is a 32-bit test suite. Thus, two 32-bit integer values are passed to the test suite for each generated 64-bit value. Floating-point numbers are generated instead by dividing the unsigned output of the generator by \(2^{64}\). Since this implies a bias towards the high bits (which is anyway a known characteristic of TestU01), I run the test suite also on the reverse generator. More details about the whole process can be found in this paper.
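In other words, the interface between a 64-bit generator and the 32-bit suite can be sketched as follows (the helper names are illustrative, not part of TestU01's API):

```c
#include <stdint.h>

/* Each 64-bit output yields two consecutive 32-bit test values... */
static uint32_t hi_half(uint64_t v) { return (uint32_t)(v >> 32); }
static uint32_t lo_half(uint64_t v) { return (uint32_t)v; }

/* ...and doubles are obtained by dividing the whole 64-bit output by 2^64. */
static double to_unit(uint64_t v) { return v * 0x1.0p-64; }
```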
Besides BigCrush, I analyzed generators using a test for Hamming-weight dependencies described in our paper. As I already remarked, our only generator failing the test (but only after 5 TB of output) is xoroshiro128+.
I report the period of each generator and its footprint in bits: a generator gives “bang-for-the-buck” if the base-2 logarithm of the period is close to the footprint. Note that the footprint has always been padded to a multiple of 64, and it can be significantly larger than expected because of padding and cyclic access indices.
PRNG | Footprint (bits) | Period | BigCrush Systematic Failures | HWD failure | ns/64 bits | cycles/B |
---|---|---|---|---|---|---|
xoroshiro128+ | 128 | 2^128 − 1 | — | 5 TB | 0.80 | 0.36 |
xoroshiro128++ | 128 | 2^128 − 1 | — | — | 0.90 | 0.40 |
xoroshiro128** | 128 | 2^128 − 1 | — | — | 0.78 | 0.36 |
xoshiro256+ | 256 | 2^256 − 1 | — | — | 0.61 | 0.27 |
xoshiro256++ | 256 | 2^256 − 1 | — | — | 0.75 | 0.34 |
xoshiro256** | 256 | 2^256 − 1 | — | — | 0.75 | 0.34 |
xoshiro512+ | 512 | 2^512 − 1 | — | — | 0.68 | 0.30 |
xoshiro512++ | 512 | 2^512 − 1 | — | — | 0.79 | 0.36 |
xoshiro512** | 512 | 2^512 − 1 | — | — | 0.81 | 0.37 |
xoroshiro1024* | 1088 | 2^1024 − 1 | — | — | 0.82 | 0.37 |
xoroshiro1024++ | 1088 | 2^1024 − 1 | — | — | 1.01 | 0.46 |
xoroshiro1024** | 1088 | 2^1024 − 1 | — | — | 0.98 | 0.44 |
MWC128 | 128 | ≈2^127 | — | — | 0.83 | 0.37 |
MWC192 | 192 | ≈2^191 | — | — | 0.42 | 0.19 |
MWC256 | 256 | ≈2^255 | — | — | 0.45 | 0.20 |
GMWC128 | 128 | ≈2^127 | — | — | 1.84 | 0.83 |
GMWC256 | 256 | ≈2^255 | — | — | 1.85 | 0.83 |
SFC64 | 256 | ≥2^64 | — | — | 0.66 | 0.30 |
SplitMix64 | 64 | 2^64 | — | — | 0.63 | 0.29 |
PCG 128 XSH RS 64 (LCG) | 128 | 2^128 | — | — | 1.70 | 0.77 |
PCG64-DXSM (NumPy) | 128 | 2^128 | — | — | 1.11 | 0.50 |
Ran | 192 | ≈2^191 | — | — | 1.37 | 0.62 |
MT19937-64 (Mersenne Twister) | 20032 | 2^19937 − 1 | LinearComp | — | 1.36 | 0.62 |
SFMT19937 (uses SSE2 instructions) | 20032 | 2^19937 − 1 | LinearComp | — | 0.93 | 0.42 |
SFMT607 (uses SSE2 instructions) | 672 | 2^607 − 1 | MatrixRank, LinearComp | 400 MB | 0.78 | 0.34 |
Tiny Mersenne Twister (64 bits) | 256 | 2^127 − 1 | — | 90 TB→ | 2.76 | 1.25 |
Tiny Mersenne Twister (32 bits) | 224 | 2^127 − 1 | CollisionOver, Run, SimPoker, AppearanceSpacings, MatrixRank, LinearComp, LongestHeadRun, Run of Bits (reversed) | 40 TB→ | 4.27 | 1.92 |
WELL512a | 544 | 2^512 − 1 | MatrixRank, LinearComp | 3.5 PB | 5.42 | 2.44 |
WELL1024a | 1056 | 2^1024 − 1 | MatrixRank, LinearComp | — | 5.30 | 2.38 |
The following table compares instead two ways of generating floating-point numbers, namely the 521-bit dSFMT, which directly generates floating-point numbers with 52 significant bits, and xoshiro256+ followed by a standard conversion of its upper bits to a floating-point number with 53 significant bits (see below).
PRNG | Footprint (bits) | Period | BigCrush Systematic Failures | HWD failure | ns/double | cycles/double |
---|---|---|---|---|---|---|
xoshiro256+ (returns 53 significant bits) | 256 | 2^256 − 1 | — | — | 0.92 | 3.40 |
dSFMT (uses SSE2 instructions, returns only 52 significant bits) | 704 | 2^521 − 1 | MatrixRank, LinearComp | 6 TB | 0.85 | 3.07 |
xoshiro256+ is ≈8% slower than the dSFMT, but it has a doubled range of output values, does not need any extra SSE instruction (can be programmed in Java, etc.), has a much smaller footprint, and its upper bits do not fail any test.
Remarks
Vectorization
Some of the generators can be very easily vectorized, so that multiple instances can be run in parallel to provide fast bulk generation. Thanks to an interesting discussion with the Julia developers, I've become aware that AVX2 vectorizations of multiple instances of generators using the +/++ scramblers are impressively fast (the links below point at a speed test to be used with the harness, and the result will be multiplied by 1000):
PRNG | ns/64 bits | cycles/B |
---|---|---|
xoroshiro128+ (4 parallel instances) | 0.36 | 0.14 |
xoroshiro128++ (4 parallel instances) | 0.45 | 0.18 |
xoshiro256+ (8 parallel instances) | 0.19 | 0.08 |
xoshiro256++ (8 parallel instances) | 0.26 | 0.09 |
Note that sometimes convincing the compiler to vectorize is a slightly quirky process: for example, on gcc 12.2.1 I have to use -O3 -fdisable-tree-cunrolli -march=native to vectorize xoshiro256-based generators (-O3 alone will not vectorize; thanks to Chris Elrod for pointing me at -fdisable-tree-cunrolli).
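As a sketch of the idea (not the code linked above), interleaving several xoshiro256+ instances in plain arrays makes the update loop trivially vectorizable by the compiler:

```c
#include <stdint.h>

#define LANES 8 /* eight parallel instances, as in the table above */

/* One array per state word, so the loop below maps to vector instructions. */
static uint64_t s0[LANES], s1[LANES], s2[LANES], s3[LANES];

/* One xoshiro256+ step per lane; out[] receives eight 64-bit values. */
void next_bulk(uint64_t out[LANES]) {
	for (int i = 0; i < LANES; i++) {
		out[i] = s0[i] + s3[i]; /* the + scrambler */
		const uint64_t t = s1[i] << 17;
		s2[i] ^= s0[i];
		s3[i] ^= s1[i];
		s1[i] ^= s2[i];
		s0[i] ^= s3[i];
		s2[i] ^= t;
		s3[i] = (s3[i] << 45) | (s3[i] >> 19); /* rotl(s3, 45) */
	}
}
```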
A long period does not imply high quality
This is a common misconception. The generator x++ has period \(2^k\), for any \(k\geq0\), provided that x is represented using \(k\) bits: nonetheless, it is a horrible generator. The generator returning \(k-1\) zeroes followed by a one has period \(k\).
It is however important that the period is long enough. A first heuristic rule of thumb is that if you need to use \(t\) values, you need a generator with period at least \(t^2\). Moreover, if you run \(n\) independent computations starting at random seeds, the sequences used by each computation should not overlap.
Now, given a generator with period \(P\), the probability that \(n\) subsequences of length \(L\) starting at random points in the state space overlap is bounded by \(n^2L/P\). If your generator has period \(2^{256}\) and you run on \(2^{64}\) cores (you will never have them) a computation using \(2^{64}\) pseudorandom numbers (you will never have the time) the probability of overlap would be less than \(2^{-64}\).
In other words: any generator with a period beyond \(2^{256}\) has a period that is sufficient for every imaginable application. Unless there are other motivations (e.g., provably increased quality), a generator with a larger period is only a waste of memory (as it needs a larger state), of cache lines, and of precious high-entropy random bits for seeding (unless you're using small seeds, but then it's not clear why you would want a very long period in the first place—the computation above is valid only if you seed all bits of the state with independent, uniformly distributed random bits).
In case the generator provides a jump function that lets you skip through chunks of the output in constant time, even a period of \(2^{128}\) can be sufficient, as it provides \(2^{64}\) non-overlapping sequences of length \(2^{64}\).
Equidistribution
Every 64-bit generator of ours with n bits of state scrambled with * or ** is n/64-dimensionally equidistributed: every n/64-tuple of consecutive 64-bit values appears exactly once in the output, except for the zero tuple (and this is the largest possible dimension). Generators based on the + or ++ scramblers are however only (n/64 − 1)-dimensionally equidistributed: every (n/64 − 1)-tuple of consecutive 64-bit values appears exactly \(2^{64}\) times in the output, except for a missing zero tuple. The same considerations apply to 32-bit generators.
Generating uniform doubles in the unit interval
A standard double (64-bit) floating-point number in IEEE floating point format has 52 bits of significand, plus an implicit bit at the left of the significand. Thus, the representation can actually store numbers with 53 significant binary digits.
Because of this fact, in C99 a 64-bit unsigned integer x should be converted to a 64-bit double using the expression

```c
#include <stdint.h>

(x >> 11) * 0x1.0p-53
```
In Java you can use almost the same expression for a (signed) 64-bit integer:

```java
(x >>> 11) * 0x1.0p-53
```

This conversion guarantees that all dyadic rationals of the form k / 2^53 will be equally likely. Note that this conversion prefers the high bits of x (usually, a good idea), but you can alternatively use the lowest bits.
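A sketch of the variant using the lowest bits instead (assuming, as above, a uniform 64-bit x):

```c
/* Lowest 53 bits of x, scaled to [0..1). */
(x & ((UINT64_C(1) << 53) - 1)) * 0x1.0p-53
```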
An alternative, multiplication-free conversion is

```c
#include <stdint.h>

static inline double to_double(uint64_t x) {
	const union { uint64_t i; double d; } u = {
		.i = UINT64_C(0x3FF) << 52 | x >> 12
	};
	return u.d - 1.0;
}
```
The code above cooks up by bit manipulation a real number in the interval [1..2), and then subtracts one to obtain a real number in the interval [0..1). If x is chosen uniformly among 64-bit integers, d is chosen uniformly among dyadic rationals of the form k / 2^52. This is the same technique used by generators providing directly doubles, such as the dSFMT.
This technique is supposed to be fast, but on recent hardware it does not seem to give a significant advantage. More importantly, you will be generating half the values you could actually generate. The same problem plagues the dSFMT. All doubles generated will have the lowest significand bit set to zero (I must thank Raimo Niskanen from the Erlang team for making me notice this—a previous version of this site did not mention this issue).
In Java you can obtain an analogous result using suitable static methods:

```java
Double.longBitsToDouble(0x3FFL << 52 | x >>> 12) - 1.0
```
To adhere to the principle of least surprise, my implementations now use the multiplicative version everywhere.
Interestingly, these are not the only notions of “uniformity” you can come up with. Another possibility is that of generating a 1074-bit integer, normalizing it, and returning the nearest value representable as a 64-bit double (this is the theory—in practice, you will almost never use more than two integers per double, as the remaining bits would not be representable). This approach guarantees that all representable doubles could in principle be generated, although not every returned double will appear with the same probability. A reference implementation can be found here. Note that unless your generator has at least 1074 bits of state and suitable equidistribution properties, the code above will not do what you expect (e.g., it might never return zero).