# UTF-16 to UTF-8
Convert a UTF-16 encoded string to an array of integers using UTF-8 encoding.
## Usage
```javascript
var utf16ToUTF8Array = require( '@stdlib/string/utf16-to-utf8-array' );
```

#### utf16ToUTF8Array( str )

Converts a UTF-16 encoded string to an array of integers using UTF-8 encoding.

```javascript
var out = utf16ToUTF8Array( '☃' );
// returns [ 226, 152, 131 ]
```
## Notes
-   UTF-16 encoding uses one 16-bit unit for non-surrogates (`U+0000` to `U+D7FF` and `U+E000` to `U+FFFF`).

-   UTF-16 encoding uses two 16-bit units (surrogate pairs) for `U+10000` to `U+10FFFF` and encodes `U+10000`-`U+10FFFF` by subtracting `0x10000` from the code point, expressing the result as a 20-bit binary, and splitting the 20 bits of `0x0`-`0xFFFFF` as upper and lower 10-bits. The respective 10-bits are stored in two 16-bit words: a high and a low surrogate.

-   UTF-8 is defined to encode code points in one to four bytes, depending on the number of significant bits in the numerical value of the code point. Encoding uses the following byte sequences:

    ```text
    0x00000000 - 0x0000007F: 0xxxxxxx
    0x00000080 - 0x000007FF: 110xxxxx 10xxxxxx
    0x00000800 - 0x0000FFFF: 1110xxxx 10xxxxxx 10xxxxxx
    0x00010000 - 0x001FFFFF: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
    ```

    where an `x` represents a code point bit. Only the shortest possible multi-byte sequence which can represent a code point is used.
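As a worked example of the surrogate-pair arithmetic described above, the following sketch derives the high and low surrogates for `U+10437` ('𐐷'); the code point is chosen purely for illustration:

```javascript
// Worked example of the surrogate-pair math for U+10437 ('𐐷'):
var cp = 0x10437;
var v = cp - 0x10000;            // 0x00437 (20-bit value)
var hi = 0xD800 | ( v >> 10 );   // upper 10 bits => high surrogate: 0xD801
var lo = 0xDC00 | ( v & 0x3FF ); // lower 10 bits => low surrogate: 0xDC37
console.log( hi.toString( 16 ), lo.toString( 16 ) );
// => 'd801 dc37'
```

These match the two 16-bit code units JavaScript itself stores for the character: `'𐐷'.charCodeAt( 0 )` is `0xD801` and `'𐐷'.charCodeAt( 1 )` is `0xDC37`.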
## Examples
```javascript
var utf16ToUTF8Array = require( '@stdlib/string/utf16-to-utf8-array' );

var values;
var out;
var i;

values = [
    'Ladies + Gentlemen',
    'An encoded string!',
    'Dogs, Cats & Mice',
    '☃',
    'æ',
    '𐐷'
];
for ( i = 0; i < values.length; i++ ) {
    out = utf16ToUTF8Array( values[ i ] );
    console.log( '%s: %s', values[ i ], out.join( ',' ) );
}
```
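To make the byte-sequence rules from the notes concrete, here is a minimal, self-contained sketch of the UTF-16 to UTF-8 conversion logic. It is an illustration only, not the package's implementation; the helper name `toUTF8Array` is hypothetical:

```javascript
// Minimal sketch of UTF-16 to UTF-8 conversion (illustrative, not the library's code):
function toUTF8Array( str ) {
    var out = [];
    var cp;
    var i;
    for ( i = 0; i < str.length; i++ ) {
        cp = str.codePointAt( i );
        if ( cp > 0xFFFF ) {
            i += 1; // code point was a surrogate pair; skip the low surrogate
        }
        if ( cp < 0x80 ) {
            // 1 byte: 0xxxxxxx
            out.push( cp );
        } else if ( cp < 0x800 ) {
            // 2 bytes: 110xxxxx 10xxxxxx
            out.push( 0xC0 | ( cp >> 6 ), 0x80 | ( cp & 0x3F ) );
        } else if ( cp < 0x10000 ) {
            // 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
            out.push( 0xE0 | ( cp >> 12 ), 0x80 | ( ( cp >> 6 ) & 0x3F ), 0x80 | ( cp & 0x3F ) );
        } else {
            // 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
            out.push( 0xF0 | ( cp >> 18 ), 0x80 | ( ( cp >> 12 ) & 0x3F ), 0x80 | ( ( cp >> 6 ) & 0x3F ), 0x80 | ( cp & 0x3F ) );
        }
    }
    return out;
}

console.log( toUTF8Array( '☃' ) );
// => [ 226, 152, 131 ]

console.log( toUTF8Array( '𐐷' ) );
// => [ 240, 144, 144, 183 ]
```

Note the branch boundaries mirror the ranges in the byte-sequence table above, so each code point is always emitted in its shortest valid form.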