# UTF-16 to UTF-8
> Convert a [UTF-16][utf-16] encoded string to an array of integers using [UTF-8][utf-8] encoding.
## Usage
```javascript
var utf16ToUTF8Array = require( '@stdlib/string/utf16-to-utf8-array' );
```
#### utf16ToUTF8Array( str )
Converts a [UTF-16][utf-16] encoded string to an `array` of integers using [UTF-8][utf-8] encoding.
```javascript
var out = utf16ToUTF8Array( '☃' );
// returns [ 226, 152, 131 ]
```
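A code point outside the Basic Multilingual Plane, such as `'𐐷'` (`U+10437`), is stored as a surrogate pair in UTF-16; following the UTF-8 encoding rules described in the notes below, it encodes to four bytes.
```javascript
var out = utf16ToUTF8Array( '𐐷' );
// returns [ 240, 144, 144, 183 ]
```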
## Notes
- [UTF-16][utf-16] encoding uses one 16-bit unit for non-surrogates (`U+0000` to `U+D7FF` and `U+E000` to `U+FFFF`).
- [UTF-16][utf-16] encoding uses two 16-bit units (a surrogate pair) for code points in the range `U+10000` to `U+10FFFF`. Such a code point is encoded by subtracting `0x10000`, expressing the result as a 20-bit binary number, and splitting that number into upper and lower 10-bit halves. The two halves are stored in separate 16-bit units: a **high** surrogate (offset by `0xD800`) and a **low** surrogate (offset by `0xDC00`).
- [UTF-8][utf-8] is defined to encode code points in one to four bytes, depending on the number of significant bits in the numerical value of the code point. Encoding uses the following byte sequences:
```text
0x00000000 - 0x0000007F:
    0xxxxxxx
0x00000080 - 0x000007FF:
    110xxxxx 10xxxxxx
0x00000800 - 0x0000FFFF:
    1110xxxx 10xxxxxx 10xxxxxx
0x00010000 - 0x001FFFFF:
    11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
```
where an `x` represents a code point bit. Only the shortest possible multi-byte sequence that can represent a code point is used. A sketch combining the surrogate-pair arithmetic and these byte sequences follows this list.
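Taken together, the notes above describe the full conversion: surrogate pairs are first combined back into code points, and each code point is then emitted as the shortest matching UTF-8 byte sequence. The following is a minimal sketch of that logic, not the package's implementation; the function name `toUTF8Bytes` is illustrative, and unpaired surrogates are simply encoded as three-byte sequences.
```javascript
function toUTF8Bytes( str ) {
    var out = [];
    var cp;
    var lo;
    var i;
    for ( i = 0; i < str.length; i++ ) {
        cp = str.charCodeAt( i );

        // If this is a high surrogate followed by a low surrogate, combine the pair into a single code point:
        if ( cp >= 0xD800 && cp <= 0xDBFF && i+1 < str.length ) {
            lo = str.charCodeAt( i+1 );
            if ( lo >= 0xDC00 && lo <= 0xDFFF ) {
                cp = ( ( cp-0xD800 ) << 10 ) + ( lo-0xDC00 ) + 0x10000;
                i += 1;
            }
        }
        // Emit the shortest byte sequence which can represent the code point:
        if ( cp < 0x80 ) {
            out.push( cp );                             // 0xxxxxxx
        } else if ( cp < 0x800 ) {
            out.push( 0xC0 | ( cp >> 6 ) );             // 110xxxxx
            out.push( 0x80 | ( cp & 0x3F ) );           // 10xxxxxx
        } else if ( cp < 0x10000 ) {
            out.push( 0xE0 | ( cp >> 12 ) );            // 1110xxxx
            out.push( 0x80 | ( ( cp >> 6 ) & 0x3F ) );  // 10xxxxxx
            out.push( 0x80 | ( cp & 0x3F ) );           // 10xxxxxx
        } else {
            out.push( 0xF0 | ( cp >> 18 ) );            // 11110xxx
            out.push( 0x80 | ( ( cp >> 12 ) & 0x3F ) ); // 10xxxxxx
            out.push( 0x80 | ( ( cp >> 6 ) & 0x3F ) );  // 10xxxxxx
            out.push( 0x80 | ( cp & 0x3F ) );           // 10xxxxxx
        }
    }
    return out;
}

var bytes = toUTF8Bytes( '𐐷' );
// returns [ 240, 144, 144, 183 ]
```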
## Examples
```javascript
var utf16ToUTF8Array = require( '@stdlib/string/utf16-to-utf8-array' );
var values;
var out;
var i;
values = [
    'Ladies + Gentlemen',
    'An encoded string!',
    'Dogs, Cats & Mice',
    '☃',
    'æ',
    '𐐷'
];

for ( i = 0; i < values.length; i++ ) {
    out = utf16ToUTF8Array( values[ i ] );
    console.log( '%s: %s', values[ i ], out.join( ',' ) );
}
```
[utf-8]: https://en.wikipedia.org/wiki/UTF-8
[utf-16]: https://en.wikipedia.org/wiki/UTF-16