[dev-server] JS files containing UTF-8 wide characters are unsupported
The getResponseBody function currently converts a transformed module's response body from a stream/buffer to a string. To decide whether that conversion is safe, it uses the isbinaryfile module, which inspects the buffer contents to guess whether the buffer holds a binary file.
This works in most cases, but when a file contains multi-byte UTF-8 characters and has no UTF-8 Byte Order Mark (BOM), isbinaryfile concludes that the file is binary. The response stream is then never converted to a string, and we get a TypeError in the dev server's transformModuleImportsPlugin because the ES lexer tries to call string methods on the buffer:
TypeError: A.charCodeAt is not a function
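The failure mode can be reproduced in isolation (this is an illustrative sketch, not the dev server's actual code): a Buffer has no charCodeAt method, so any lexer that expects a string throws exactly this kind of TypeError when handed the unconverted buffer.

```javascript
// Multi-byte UTF-8 content with no BOM, as a Buffer rather than a string.
const buf = Buffer.from("const digits = '٠١٢٣';", "utf8");

let error = null;
try {
  // charCodeAt exists on String.prototype, not on Buffer.
  buf.charCodeAt(0);
} catch (e) {
  error = e;
}
console.log(error instanceof TypeError); // true
```

Converting the buffer with `buf.toString("utf8")` first makes the same call succeed, which is why the skipped conversion is the root cause rather than the lexer itself.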
This unfortunately rears its ugly head with recent versions of @formatjs/ecma402-abstract, which, in fixing an upstream issue, introduced this problem: their digit-mapping.generated.js file now contains multi-byte UTF-8 characters but no BOM.
Now while it's technically correct that UTF-8 files may contain a BOM, the BOM is optional and most tooling omits it, so I don't think its presence should be the differentiator for deciding whether the file should be converted to a string.
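A BOM-independent alternative worth noting (a sketch, not a concrete proposal for this codebase): attempt a strict UTF-8 decode of the buffer. If the decode succeeds, the buffer is valid text regardless of whether a BOM is present.

```javascript
// Returns true when the buffer decodes cleanly as UTF-8.
// TextDecoder with { fatal: true } throws on any invalid byte sequence.
function isValidUtf8(buf) {
  try {
    new TextDecoder("utf-8", { fatal: true }).decode(buf);
    return true;
  } catch {
    return false;
  }
}

console.log(isValidUtf8(Buffer.from("٠١٢٣", "utf8"))); // true (no BOM needed)
console.log(isValidUtf8(Buffer.from([0xff, 0xfe, 0x00]))); // false (invalid UTF-8)
```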
I would suggest adding an additional check here: if the imported path's file extension is .js, assume the file is not binary and perform the buffer-to-string conversion regardless of the result from isbinaryfile.