`readFloat64` does not work correctly in LE mode
The readFloat64 method reads two uint32s into _scratch, and in LE mode each of those uint32s has its byte order flipped individually, but the entire 8-byte sequence is never swapped as a whole
Repro:
import { BitStream } from 'bit-buffer';
const bytes = Buffer.from([ 0x7a, 0x36, 0xab, 0x3e, 0x57, 0x5b, 0xb1, 0xbf ]);
const stream1 = new BitStream(bytes.buffer as ArrayBuffer, bytes.byteOffset, bytes.byteLength);
const stream2 = new BitStream(bytes.buffer as ArrayBuffer, bytes.byteOffset, bytes.byteLength);
stream1.bigEndian = false;
stream2.bigEndian = true;
console.log(stream1.readFloat64()); // 8.110049515380524e-7 - Wrong, should be -0.0678....
console.log(stream2.readFloat64()); // 5.143595480303752e+280 - Correct
I'm working around this using the following:
// Reading
const bytes = new Uint8Array(8);
for (let i = 0; i < 8; i++) {
bytes[i] = stream.readUint8();
}
const view = new DataView(bytes.buffer);
const value = view.getFloat64(0, stream.bigEndian === true);
// Writing
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, value, stream.bigEndian === true);
for (let i = 0; i < 8; i++) {
stream.writeUint8(view.getUint8(i));
}
This should also be usable upstream as a real fix.
Sorry, was this supposed to say in big-endian mode (instead of in little-endian mode)?
For big-endian mode, it seems changing https://github.com/inolen/bit-buffer/blob/master/bit-buffer.js#L166 to:
getFloat64 (offset) {
	// Note: the indices need to be numbers; a raw boolean key like
	// [!this.#bigEndian] is ignored by typed arrays.
	BitView.#scratchU32[this.#bigEndian ? 0 : 1] = this.getUint32(offset);
	BitView.#scratchU32[this.#bigEndian ? 1 : 0] = this.getUint32(offset + 32);
	return BitView.#scratchF64[0];
}
should do the trick.
No, it's little-endian mode that has the issue. Here is another example:
const { BitStream } = require('bit-buffer');
const buffer1 = Buffer.alloc(8);
const buffer2 = Buffer.alloc(8);
buffer1.writeDoubleLE(1.0, 0);
buffer2.writeDoubleBE(1.0, 0);
const stream1 = new BitStream(buffer1.buffer, buffer1.byteOffset, buffer1.byteLength);
const stream2 = new BitStream(buffer2.buffer, buffer2.byteOffset, buffer2.byteLength);
stream1.bigEndian = false;
stream2.bigEndian = true;
console.log(stream1.readFloat64()); // 5.299808824e-315
console.log(stream2.readFloat64()); // 1
Both streams should be returning 1, but only the stream in big-endian mode is correct. stream1 has bigEndian set to false, and it does not return the expected value
At least that's how it is in the latest npm release; I just tested this on a fresh install. I'm unsure whether your latest changes fix the issue, since they don't seem to be on npm yet
As of the latest npm release, changing:
BitView.prototype.getFloat64 = function (offset) {
BitView._scratch.setUint32(0, this.getUint32(offset));
// DataView offset is in bytes.
BitView._scratch.setUint32(4, this.getUint32(offset+32));
return BitView._scratch.getFloat64(0);
};
to something like:
BitView.prototype.getFloat64 = function (offset) {
if (this.bigEndian) {
BitView._scratch.setUint32(0, this.getUint32(offset));
BitView._scratch.setUint32(4, this.getUint32(offset + 32));
} else {
BitView._scratch.setUint32(0, this.getUint32(offset + 32));
BitView._scratch.setUint32(4, this.getUint32(offset));
}
return BitView._scratch.getFloat64(0);
};
to properly swap the bytes fixes the issue, and both streams return the expected value
I'm sure writing also has the same issue, but I didn't check personally
What machine are you running this on? It feels like you're running this on a BE host.
I'm running this on my brand new Apple Silicon 2024 Mac Mini, which is little-endian, and I have not had any other issues with endianness in any other tools/libraries. I'm sorry to ask, but have you tested my repros? They show the issue
You can also clearly see the issue by checking the contents of _scratch and comparing it to the actual buffer contents
I modified the original getFloat64 to log the contents of _scratch like so:
BitView.prototype.getFloat64 = function (offset) {
BitView._scratch.setUint32(0, this.getUint32(offset));
BitView._scratch.setUint32(4, this.getUint32(offset+32));
console.log(`BitView._scratch (${this.bigEndian ? 'BE' : 'LE' })`, Buffer.from(BitView._scratch.buffer));
return BitView._scratch.getFloat64(0);
};
and then updated my most recent repro to also log the original buffer contents:
const { BitStream } = require('bit-buffer');
const buffer1 = Buffer.alloc(8);
const buffer2 = Buffer.alloc(8);
buffer1.writeDoubleLE(1.0, 0);
buffer2.writeDoubleBE(1.0, 0);
console.log('buffer1', buffer1); // buffer1 <Buffer 00 00 00 00 00 00 f0 3f>
console.log('buffer2', buffer2); // buffer2 <Buffer 3f f0 00 00 00 00 00 00>
const stream1 = new BitStream(buffer1.buffer, buffer1.byteOffset, buffer1.byteLength);
const stream2 = new BitStream(buffer2.buffer, buffer2.byteOffset, buffer2.byteLength);
stream1.bigEndian = false;
stream2.bigEndian = true;
console.log(stream1.readFloat64()); // BitView._scratch (LE) <Buffer 00 00 00 00 3f f0 00 00>, 5.299808824e-315
console.log(stream2.readFloat64()); // BitView._scratch (BE) <Buffer 3f f0 00 00 00 00 00 00>, 1
buffer1, which was written little-endian, contains <Buffer 00 00 00 00 00 00 f0 3f>; however, _scratch contains <Buffer 00 00 00 00 3f f0 00 00>, which is exactly the issue I originally described:
each of those uint32s flip their byte order when in LE mode, but it doesn't properly swap the entire 8-byte sequence
_scratch is filled from two separate uint32 reads, and each of those uint32 values has its byte order swapped based on bigEndian, but it's the entire 8-byte sequence that needs the swap. The data in _scratch matches neither of the original buffers, because the byte-order swapping is happening at the wrong granularity
If you add the same logging to the updated function I sent before:
BitView.prototype.getFloat64 = function (offset) {
if (this.bigEndian) {
BitView._scratch.setUint32(0, this.getUint32(offset));
BitView._scratch.setUint32(4, this.getUint32(offset + 32));
} else {
BitView._scratch.setUint32(0, this.getUint32(offset + 32));
BitView._scratch.setUint32(4, this.getUint32(offset));
}
console.log(`BitView._scratch (${this.bigEndian ? 'BE' : 'LE' })`, Buffer.from(BitView._scratch.buffer));
return BitView._scratch.getFloat64(0);
};
You now get:
const { BitStream } = require('bit-buffer');
const buffer1 = Buffer.alloc(8);
const buffer2 = Buffer.alloc(8);
buffer1.writeDoubleLE(1.0, 0);
buffer2.writeDoubleBE(1.0, 0);
console.log('buffer1', buffer1); // buffer1 <Buffer 00 00 00 00 00 00 f0 3f>
console.log('buffer2', buffer2); // buffer2 <Buffer 3f f0 00 00 00 00 00 00>
const stream1 = new BitStream(buffer1.buffer, buffer1.byteOffset, buffer1.byteLength);
const stream2 = new BitStream(buffer2.buffer, buffer2.byteOffset, buffer2.byteLength);
stream1.bigEndian = false;
stream2.bigEndian = true;
console.log(stream1.readFloat64()); // BitView._scratch (LE) <Buffer 3f f0 00 00 00 00 00 00>, 1
console.log(stream2.readFloat64()); // BitView._scratch (BE) <Buffer 3f f0 00 00 00 00 00 00>, 1
This lines up exactly with the expected data because the entire 8-byte sequence has its bytes swapped (LE input is converted to BE, since return BitView._scratch.getFloat64(0); always reads big-endian)
Alternatively, the workaround I mentioned in my original message also works as a real fix, with the addition of passing the bigEndian setting directly so the byte order doesn't need to be manually swapped at all:
BitView.prototype.getFloat64 = function (offset) {
for (let i = 0; i < 8; i++) {
BitView._scratch.setUint8(i, this.getUint8(offset+(i*8)));
}
return BitView._scratch.getFloat64(0, !this.bigEndian);
};
this version returns the expected results for both endian modes
Quick question, can you clone the repo and run npm install / npm run test and let me know if they succeed or not?
The tests all pass in the latest commit, but copying my repro to use said latest commit still fails to return the expected value. It has the same behavior as the latest npm release.
I'm sorry to ask again, but have you tried my repros? Can you confirm whether or not you can replicate the issue using the examples I've given? From my perspective it seems pretty clear where the issue is, and I've given a number of working fixes. So I'm trying to confirm whether you have been able to reproduce the issue, and if so, whether the suggested fixes have also resolved it for you
I've double checked the tests and despite the fact that they pass, the tests clearly have some issues. I've only checked the tests relevant to float64 but I suspect other tests also have issues:
test('Min / max float64 (normal values)', function () {
const buffer = new ArrayBuffer(16);
const u32 = new Uint32Array(buffer);
u32[0] = 0x00100000;
u32[1] = 0x00000000;
u32[2] = 0x7fefffff;
u32[3] = 0xffffffff;
const f64 = new Float64Array(buffer);
const min = f64[0];
const max = f64[1];
bsw.writeFloat64(min);
bsw.writeFloat64(max);
assert.equal(bsr.readFloat64(), min);
assert.equal(bsr.readFloat64(), max);
});
Notably:
- The `min` and `max` values seem to be incorrect? I assume they are supposed to represent the min/max float64 values? However, when actually checking what the values are, `min` is `5.180654e-318` and `max` is `NaN`
- The tests do not actually have full coverage of all the various methods in both endian modes
So I'm not sure relying on the tests passing is a reliable metric right now
Hey, I absolutely did take a look at the example, thanks for taking the time to put that together.
The problem was that I had forgotten about the behavior of DataView (I hadn't touched that code in many years), and when I ran your example I got the exact opposite result from you, because I was using the newer TypedArray-based code instead. At first I thought perhaps you were running a BE host.
I submitted a patch (basically what I posted in https://github.com/inolen/bit-buffer/issues/31#issuecomment-3470492523) that I think should resolve your issue now.
Sounds good 👍 sorry if my asking sounded accusatory, it wasn't meant to be. I was just trying to get on the same page and confirm whether or not we're both seeing the issue
I'll check the patch in a few hours and report back, thank you for the upkeep
Confirmed that the latest commit does in fact fix things 👍 my test now shows the expected values, thank you very much for taking the time to fix this
I also double-checked the tests, and min and max now seem to be correct in the float64 example 👍
If I could make a suggestion though: I think switching from assert.equal to assert(a === b) for basic number comparisons might be better, since that would have caught the NaN issue and shown this bug sooner. Neither assert.equal nor assert.strictEqual will catch NaN here: assert.equal has a special case that lets a test pass if both inputs are NaN, and assert.strictEqual uses Object.is under the hood, which also passes if both inputs are NaN. But NaN === NaN is always false, so using assert(a === b) where either or both values are NaN would have failed the test and revealed the bug (which is ideal, since NaN is not valid here)