
Enforce 12 bit range to prevent silent integer overflow

Open warnes opened this issue 2 years ago • 2 comments

The current bladeRF driver / FPGA code doesn't prevent or detect overflow of the 12-bit DAC input.

The 12-bit DAC seems to accept the range [-2048, +2047], and the current driver/FPGA code doesn't range-check the input. When provided complex floating-point values, it appears to convert them to 16-bit signed integer values by multiplying by 2048 and then dropping the top 4 bits, silently converting 1.0 to +2048, which overflows to a negative value, yielding very strange RF results.
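To make the failure mode concrete, here is a minimal sketch in plain C (not actual bladeRF driver or FPGA code) of what happens when a full-scale +1.0 sample is scaled by 2048 and only 12 bits are kept. The mask-and-sign-extend step stands in for the hardware truncation and is an assumption about the behaviour, not a quote of the gateware:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    float i_sample = 1.0f;                           /* full-scale float I sample */
    int16_t raw = (int16_t)(i_sample * 2048.0f);     /* +2048, one past the 12-bit max */

    /* Keep the low 12 bits and sign-extend, as a 12-bit path effectively does.
     * (Relies on the usual two's-complement behaviour of the int16_t cast.) */
    int16_t dac = (int16_t)((int16_t)((raw & 0x0FFF) << 4) >> 4);

    printf("scaled = %d, value seen by the DAC = %d\n", raw, dac);  /* prints 2048, -2048 */
    return 0;
}
```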

As a workaround, my code explicitly converts the complex I/Q values to 16-bit integer values and enforces the range limit, but it would be much friendlier if the driver/FPGA code performed this task.

One solution is to have the driver/FPGA code apply the [-2048, +2047] limit and (ideally) generate a warning to the user.
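For illustration, here is a minimal host-side sketch of that clamping, assuming the interleaved int16 I/Q layout libbladeRF uses (SC16 Q11); the function names are made up for the example:

```c
#include <stdint.h>
#include <stddef.h>
#include <complex.h>

/* Clamp a scaled sample into the 12-bit DAC range instead of letting it wrap. */
static inline int16_t clamp12(float x)
{
    if (x >  2047.0f) return  2047;
    if (x < -2048.0f) return -2048;
    return (int16_t)x;
}

/* Convert n complex float samples to interleaved int16 I/Q,
 * scaling by 2048 and saturating rather than overflowing. */
void float_to_sc16_clamped(const float complex *in, int16_t *out, size_t n)
{
    for (size_t k = 0; k < n; k++) {
        out[2 * k + 0] = clamp12(crealf(in[k]) * 2048.0f);
        out[2 * k + 1] = clamp12(cimagf(in[k]) * 2048.0f);
    }
}
```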

FWIW, the documentation for Ettus Research's (now part of National Instruments) devices indicates they use the most significant 12 bits of the int16s, so complex data is scaled by 2^15 and the lowest four bits are dropped when feeding the DAC. I suspect this approach leverages standard CPU hardware detection of integer overflow.
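As a sketch of that approach (an assumption about the described behaviour, not Ettus/UHD source), scaling into the full int16 range with explicit saturation and then letting the converter drop the four least significant bits might look like this:

```c
#include <stdint.h>
#include <limits.h>

/* Scale a float sample by 2^15 into the full int16 range, saturating at the
 * int16 limits; a 12-bit converter then simply drops the four LSBs. */
static inline int16_t float_to_int16_sat(float x)
{
    float scaled = x * 32768.0f;
    if (scaled > (float)SHRT_MAX) return SHRT_MAX;
    if (scaled < (float)SHRT_MIN) return SHRT_MIN;
    return (int16_t)scaled;
}

/* e.g. +1.0 -> 32767 -> (32767 >> 4) == 2047, instead of wrapping to -2048. */
```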

warnes · Apr 20 '22 16:04

I use the higher bits for synchronization of my T/R switch: I have extended the internal FPGA FIFO to 13 bits and mapped the new bit to the expansion header, so I now have TTL signals that I can trigger in sync with the samples I'm transmitting simply by setting the 13th bit to 1. (They are actually offset by a few samples because of the filtering and processing in the RFIC; I have calibrated this delay.)
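A rough host-side sketch of that idea, under the assumption that the custom gateware reads the flag from bit 12 of each I word; TR_SYNC_BIT and tag_tx_span are hypothetical names, and the 13-bit FIFO / expansion-header mapping is jenda122's custom modification, not stock bladeRF behaviour:

```c
#include <stdint.h>
#include <stddef.h>

#define TR_SYNC_BIT (1 << 12)   /* hypothetical name for the extra FIFO bit */

/* Mask each I word down to its 12-bit sample and OR in the sync flag on the
 * span of samples that should drive the expansion-header pin high. */
void tag_tx_span(int16_t *iq, size_t n_samples, size_t start, size_t len)
{
    for (size_t k = start; k < start + len && k < n_samples; k++)
        iq[2 * k] = (int16_t)((iq[2 * k] & 0x0FFF) | TR_SYNC_BIT);
}
```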

Therefore, if you do implement masking of these bits, please make it configurable :)

jenda122 · Apr 20 '22 22:04

> FWIW, the documentation for Ettus Research's (now part of National Instruments) devices indicates they use the most significant 12 bits of the int16s, so complex data is scaled by 2^15 and the lowest four bits are dropped when feeding the DAC. I suspect this approach leverages standard CPU hardware detection of integer overflow.

That makes sense. Looking at how the Volk kernel that performs the float->int conversion is implemented, doing that would saturate the output into [SHRT_MIN, SHRT_MAX] and avoid this issue while still retaining good vector performance.
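As a usage sketch, assuming the kernel in question is VOLK's volk_32f_s32f_convert_16i, the conversion with a 2^15 scale would look like this (the wrapper function name is made up for the example):

```c
#include <stdint.h>
#include <volk/volk.h>

/* Convert interleaved float I/Q into int16 with a 2^15 scale; the kernel
 * clips to the int16 limits rather than wrapping on overflow. */
void floats_to_int16(const float *iq_in, int16_t *iq_out, unsigned int n_values)
{
    /* n_values counts individual I and Q components, i.e. 2 per complex sample. */
    volk_32f_s32f_convert_16i(iq_out, iq_in, 32768.0f, n_values);
}
```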

nhw76 · Apr 25 '22 18:04