Task.read interprets data incorrectly for short reads
When `DAQmxReadAnalogF64` with `DAQmx_Val_GroupByChannel` returns fewer samples than requested, it squashes the valid samples together at the beginning of the buffer. The way that `nidaqmx.Task.read` handles short reads doesn't take this into account, so it may return samples that have been overwritten.
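For what it's worth, here is a minimal NumPy sketch of what I think is going on. No hardware is involved; the channel values (1.0/2.0/3.0) and the compaction step are my reconstruction from the trace further down, not actual nidaqmx code:

```python
import numpy as np

num_channels = 3
samples_to_read = 10    # samples per channel requested from Task.read
samples_acquired = 5    # samples per channel actually available (short read)

# Buffer sized for the full request, grouped by channel: each channel owns a
# samples_to_read-long slot, and unfilled slots keep stale contents (0.0 here).
buffer = np.zeros(num_channels * samples_to_read)
for chan in range(num_channels):
    start = chan * samples_to_read
    buffer[start : start + samples_acquired] = chan + 1.0
# buffer is now [1]*5 + [0]*5 + [2]*5 + [0]*5 + [3]*5 + [0]*5

# The driver compacts the valid samples to the front of the buffer, which
# overwrites part of what follows (the old 2s get clobbered by 3s).
valid = np.concatenate(
    [buffer[c * samples_to_read : c * samples_to_read + samples_acquired]
     for c in range(num_channels)]
)
buffer[: valid.size] = valid
# buffer is now [1]*5 + [2]*5 + [3]*5 followed by stale leftovers

# Slicing by the *requested* samples per channel returns stale/overwritten data:
wrong = buffer.reshape(num_channels, samples_to_read)[:, :samples_acquired]
print(wrong)  # rows are all 1s, all 3s, all 3s -- the failure seen in the test below

# Honoring the compacted layout and the *actual* sample count gives the right answer:
right = buffer[: num_channels * samples_acquired].reshape(num_channels, samples_acquired)
print(right)  # rows are all 1s, all 2s, all 3s
```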
Test case (from `tests/component/test_task_read_ai.py`, under development):
```python
def test___analog_multi_channel_finite___read_too_many_sample___returns_valid_2d_channels_samples_truncated(
    ai_multi_channel_task: nidaqmx.Task,
) -> None:
    samples_to_acquire = 5
    ai_multi_channel_task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=samples_to_acquire)
    num_channels = ai_multi_channel_task.number_of_channels
    samples_to_read = 10
    data = ai_multi_channel_task.read(samples_to_read)
    expected = [
        [_get_voltage_offset_for_chan(chan_index) for _ in range(samples_to_acquire)]
        for chan_index in range(num_channels)
    ]
    _assert_equal_2d(data, expected, abs=VOLTAGE_EPSILON)
```
Result (lib and grpc fail the same way, because the underlying interpreters have consistent behavior here):
```
_ test___analog_multi_channel_finite___read_too_many_sample___returns_valid_2d_channels_samples_truncated[library_init_kwargs] _

ai_multi_channel_task = Task(name=_unnamedTask<0>)

    def test___analog_multi_channel_finite___read_too_many_sample___returns_valid_2d_channels_samples_truncated(
        ai_multi_channel_task: nidaqmx.Task,
    ) -> None:
        samples_to_acquire = 5
        ai_multi_channel_task.timing.cfg_samp_clk_timing(rate=1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=samples_to_acquire)
        num_channels = ai_multi_channel_task.number_of_channels
        samples_to_read = 10
        data = ai_multi_channel_task.read(samples_to_read)
        expected = [
            [_get_voltage_offset_for_chan(chan_index) for _ in range(samples_to_acquire)]
            for chan_index in range(num_channels)
        ]
>       _assert_equal_2d(data, expected, abs=VOLTAGE_EPSILON)

tests\component\test_task_read_ai.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

data = [[1.000091555528428, 1.000396740623188, 1.000396740623188, 0.999786370433668, 0.999786370433668], [3.000274666585284, ...4, 2.999969481490524], [3.000274666585284, 3.000579851680044, 3.000579851680044, 2.999969481490524, 2.999969481490524]]
expected = [[1.0, 1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0, 2.0], [3.0, 3.0, 3.0, 3.0, 3.0]], abs = 0.001

    def _assert_equal_2d(data: List[List[float]], expected: List[List[float]], abs: float) -> None:
        # pytest.approx() does not support nested data structures.
        assert len(data) == len(expected)
        for i in range(len(data)):
>           assert data[i] == pytest.approx(expected[i], abs=abs)
E           assert [3.000274666585284, 3.000579851680044, 3.000579851680044, 2.999969481490524, 2.999969481490524] == approx([2.0 ± 1.0e-03, 2.0 ± 1.0e-03, 2.0 ± 1.0e-03, 2.0 ± 1.0e-03, 2.0 ± 1.0e-03])
E
E             comparison failed. Mismatched elements: 5 / 5:
E             Max absolute difference: 1.000579851680044
E             Max relative difference: 0.3334621643612693
E             Index | Obtained          | Expected
E             0     | 3.000274666585284 | 2.0 ± 1.0e-03
E             1     | 3.000579851680044 | 2.0 ± 1.0e-03
E             2     | 3.000579851680044 | 2.0 ± 1.0e-03
E             3     | 2.999969481490524 | 2.0 ± 1.0e-03
E             4     | 2.999969481490524 | 2.0 ± 1.0e-03

tests\component\test_task_read_ai.py:151: AssertionError
```
NI IO Trace shows that the original array was

```
[1,1,1,1,1,0,0,0,0,0,2,2,2,2,2,0,0,0,0,0,3,3,3,3,3,0,0,0,0,0]
```

and that it was then squashed to

```
[1,1,1,1,1,2,2,2,2,2,3,3,3,3,3,x,x,x,x,x,x,x,x,x,x,x,x,x,x,x]
```

When this happened, the old 2s were overwritten with 3s.

This test case passes if you increase the number of samples to acquire, because a larger read buffer prevents the old and new sample positions from overlapping.
FYI, `DAQmx_Val_GroupByScanNumber` does not have this data squashing/shifting behavior. With `DAQmx_Val_GroupByScanNumber`, the original array would be

```
[1,2,3,1,2,3,1,2,3,1,2,3,1,2,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
```

and truncating the number of samples would not move any of the samples.
NumPy can use this data format without transposing the array indices if you specify Fortran-contiguous order instead of C-contiguous order, but nidaqmx-python doesn't currently support this.
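If it helps, here is a hedged sketch (plain NumPy with made-up channel values, not a proposed nidaqmx-python API) of how the interleaved layout maps onto a channels-by-samples array with Fortran ordering:

```python
import numpy as np

num_channels = 3
samples_to_read = 10    # samples per channel requested
samples_acquired = 5    # samples per channel actually available

# Interleaved buffer: scan 0 = [ch0, ch1, ch2], scan 1 = [ch0, ch1, ch2], ...
buffer = np.zeros(num_channels * samples_to_read)
for scan in range(samples_acquired):
    for chan in range(num_channels):
        buffer[scan * num_channels + chan] = chan + 1.0
# buffer is now [1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3] followed by zeros

# A short read leaves every valid sample where it already was, so truncating
# is just a slice of the first samples_acquired scans:
valid = buffer[: num_channels * samples_acquired]

# Interpreting the interleaved data as (channels, samples) without copying or
# transposing means letting the channel index vary fastest, i.e. Fortran order:
data = valid.reshape((num_channels, samples_acquired), order="F")
print(data)  # rows are all 1s, all 2s, all 3s
```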