Four tests fail on single-CPU machines
Hello. On AWS machines of type m7a.medium and r7a.medium, which incidentally have a single CPU, the Debian package for version 0.11.1 always fails to build:
=================================== FAILURES ===================================
____________________________ test_debounced_leading ____________________________

    def test_debounced_leading() -> None:
        mock1 = Mock()
        f1 = debounced(mock1, timeout=10, leading=True)
        f2 = Mock()
        for _ in range(10):
            f1()
            f2()
        time.sleep(0.1)
>       assert mock1.call_count == 2
E       AssertionError: assert 1 == 2
E        +  where 1 = <Mock id='139801139221504'>.call_count

tests/test_throttler.py:36: AssertionError
________________________________ test_throttled ________________________________

    def test_throttled() -> None:
        mock1 = Mock()
        f1 = throttled(mock1, timeout=10, leading=True)
        f2 = Mock()
        for _ in range(10):
            f1()
            f2()
        time.sleep(0.1)
>       assert mock1.call_count == 2
E       AssertionError: assert 0 == 2
E        +  where 0 = <Mock id='139801140996688'>.call_count

tests/test_throttler.py:50: AssertionError
___________________________ test_throttled_trailing ____________________________

    def test_throttled_trailing() -> None:
        mock1 = Mock()
        f1 = throttled(mock1, timeout=10, leading=False)
        f2 = Mock()
        for _ in range(10):
            f1()
            f2()
        time.sleep(0.1)
>       assert mock1.call_count == 1
E       AssertionError: assert 0 == 1
E        +  where 0 = <Mock id='139801141000048'>.call_count

tests/test_throttler.py:64: AssertionError
________________ test_throttled_debounced_signature[throttled] _________________

deco = <function throttled at 0x7f25fef3ce00>

    @pytest.mark.parametrize("deco", [debounced, throttled])
    def test_throttled_debounced_signature(deco: Callable) -> None:
        mock = Mock()

        @deco(timeout=0, leading=True)
        def f1(x: int) -> None:
            """Doc."""
            mock(x)

        # make sure we can still inspect the signature
        assert signature(f1).parameters["x"] == Parameter(
            "x", Parameter.POSITIONAL_OR_KEYWORD, annotation=int
        )
        # make sure these are connectable
        sig = SignalInstance((int, int, int))
        sig.connect(f1)
        sig.emit(1, 2, 3)
>       mock.assert_called_once_with(1)

tests/test_throttler.py:106:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <Mock id='139801139219152'>, args = (1,), kwargs = {}
msg = "Expected 'mock' to be called once. Called 0 times."

    def assert_called_once_with(self, /, *args, **kwargs):
        """assert that the mock was called exactly once and that that call was
        with the specified arguments."""
        if not self.call_count == 1:
            msg = ("Expected '%s' to be called once. Called %s times.%s"
                   % (self._mock_name or 'mock',
                      self.call_count,
                      self._calls_repr()))
>           raise AssertionError(msg)
E           AssertionError: Expected 'mock' to be called once. Called 0 times.

/usr/lib/python3.13/unittest/mock.py:990: AssertionError
=========================== short test summary info ============================
FAILED tests/test_throttler.py::test_debounced_leading - AssertionError: asse...
FAILED tests/test_throttler.py::test_throttled - AssertionError: assert 0 == 2
FAILED tests/test_throttler.py::test_throttled_trailing - AssertionError: ass...
FAILED tests/test_throttler.py::test_throttled_debounced_signature[throttled]
================== 4 failed, 570 passed, 21 skipped in 1.62s ===================
Is this a bug in the tests, or is it a bug in the code? Should I skip these tests with a skipif marker based on os.cpu_count() == 1?
Thanks.
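For reference, if skipping turns out to be the right call, a marker along these lines could gate the four tests (a sketch using pytest's skipif; the reason string is mine, not from the repository):

```python
import os

import pytest

# Skip on single-CPU machines, where the timer thread may not be
# scheduled often enough for the short debounce/throttle timeouts.
# os.cpu_count() can return None, hence the fallback to 1.
single_cpu = pytest.mark.skipif(
    (os.cpu_count() or 1) < 2,
    reason="flaky on single-CPU machines",
)


@single_cpu
def test_debounced_leading() -> None:
    ...
```

Note that os.cpu_count() reports the CPUs of the machine, not the CPUs available to the process; os.process_cpu_count() (Python 3.13+) or len(os.sched_getaffinity(0)) (Linux) would also catch builds that are affinity-restricted to one core.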
Edit: at least once, this test also failed for me on a machine with 2 vCPUs:
FAILED tests/test_throttler.py::test_throttled_trailing - AssertionError: ass...
Could you check what happens if you increase the sleep to 0.2? I suspect it could be a problem with the process scheduler.
Using 0.2 does not fix the problem. It still fails.
I forgot: to reproduce, please try booting with GRUB_CMDLINE_LINUX="nr_cpus=1". If that does not reproduce the problem, I can offer a VM for testing (I'm reachable at debian.org).
Thanks.
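As a lighter-weight alternative to rebooting with nr_cpus=1, restricting the test process to a single core via CPU affinity may trigger the same scheduling behaviour (a Linux-only sketch; it only approximates a true single-CPU machine, since the kernel still owns the other cores):

```python
import os

# Pin the current process (pid 0 = self) to CPU 0 before running the
# tests, e.g. from a small wrapper script.  All threads spawned after
# this point inherit the affinity, so the timer thread and the main
# thread must compete for a single core.
os.sched_setaffinity(0, {0})

print(len(os.sched_getaffinity(0)))  # usable CPUs for this process
```

Running pytest under `taskset -c 0` should have the same effect without touching the test code.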
Hm, I need to think. We hit this class of problem on GitHub CI as well, but most often restarting CI helps.
I think the problem is that on some server machines, thread switches do not happen frequently enough. I have not yet found a way to work around it properly.
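One possible mitigation on the test side (a sketch, not currently in the repository) is to poll for the expected call count with a deadline instead of a single fixed sleep, so a slow scheduler only makes the test slower rather than making it fail:

```python
import time
from typing import Callable


def wait_until(predicate: Callable[[], bool], timeout: float = 2.0,
               interval: float = 0.005) -> bool:
    """Poll `predicate` until it returns True or `timeout` seconds pass.

    Returns True on success, False on timeout.  Unlike a bare
    time.sleep(0.1), this tolerates machines where the timer thread is
    scheduled late, at the cost of a longer worst-case runtime.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one last check after the deadline


# Hypothetical usage, mirroring the failing assertions above:
#     assert wait_until(lambda: mock1.call_count == 2)
```

This changes the semantics slightly (the test passes as soon as the count is reached, rather than checking it at one fixed instant), which matters if the test must also verify that no extra calls happen later.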
Hey guys, any updates here? Is there anything we can do to help?