
Flaky test `tests/test_time_provider.py::test_sleep_with_mock`

FirelightFlagboy opened this issue 3 years ago • 0 comments

================================== FAILURES ===================================
____________________________ test_sleep_with_mock _____________________________
[gw0] win32 -- Python 3.9.13 C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\Scripts\python.exe

deadline = 61751.44110612243

    @contextmanager
    def fail_at(deadline):
        """Creates a cancel scope with the given deadline, and raises an error if it
        is actually cancelled.
    
        This function and :func:`move_on_at` are similar in that both create a
        cancel scope with a given absolute deadline, and if the deadline expires
        then both will cause :exc:`Cancelled` to be raised within the scope. The
        difference is that when the :exc:`Cancelled` exception reaches
        :func:`move_on_at`, it's caught and discarded. When it reaches
        :func:`fail_at`, then it's caught and :exc:`TooSlowError` is raised in its
        place.
    
        Raises:
          TooSlowError: if a :exc:`Cancelled` exception is raised in this scope
            and caught by the context manager.
    
        """
    
        with move_on_at(deadline) as scope:
>           yield scope

C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\trio\_timeouts.py:106: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    @pytest.mark.trio
    async def test_sleep_with_mock():
        tp = TimeProvider()
        assert tp.sleeping_stats() == 0  # Sanity check
    
        t1 = DateTime(2001, 1, 1, 0, 0, 0)
        t2 = DateTime(2002, 1, 2, 0, 0, 0)
    
        async def _async_mock_time(time_provider, freeze, shift):
            # Make sure we are not changing the mock before time provider sleeps
            await trio.sleep(0.01)
            time_provider.mock_time(freeze=freeze, shift=shift)
    
        async with trio.open_nursery() as nursery:
    
            # Test shift mock
            with trio.fail_after(1):
                nursery.start_soon(_async_mock_time, tp, None, 11)
>               await tp.sleep(10)

tests\test_time_provider.py:112: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

abort_func = <builtins.TokioTaskAborterFromTrio object at 0x000001D826D12420>

    async def wait_task_rescheduled(abort_func):
        """Put the current task to sleep, with cancellation support.
    
        This is the lowest-level API for blocking in Trio. Every time a
        :class:`~trio.lowlevel.Task` blocks, it does so by calling this function
        (usually indirectly via some higher-level API).
    
        This is a tricky interface with no guard rails. If you can use
        :class:`ParkingLot` or the built-in I/O wait functions instead, then you
        should.
    
        Generally the way it works is that before calling this function, you make
        arrangements for "someone" to call :func:`reschedule` on the current task
        at some later point.
    
        Then you call :func:`wait_task_rescheduled`, passing in ``abort_func``, an
        "abort callback".
    
        (Terminology: in Trio, "aborting" is the process of attempting to
        interrupt a blocked task to deliver a cancellation.)
    
        There are two possibilities for what happens next:
    
        1. "Someone" calls :func:`reschedule` on the current task, and
           :func:`wait_task_rescheduled` returns or raises whatever value or error
           was passed to :func:`reschedule`.
    
        2. The call's context transitions to a cancelled state (e.g. due to a
           timeout expiring). When this happens, the ``abort_func`` is called. Its
           interface looks like::
    
               def abort_func(raise_cancel):
                   ...
                   return trio.lowlevel.Abort.SUCCEEDED  # or FAILED
    
           It should attempt to clean up any state associated with this call, and
           in particular, arrange that :func:`reschedule` will *not* be called
           later. If (and only if!) it is successful, then it should return
           :data:`Abort.SUCCEEDED`, in which case the task will automatically be
           rescheduled with an appropriate :exc:`~trio.Cancelled` error.
    
           Otherwise, it should return :data:`Abort.FAILED`. This means that the
           task can't be cancelled at this time, and still has to make sure that
           "someone" eventually calls :func:`reschedule`.
    
           At that point there are again two possibilities. You can simply ignore
           the cancellation altogether: wait for the operation to complete and
           then reschedule and continue as normal. (For example, this is what
           :func:`trio.to_thread.run_sync` does if cancellation is disabled.)
           The other possibility is that the ``abort_func`` does succeed in
           cancelling the operation, but for some reason isn't able to report that
           right away. (Example: on Windows, it's possible to request that an
           async ("overlapped") I/O operation be cancelled, but this request is
           *also* asynchronous – you don't find out until later whether the
           operation was actually cancelled or not.)  To report a delayed
           cancellation, then you should reschedule the task yourself, and call
           the ``raise_cancel`` callback passed to ``abort_func`` to raise a
           :exc:`~trio.Cancelled` (or possibly :exc:`KeyboardInterrupt`) exception
           into this task. Either of the approaches sketched below can work::
    
              # Option 1:
              # Catch the exception from raise_cancel and inject it into the task.
              # (This is what Trio does automatically for you if you return
              # Abort.SUCCEEDED.)
              trio.lowlevel.reschedule(task, outcome.capture(raise_cancel))
    
              # Option 2:
              # wait to be woken by "someone", and then decide whether to raise
              # the error from inside the task.
              outer_raise_cancel = None
              def abort(inner_raise_cancel):
                  nonlocal outer_raise_cancel
                  outer_raise_cancel = inner_raise_cancel
                  TRY_TO_CANCEL_OPERATION()
                  return trio.lowlevel.Abort.FAILED
              await wait_task_rescheduled(abort)
              if OPERATION_WAS_SUCCESSFULLY_CANCELLED:
                  # raises the error
                  outer_raise_cancel()
    
           In any case it's guaranteed that we only call the ``abort_func`` at most
           once per call to :func:`wait_task_rescheduled`.
    
        Sometimes, it's useful to be able to share some mutable sleep-related data
        between the sleeping task, the abort function, and the waking task. You
        can use the sleeping task's :data:`~Task.custom_sleep_data` attribute to
        store this data, and Trio won't touch it, except to make sure that it gets
        cleared when the task is rescheduled.
    
        .. warning::
    
           If your ``abort_func`` raises an error, or returns any value other than
           :data:`Abort.SUCCEEDED` or :data:`Abort.FAILED`, then Trio will crash
           violently. Be careful! Similarly, it is entirely possible to deadlock a
           Trio program by failing to reschedule a blocked task, or cause havoc by
           calling :func:`reschedule` too many times. Remember what we said up
           above about how you should use a higher-level API if at all possible?
    
        """
>       return (await _async_yield(WaitTaskRescheduled(abort_func))).unwrap()

C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\trio\_core\_traps.py:166: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def unwrap(self):
        self._set_unwrapped()
        # Tracebacks show the 'raise' line below out of context, so let's give
        # this variable a name that makes sense out of context.
        captured_error = self.error
        try:
>           raise captured_error

C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\outcome\_impl.py:138: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def raise_cancel():
>       raise Cancelled._create()
E       trio.Cancelled: Cancelled

C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\trio\_core\_run.py:1173: Cancelled

During handling of the above exception, another exception occurred:

value = <trio.Nursery object at 0x000001D826D12E50>

    async def yield_(value=None):
>       return await _yield_(value)

C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\async_generator\_impl.py:106: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\async_generator\_impl.py:99: in _yield_
    return (yield _wrap(value))
tests\test_time_provider.py:113: in test_sleep_with_mock
    await wait_for_sleeping_stat(tp, 0)
C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\contextlib.py:137: in __exit__
    self.gen.throw(typ, value, traceback)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

deadline = 61751.44110612243

    @contextmanager
    def fail_at(deadline):
        """Creates a cancel scope with the given deadline, and raises an error if it
        is actually cancelled.
    
        This function and :func:`move_on_at` are similar in that both create a
        cancel scope with a given absolute deadline, and if the deadline expires
        then both will cause :exc:`Cancelled` to be raised within the scope. The
        difference is that when the :exc:`Cancelled` exception reaches
        :func:`move_on_at`, it's caught and discarded. When it reaches
        :func:`fail_at`, then it's caught and :exc:`TooSlowError` is raised in its
        place.
    
        Raises:
          TooSlowError: if a :exc:`Cancelled` exception is raised in this scope
            and caught by the context manager.
    
        """
    
        with move_on_at(deadline) as scope:
            yield scope
        if scope.cancelled_caught:
>           raise TooSlowError
E           trio.TooSlowError

C:\Users\runneradmin\AppData\Local\pypoetry\Cache\virtualenvs\parsec-cloud-_mDprxAt-py3.9\lib\site-packages\trio\_timeouts.py:108: TooSlowError
============================ slowest 10 durations =============================
1.01s call     tests/test_time_provider.py::test_sleep_with_mock
0.77s call     tests/test_cli.py::test_pki_enrollment[mock_parsec_ext]
0.15s call     tests/backend/test_administration_rest_api.py::test_organization_update_not_found[unknown]
0.09s call     tests/test_time_provider.py::test_sleep_in_nursery[raw]
0.08s call     tests/backend/test_access.py::test_authenticated_has_limited_access
0.06s call     tests/backend/test_events.py::test_event_resubscribe
0.05s call     tests/backend/test_events.py::test_events_subscribe
0.04s call     tests/test_cli.py::test_share_workspace[NONE]
0.04s call     tests/backend/test_administration_rest_api.py::test_organization_update_ok[True]
0.04s teardown tests/api/test_handshake.py::test_build_result_req_bad_key
=========================== short test summary info ===========================
FAILED tests/test_time_provider.py::test_sleep_with_mock - trio.TooSlowError
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 1 failures !!!!!!!!!!!!
================= 1 failed, 246 passed, 15 skipped in 17.93s ==================

https://github.com/Scille/parsec-cloud/actions/runs/3240377967/jobs/5310926258
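
A plausible explanation for the flakiness: `_async_mock_time` only waits a fixed 0.01s of real time before applying the mock, and the whole sequence (the real 0.01s sleep, the scheduling of both tasks, and the mocked `tp.sleep(10)`) has to fit inside the `trio.fail_after(1)` window of real time. On a loaded Windows CI runner that margin can be eaten up, and the deadline then fires as the `trio.TooSlowError` shown above.

A minimal sketch of a possible mitigation, assuming `sleeping_stats()` counts the tasks currently blocked in `tp.sleep()` (which is what the test's sanity check suggests): wait until the provider actually reports a sleeper before applying the mock, instead of relying on a fixed real-time delay.

```python
import trio

async def _async_mock_time(time_provider, freeze, shift):
    # Wait until the time provider reports at least one sleeping task,
    # i.e. tp.sleep(10) has actually started, before changing the mock.
    # This replaces the fixed `await trio.sleep(0.01)` with a deterministic
    # ordering that does not depend on real-time scheduling.
    while time_provider.sleeping_stats() == 0:
        await trio.sleep(0)  # checkpoint: let the sleeping task get scheduled
    time_provider.mock_time(freeze=freeze, shift=shift)
```

Alternatively, simply widening the `trio.fail_after(1)` window would reduce, though not eliminate, the chance of hitting the deadline on slow runners.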

FirelightFlagboy · Oct 13 '22 07:10