jupyter_client
Clean up synchronous managers
Continued discussion from #751.
We also can't rely on `asyncio_run` in the future, since it relies on the deprecated global `get_event_loop` and `set_event_loop`.
Options for moving forward:
- Use a single private ioloop instance per manager
- The base classes will be asynchronous and call the private async versions of functions directly as needed. We will only use futures and tasks from within async methods.
- The synchronous classes will work by wrapping the asynchronous public methods in a decorator that runs them on a new private asyncio loop, e.g. `return asyncio.new_event_loop().run_until_complete(async_method())`.
- We would offer a function that wraps the async class to become synchronous, and use this ourselves
- We would have a section like:
  ```python
  # These methods are meant to be overridden by subclasses
  async def _start_kernel(...):
      ...
  ```
- Run 'asyncio-for-sync' in a dedicated IO thread. `call_soon_threadsafe` and `concurrent.futures.Future` make this pretty doable, and in my experience simpler and more robust than using `nest_asyncio`. ipyparallel's sync client has worked that way for a long time. (per @minrk)
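The last option above can be sketched in a few lines. This is a minimal illustration (not jupyter_client code) of the mechanism the quote describes: a single IO thread runs one event loop forever, and synchronous callers hand it coroutines via `call_soon_threadsafe`, blocking on a `concurrent.futures.Future` until the result is ready.

```python
import asyncio
import concurrent.futures
import threading

# One dedicated IO thread running one event loop forever.
_loop = asyncio.new_event_loop()
threading.Thread(target=_loop.run_forever, daemon=True).start()

def run_sync(coro):
    """Run a coroutine on the IO thread and block until it finishes."""
    future = concurrent.futures.Future()

    def schedule():
        # Runs on the IO thread, so creating a task here is safe.
        task = _loop.create_task(coro)
        task.add_done_callback(
            lambda t: future.set_exception(t.exception())
            if t.exception()
            else future.set_result(t.result())
        )

    _loop.call_soon_threadsafe(schedule)
    return future.result()

async def fetch():
    await asyncio.sleep(0.01)
    return 42

print(run_sync(fetch()))  # 42
```

In practice `asyncio.run_coroutine_threadsafe` bundles the schedule-and-wait dance into a single call; the expanded form above just makes the `call_soon_threadsafe` + `concurrent.futures.Future` plumbing explicit.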
> The synchronous classes will work by wrapping the asynchronous public methods in a decorator that runs them on a new private asyncio loop, e.g. `return asyncio.new_event_loop().run_until_complete(async_method())`.
We may already be running inside an event loop, so this might not be possible, e.g.:
```python
import asyncio

loop2 = asyncio.new_event_loop()

async def main2():
    await asyncio.sleep(1)
    print("main2 done")

async def main1():
    loop2.run_until_complete(main2())
    print("main1 done")

asyncio.run(main1())
```
```
Traceback (most recent call last):
  File "/home/david/github/davidbrochart/jupyterlab/foo.py", line 13, in <module>
    asyncio.run(main1())
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/base_events.py", line 641, in run_until_complete
    return future.result()
  File "/home/david/github/davidbrochart/jupyterlab/foo.py", line 10, in main1
    loop2.run_until_complete(main2())
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/base_events.py", line 617, in run_until_complete
    self._check_running()
  File "/home/david/mambaforge/envs/jupyterlab-dev/lib/python3.10/asyncio/base_events.py", line 579, in _check_running
    raise RuntimeError(
RuntimeError: Cannot run the event loop while another loop is running
sys:1: RuntimeWarning: coroutine 'main2' was never awaited
```
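For comparison, here is a reworked version of that example (slightly modified: `main2` returns its message and sleeps briefly) showing that the dedicated-thread approach avoids this `RuntimeError`: `loop2` runs forever in its own thread, and `main1` submits work to it with `asyncio.run_coroutine_threadsafe` instead of re-entering a loop on the current thread.

```python
import asyncio
import threading

# Run loop2 forever in a background thread instead of trying to re-enter it.
loop2 = asyncio.new_event_loop()
threading.Thread(target=loop2.run_forever, daemon=True).start()

async def main2():
    await asyncio.sleep(0.1)
    return "main2 done"

async def main1():
    # Blocking on .result() stalls the outer loop, but loop2 keeps running
    # in its own thread, so the call completes instead of raising.
    result = asyncio.run_coroutine_threadsafe(main2(), loop2).result()
    print(result)
    print("main1 done")

asyncio.run(main1())  # prints "main2 done" then "main1 done"
```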
> Run 'asyncio-for-sync' in a dedicated IO thread. `call_soon_threadsafe` and `concurrent.futures.Future` make this pretty doable, and in my experience simpler and more robust than using `nest_asyncio`. ipyparallel's sync client has worked that way for a long time.

(per @minrk)
Since @minrk has experience with this approach and it has worked for a long time in ipyparallel, I think it is the better solution.
I am actually working on something similar for work, which we should be able to use here. Here is a prototype that wraps an asynchronous class and uses a background thread to run it.
You'd still need to have the public methods call out to private async methods to make sure you don't end up crossing boundaries between sync and async.
Nice! It looks like it's creating a new thread for each coroutine though. It would be nice to reuse threads from a pool, for performance reasons. (Maybe you had that in mind already).
There is only one thread and one IO loop created, since it uses a singleton class.
Ah my bad :+1:
There is similar functionality in django/asgiref called `AsyncToSync`, pointed out on discuss.python.org.