ipykernel disallows the Jupyter user from creating/attaching to an event loop
Hello,
As a follow-up on https://github.com/jupyter/notebook/issues/3397#issuecomment-419474214, here is a minimal example that I expect to finish successfully when sent to the kernel by Jupyter:
import asyncio
print(asyncio.get_event_loop().run_until_complete(asyncio.sleep(0)))
It runs fine in the latest IPython, which doesn't use tornado under the hood.
However, notebook>=5.7.9 requires tornado>=5, so when ipykernel executes loop = tornado.ioloop.IOLoop.current(), an asyncio event loop is created and run forever (run_forever) by ipykernel.
This consequently leaves the user without options (leaving aside nest-asyncio, which works for most cases), as there is already a running asyncio loop on startup.
If IPython can run this simple snippet, shouldn't Jupyter be able to do the same? Currently, any package that does a plain loop setup and teardown will crash in Jupyter environments that have tornado>=5.
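For illustration, here is a minimal sketch of what I mean by a plain loop setup and teardown (fetch_data is just a hypothetical stand-in for real async work): it runs fine in Python/IPython, but in a notebook cell both calls fail because ipykernel's loop is already running.

```python
import asyncio

async def fetch_data():
    # hypothetical stand-in for real async work
    await asyncio.sleep(0)
    return 42

# Plain loop setup and teardown.
# Python/IPython: prints 42.
# Notebook cell:  RuntimeError: This event loop is already running
print(asyncio.get_event_loop().run_until_complete(fetch_data()))

# The officially recommended entry point fares no better here.
# Python/IPython: prints 42.
# Notebook cell:  RuntimeError: asyncio.run() cannot be called from a running event loop
print(asyncio.run(fetch_data()))
```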
@ddelange Can you explain why you don't just call await asyncio.sleep(0)?
Your statement is correct, but also note that "await asyncio.sleep(0)" does not work with the latest default IPython. This isn't really a bug, but rather a design choice. So, you could file a bug over there.
In reality, I think the complaint here is more with the asyncio folks, who have created a functional dichotomy in that sync functions cannot call async functions except via a top-level asyncio.run().
> Can you explain why you don't just call await asyncio.sleep(0)?
The main reason why I think a cell should allow sync functions to attach/create a loop under the hood is for async libraries that provide sync wrappers built on new_event_loop, get_event_loop, get_running_loop, or run.
In order for those sync wrappers to work in a notebook, the developers of those libraries would have to jump through hoops to work around this issue. For instance, s3fs uses fsspec.asyn to start new loops in separate threads all the time.
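To sketch the kind of hoops I mean (this is not the actual fsspec.asyn code, just the general pattern): keep a dedicated event loop alive on a background thread and have the sync wrapper hand coroutines to it with asyncio.run_coroutine_threadsafe, so the wrapper never touches whatever loop the kernel is running.

```python
import asyncio
import threading

# Dedicated loop on a daemon thread, independent of the kernel's loop.
_loop = asyncio.new_event_loop()
threading.Thread(target=_loop.run_forever, daemon=True).start()

async def _download(url):
    await asyncio.sleep(0)  # stand-in for real async I/O
    return f"contents of {url}"

def download(url):
    """Sync wrapper that works both in scripts and in notebook cells."""
    future = asyncio.run_coroutine_threadsafe(_download(url), _loop)
    return future.result()  # blocks only the calling thread

print(download("s3://bucket/key"))
```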
But mostly, developers will expect their code (working in Python/IPython) to run in notebooks as well. As a result, there are now numerous projects that require 'tornado<5', 'notebook<5.7.9', or some other hacky solution:
- https://github.com/microsoft/playwright-python/issues/178#issuecomment-704143667
- https://github.com/tensorflow/federated/issues/869#issuecomment-643578291
- 40+ more libraries mentioning this issue: https://github.com/jupyter/notebook/issues/3397
As this issue does not occur in the Python/IPython REPL but only in notebooks (as mentioned in various linked issues), I think there should be a solution where ipykernel does not reserve the sole right to create an event loop on the main thread, along the lines of the suggestions from @minrk.
Using nest_asyncio, or forcing every library that wraps async code to refactor purely for notebook compatibility, is not a feasible solution in my opinion.
> So, you could file a bug over there.
Working for me on latest IPython:

I think this statement conflicts with the design goals of asyncio:
" forcing every library that wraps async code to refactor purely for notebook compatibility is not a feasible solution in my opinion."
Forcing the refactoring of code for compatibility is actually the intention of the asyncio designers. They discuss this in numerous places. They believe that all async code needs to have the ability to execute in an async context. They have specifically said that there should only be one global event loop per thread.
Breaking this contract seems like a great way to generate a lot of confusion in the ecosystem and is no better than monkey patching.
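For reference, that contract looks roughly like this in practice (a sketch, not an authoritative reading of the asyncio docs): each thread that needs a loop creates and installs its own, and no thread drives another thread's loop directly.

```python
import asyncio
import threading

def worker():
    # This thread owns its loop: create, install, use, and close it here.
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        loop.run_until_complete(asyncio.sleep(0))
    finally:
        loop.close()

t = threading.Thread(target=worker)
t.start()
t.join()
```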
Try this:
Python 3.8.5 (default, Jul 28 2020, 12:59:40)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.18.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import asyncio
In [2]: asyncio.run(asyncio.sleep(1))
In [3]: await asyncio.sleep(1)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/IPython/core/async_helpers.py in __call__(self, coro)
26 import asyncio
27
---> 28 return asyncio.get_event_loop().run_until_complete(coro)
29
30 def __str__(self):
/usr/lib/python3.8/asyncio/events.py in get_event_loop(self)
637
638 if self._local._loop is None:
--> 639 raise RuntimeError('There is no current event loop in thread %r.'
640 % threading.current_thread().name)
641
RuntimeError: There is no current event loop in thread 'MainThread'.
In [4]: print(asyncio.get_event_loop().run_until_complete(asyncio.sleep(0)))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-4-f1d2d15d8670> in <module>
----> 1 print(asyncio.get_event_loop().run_until_complete(asyncio.sleep(0)))
/usr/lib/python3.8/asyncio/events.py in get_event_loop(self)
637
638 if self._local._loop is None:
--> 639 raise RuntimeError('There is no current event loop in thread %r.'
640 % threading.current_thread().name)
641
RuntimeError: There is no current event loop in thread 'MainThread'.
In [5]:
Note that the asyncio docs specifically discourage their users from calling get_event_loop:
"Because this function has rather complex behavior (especially when custom event loop policies are in use), using the get_running_loop() function is preferred to get_event_loop() in coroutines and callbacks."
https://docs.python.org/3/library/asyncio-eventloop.html
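In other words, the documented preference is roughly: inside a coroutine, get the loop that is already driving you via get_running_loop(); from synchronous code, use asyncio.run() (or manage a loop explicitly) rather than get_event_loop(). A small sketch:

```python
import asyncio

async def main():
    # Inside a coroutine a running loop is guaranteed to exist.
    loop = asyncio.get_running_loop()
    print(loop.time())
    await asyncio.sleep(0)

# From sync code, let asyncio.run create and tear down the loop.
asyncio.run(main())

# Outside any coroutine, get_running_loop() raises:
# RuntimeError: no running event loop
```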
Making an API public and then discouraging its use is rather unnerving, but that is what we have.
My general thought is that a lot of grief is occurring, and folks are laying concerns at the feet of application owners rather than the language designers themselves. Trying to implement around their contracts is just going to cause bugs, incompatibility, and a lack of backwards compatibility.
The language designers rejected addressing this here: https://bugs.python.org/issue33523
We can point fingers at individual libraries, but unless there is a way of pointing fingers consistently (i.e., a contract), this is going to lead to a global mess.
The currently proposed solutions are changing asyncio libraries to work with ipykernel or modifying Python. Is there a way to change ipykernel instead?
Hi all, I'm following up on this. nest_asyncio was deprecated/archived earlier this month. We have also seen real issues with that workaround: when the asyncio tasks launched are I/O intensive, it causes the entire Jupyter server to freeze/stutter. I'm wondering if we can go back to the table and consider @minrk's potential option of running ipykernel in a background thread.
A somewhat related issue is that nest_asyncio has fallen out of date with Python 3.12+, which added an optional loop_factory keyword argument to asyncio.run. nest_asyncio's patched run doesn't pass this keyword argument through, resulting in errors like:
> asyncio_run(amain(), loop_factory=get_loop_factory())
E TypeError: _patch_asyncio.<locals>.run() got an unexpected keyword argument 'loop_factory'
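To illustrate the mismatch (a sketch, assuming Python 3.12+, where asyncio.run accepts a loop_factory keyword; patched_run below is an illustrative stand-in for nest_asyncio's replacement, not its actual code):

```python
import asyncio

async def amain():
    await asyncio.sleep(0)
    return "ok"

# Python 3.12+: the stock entry point accepts loop_factory.
print(asyncio.run(amain(), loop_factory=asyncio.new_event_loop))

# A replacement written against the pre-3.12 signature drops that support:
def patched_run(main, *, debug=None):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(main)
    finally:
        loop.close()

# patched_run(amain(), loop_factory=asyncio.new_event_loop)
# -> TypeError: patched_run() got an unexpected keyword argument 'loop_factory'
```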