nb_conda_kernels
Hot kernel discovery
Is it technically possible to detect newly created conda kernels and load them in an already running notebook?
A kind of timer that would look for new kernels periodically.
ping @mcg1969
I must be honest, I don't know if it's possible or not. I do know that the `conda info` data is cached every `CACHE_TIMEOUT` seconds, but I can't recall the mechanism by which a refresh would be triggered again. I would welcome further input on this issue from anyone.
Thanks for the answer.
Maybe a conda expert such as @msarahan could help us?
First - I've just started looking at nb_conda_kernels, so I'm no expert. I am quite familiar though with kernel management (and kernelspecs) in general and was looking at this repo (and contacting @mcg1969) regarding the kernel provider work that is coming along.
Generally speaking, new kernelspecs are immediately available. The notebook front-ends ask for the kernelspecs fairly regularly, and kernelspecs added to the appropriate locations are picked up on the "next call". There has been discussion of adding a cache in the libraries (currently jupyter_client) or applications, but that hasn't happened yet.
As Michael mentions, the conda kernelspecs are essentially cached, with a cache timeout of 60 seconds. Two things are cached: the set of conda envs, and the set of kernelspecs (across those envs), each of which has a separately managed expiration. I'm not familiar enough with nb_conda_kernels to know whether new kernelspecs can be added to an existing env, or whether each env supports one kernelspec per language (Python and R), etc.
Aside from the last point, I would expect that you'd see both kernels for newly added conda envs and new kernelspecs added to existing envs after, worst case, 2 minutes. The reason I say two minutes is that the kernelspec cache may expire (and refresh) just before the env cache does, so kernels in a new env might not show up until two kernelspec cache expirations later, hence 2 minutes.
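To make the worst case concrete, here is a minimal sketch (not nb_conda_kernels' actual code, and all names are illustrative) of the two staggered 60-second caches described above: one for the conda env list, and one for the kernelspecs built from that list. It shows how a new env can take close to two full cache periods to surface:

```python
CACHE_TIMEOUT = 60  # seconds, matching the timeout mentioned in the thread

class TTLCache:
    """Cache a value, recomputing it once it is older than `ttl` seconds."""
    def __init__(self, ttl, refresh):
        self.ttl, self.refresh = ttl, refresh
        self.value, self.stamp = None, None

    def get(self, now):
        if self.stamp is None or now - self.stamp >= self.ttl:
            self.value = self.refresh(now)
            self.stamp = now
        return self.value

envs_on_disk = {"base"}  # stand-in for what a conda query would report
env_cache = TTLCache(CACHE_TIMEOUT, lambda now: set(envs_on_disk))
spec_cache = TTLCache(
    CACHE_TIMEOUT,
    lambda now: {f"conda-env-{env}-py" for env in env_cache.get(now)},
)

env_cache.get(0)               # env cache fills at t=0
envs_on_disk.add("newenv")     # a new env is created shortly after, at t~1
spec_cache.get(59)             # spec cache fills at t=59 from the stale env list

# At t=60 the env cache is due for a refresh, but the spec cache is still fresh:
assert "conda-env-newenv-py" not in spec_cache.get(60)
# Only at t=119, when the spec cache itself expires, does the kernel appear,
# roughly two minutes after the env was created:
assert "conda-env-newenv-py" in spec_cache.get(119)
```

The key detail is that the two expirations are independent, so the delays can stack rather than overlap.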
I hope that helps (and is correct :smile:).
This feature would be useful when Jupyter is running on a remote machine, because we wouldn't have to restart Jupyter after creating a new env.
Thanks @kevin-bates. So the information is available, from what I understand. Maybe a callback could be added on the Jupyter side when a new kernel is detected, so that `nb_conda_kernels` could register to it and trigger an update whenever a new conda env is created?
I think the Jupyter side is fine, since it hits the /api/kernelspecs endpoint whenever it needs to refresh the set of kernels. I think Lab may even use a polling sequence for this (but I'm not very familiar with the front ends).
I think the bigger issue is bypassing the cache expiry in nb_conda_kernels in such cases.
Hey all, FYI: in our use of conda, we build environments dynamically through `jupyter_conda`, and I can confirm that they are automatically discovered by Jupyter. But sometimes you do indeed need the cache to time out before the kernel appears.
Note, of course, that for this to work you need to install ipykernel in all the environments.
!!! How did I miss that? Thanks @fcollonval. I'll give it a try soon.
I am finding that `nb_conda_kernels` can indeed discover kernels if 1) you wait longer than 60 seconds after the environment creation, and 2) you refresh the browser. I'm working a little more to see if there are other conditions.
With JupyterLab, you don't have to refresh the browser; it will pick up new kernels in a minute or two, because JupyterLab periodically (about once every two minutes) queries the /api/kernelspecs endpoint, which triggers a fresh read of the data.
Closing, because my sense is that the modern stack handles this for us now.