nb_conda_kernels
update conda info cache in the background
this is my proposed fix for https://github.com/Anaconda-Platform/nb_conda_kernels/issues/210
This looks great. Thanks @rvalieris
Small suggestion: it could be nice to request a join on the current thread when the kernel spec manager is destroyed, to handle the thread lifecycle properly.
Yes, that's a good point. So, calling .join in __del__ I guess? Or is there some other mechanism?
That should do it, indeed.
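A minimal sketch of that idea, with hypothetical names (this is not the actual nb_conda_kernels code): the manager starts a worker thread for the slow refresh and joins it in __del__, so the thread's lifecycle ends with the manager's.

```python
import threading
import time


class CacheRefreshingManager:
    """Hypothetical sketch: refresh a cache on a background thread,
    and join that thread when the manager is destroyed."""

    def __init__(self):
        self._cache = None
        self._thread = threading.Thread(target=self._refresh, daemon=True)
        self._thread.start()

    def _refresh(self):
        # Stand-in for the slow `conda info --json` subprocess call.
        time.sleep(0.1)
        self._cache = {"envs": []}

    def __del__(self):
        # Join the worker so it does not outlive the manager; a timeout
        # keeps interpreter shutdown from hanging on a stuck refresh.
        if self._thread.is_alive():
            self._thread.join(timeout=5)


m = CacheRefreshingManager()
m._thread.join()  # for the demo only, wait for the refresh to finish
print(m._cache)   # prints {'envs': []}
```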
Pinging @mcg1969 for review
I'm running CI now. Can I ask how this affects compatibility? Are we fine on Windows still, for instance?
CI failed on this import:
tests/test_runner.py:14: in <module>
from jupyter_client.blocking.client import Empty
E ImportError: cannot import name 'Empty' from 'jupyter_client.blocking.client' ($PREFIX/lib/python3.8/site-packages/jupyter_client/blocking/client.py)
jupyter_client.blocking.client doesn't have Empty anymore; however, this import is not used in test_runner, so I guess we can just remove this line?
I haven't tested on Windows. However, all this patch does is wrap the subprocess call in a thread (so that we can wait for the output without blocking). I don't think that's going to cause any problems, but if anybody can test it, that would be better.
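To illustrate the approach (a hedged sketch, not the actual patch; the class and method names here are made up): the subprocess runs on a worker thread, and callers read whatever result was last completed instead of blocking on the call.

```python
import json
import subprocess
import sys
import threading


class CondaInfoCache:
    """Hypothetical sketch: run a slow subprocess in the background and
    serve the most recent result without blocking callers."""

    def __init__(self, cmd):
        self._cmd = cmd
        self._result = None
        self._lock = threading.Lock()
        self._thread = None

    def refresh(self):
        # Start a background refresh unless one is already running.
        if self._thread is None or not self._thread.is_alive():
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()

    def _run(self):
        out = subprocess.check_output(self._cmd)
        with self._lock:
            self._result = json.loads(out)

    def get(self):
        # Returns the cached value immediately; may be None before the
        # first refresh completes.
        with self._lock:
            return self._result


# Demo with a fast command standing in for `conda info --json`.
cache = CondaInfoCache([sys.executable, "-c", "print('{\"envs\": []}')"])
cache.refresh()
cache._thread.join()  # for the demo only; real callers would not block
print(cache.get())    # prints {'envs': []}
```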
Also, I just saw that jupyter_client 7.0 has the new kernel provisioner machinery. I haven't been able to wrap my head around how it all works yet, but I wonder if we could do this better using that.
I agree that we can remove the import.
I do feel like we need to verify this on windows before we can merge, though. Manual verification is fine.
Alright, I did a simple test. Disclaimer, though: I've never used conda on Windows. Here's what I did:
- installed miniconda3 on a win10 vm
- created a jupyter env with jupyterlab and nb_conda_kernels
- verified it works ok
- installed nb_conda_kernels with the patch and restarted jupyterlab
- created a new env after jupyterlab was started
- verified the new env appears in the launcher after some time
@mcg1969, could you take another look at this?
Several team members have reported that conda info --json takes ≈25 seconds, and this package blocks on it.
We just ran into this again, and it took a while to dive in and figure out that this is where the problem was: https://github.com/2i2c-org/infrastructure/issues/2047. This blocks server startup long enough that server spawns fail...
While I believe this is still necessary, it turns out this wasn't entirely what my problem was - https://github.com/2i2c-org/infrastructure/issues/2047#issuecomment-1379785520 lists the other primary issue. Apologies for the wrong path taken earlier.
@basnijholt, now that we're unstuck we can reconsider this one. We would need to resolve the merge conflicts first.
@rvalieris @fcollonval @yuvipanda @basnijholt could I perhaps get some eyes on this? I needed to carefully merge this with #233 / #231 and I would love a second pair of eyes to make sure I've done it correctly.
Hello, it's not clear to me how to trigger these zombie processes, but it seems OK to me to execute the waits inside the thread. I would only add a guard to make sure the child process is a zombie before waiting on it, to avoid getting stuck waiting indefinitely:

import psutil

p = psutil.Process()
for c in p.children():
    if c.status() != psutil.STATUS_ZOMBIE:
        continue  # skip children that are still running
    try:
        c.wait(timeout=0)
    except psutil.TimeoutExpired:
        pass
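For context, the non-blocking reap the snippet above relies on can also be done with just the standard library on POSIX: os.waitpid with os.WNOHANG returns (0, 0) when the child has not exited yet, instead of blocking. A quick POSIX-only sketch:

```python
import os
import subprocess
import sys
import time

# Spawn a child that exits immediately; until it is reaped it is a zombie.
child = subprocess.Popen([sys.executable, "-c", "pass"])
time.sleep(0.5)  # give the child time to exit

# Non-blocking reap: with WNOHANG, waitpid returns immediately either way.
pid, status = os.waitpid(child.pid, os.WNOHANG)
print(pid == child.pid)  # True once the child has exited and been reaped
```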
Uh, never mind, I misread the docs: timeout=0 will not block.
I was worried about cases where there are other processes around, like ones spawned by jupyter-server-proxy, but I think it's fine.
Thank you for checking! We haven't tagged a release yet so even if we needed to fix something we could.