David Brochart
Nothing prevents the user from crashing the kernel, since the user can run arbitrary code.
Threads are quite lightweight: you can easily start thousands of them, so we're far from reaching the limit with our use case. The user won't have to do any resource...
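As a rough illustration of how cheap threads are, here is a minimal stdlib sketch (the thread count and the trivial task are arbitrary choices for the demo, not part of the discussion above):

```python
import threading

results = []
lock = threading.Lock()

def task(i):
    # Simulate a tiny unit of work per thread.
    with lock:
        results.append(i)

# Start a thousand threads; this completes almost instantly on a typical machine.
threads = [threading.Thread(target=task, args=(i,)) for i in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # → 1000
```

Each thread here does almost nothing, which is the point: the per-thread overhead itself is small enough that thousands of concurrent tasks are not a problem.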
> there is resource limitation, and the client has to be able to resend tasks because another process may have "stolen" the available threads

There won't be any stolen threads,...
Actually it looks more like an ipykernel issue, since akernel stops immediately after interruption. I tried xeus-python (not raw xeus-python since it doesn't allow awaiting at the top-level), but interrupting...
Thanks @glentakahashi, I can reproduce too. I'm wondering what kind of implications there would be, e.g. in the notebook, if we allowed atomic multiple-cell execution, especially with regard to widgets.
Thanks for the feedback @JohanMabille, this is interesting.

> When the client is closed, the kernel close the channel.

I guess we would need a new message on the `control`...
As you said, it only makes sense in specific situations, the most obvious one being when the kernel is used by a single notebook. But in this case, it looks...
It's not possible like that, because both kernel managers don't share their state. You can see it if you list the kernels in the second process:

```py
print(mkm.list_kernel_ids())  # shows...
```
You should get the connection file from the process that started the kernel:

```py
from jupyter_client import MultiKernelManager

mkm = MultiKernelManager()
mkm.start_kernel(kernel_name="python3", kernel_id="some-kernel-id")
km = mkm.get_kernel("some-kernel-id")
km.connection_file  # 'kernel-some-kernel-id.json'
```

...
In the second process you need either a blocking or an asynchronous kernel client. I don't think you need a kernel manager in the second process as the kernel is...
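A minimal sketch of what the second process could look like with a `BlockingKernelClient`. The connection info below is a made-up placeholder (ports, key, and file name are hypothetical); in practice you would load the connection file written by the process that started the kernel, and the commented-out calls at the end assume a kernel is actually listening on those ports:

```python
import json
import tempfile

from jupyter_client import BlockingKernelClient

# Hypothetical connection info, standing in for a real connection file
# such as 'kernel-some-kernel-id.json' written by the first process.
info = {
    "shell_port": 54321, "iopub_port": 54322, "stdin_port": 54323,
    "control_port": 54324, "hb_port": 54325,
    "ip": "127.0.0.1", "key": "a0b1c2d3", "transport": "tcp",
    "signature_scheme": "hmac-sha256", "kernel_name": "python3",
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(info, f)
    connection_file = f.name

# No kernel manager needed here: the client only needs the connection info.
kc = BlockingKernelClient()
kc.load_connection_file(connection_file)
print(kc.ip)  # → 127.0.0.1

# With a real kernel listening on these ports, you would then do:
# kc.start_channels()
# kc.wait_for_ready(timeout=5)
# kc.execute_interactive("print('hello')")
```

The point is that the client side is stateless with respect to kernel lifecycle: it connects to whatever the connection file describes, while starting, restarting, and shutting down the kernel stays in the process that owns the kernel manager.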