ipykernel
AttributeError: 'NoneType' object has no attribute 'thread'
After running a jupyter notebook using jupyter nbconvert --to notebook --execute <notebook_name> --ExecutePreprocessor.timeout=15000 --inplace, I get the following error:
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python3.5/logging/__init__.py", line 1881, in shutdown
h.flush()
File "/home/kbuilder/.local/lib/python3.5/site-packages/absl/logging/__init__.py", line 882, in flush
self._current_handler.flush()
File "/home/kbuilder/.local/lib/python3.5/site-packages/absl/logging/__init__.py", line 776, in flush
self.stream.flush()
File "/tmpfs/src/tf_docs_env/lib/python3.5/site-packages/ipykernel/iostream.py", line 341, in flush
if self.pub_thread.thread.is_alive():
AttributeError: 'NoneType' object has no attribute 'thread'
This is the line where it fails: https://github.com/ipython/ipykernel/blob/master/ipykernel/iostream.py#L341
What can I do to fix it?
Jupyter version: jupyter-core-4.5.0 mistune-0.8.4 nbconvert-5.5.0 nbformat-4.4.0 notebook-6.0.0
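For anyone who wants to reproduce this outside a full docs build, here is a minimal sketch that drives the same execution machinery programmatically (assumptions: absl-py, nbformat and nbconvert are installed, and the absl import is what registers the atexit logging handler seen in the traceback; I haven't verified it is strictly required):
# Minimal reproduction sketch: build a one-cell notebook that logs via absl,
# then execute it with nbconvert's ExecutePreprocessor. The failing flush
# happens when the kernel shuts down afterwards.
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.v4.new_notebook()
nb.cells.append(nbformat.v4.new_code_cell(
    "from absl import logging\n"
    "logging.info('hello from the notebook')\n"
))

ep = ExecutePreprocessor(timeout=60, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})
# If the bug triggers, the atexit traceback is printed on the kernel's stderr.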
Same here, executing:
jupyter nbconvert \
--to notebook \
--ExecutePreprocessor.kernel_name=python3 \
--ExecutePreprocessor.timeout=600 \
--execute ${IPYNB_OUTPUT} \
--output ${IPYNB_OUTPUT} \
--allow-errors
Output:
[NbConvertApp] Executing notebook with kernel: python3
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/xxxx/anaconda3/envs/zzzz/lib/python3.7/logging/__init__.py", line 2038, in shutdown
h.flush()
File "/home/xxxx/anaconda3/envs/zzzz/lib/python3.7/logging/__init__.py", line 1018, in flush
self.stream.flush()
File "/home/xxxx/anaconda3/envs/zzzz/lib/python3.7/site-packages/ipykernel/iostream.py", line 341, in flush
if self.pub_thread.thread.is_alive():
AttributeError: 'NoneType' object has no attribute 'thread'
Note that my notebook has been successfully converted despite the error message at the end.
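Since the converted notebook itself comes out fine, on our side we stopped grepping stderr and instead check the executed notebook for real error outputs. A rough sketch (the script name and argument handling are just for illustration):
# check_notebook.py <executed_notebook.ipynb>
# Exit non-zero only if a code cell actually produced an error output.
import sys
import nbformat

nb = nbformat.read(sys.argv[1], as_version=4)
errors = [
    out
    for cell in nb.cells
    if cell.cell_type == "code"
    for out in cell.get("outputs", [])
    if out.get("output_type") == "error"
]
sys.exit(1 if errors else 0)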
Versions:
--------------------------------
python 3.7.3
ipykernel 5.1.2
ipython 7.1.1
jupyter 1.0.0
jupyter_client 5.3.1
jupyter_console 6.0.0
jupyter_core 4.4.0
nbconvert 5.6.0
notebook 6.0.0
Same error when running a kernel on Kaggle.
Time # Log Message
1.53 1 [NbConvertApp] Converting notebook __notebook__.ipynb to notebook
3.62 2 [NbConvertApp] Executing notebook with kernel: python3
4.01 3 2019-08-23 21:02:23.719627: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
6.54 4 2019-08-23 21:02:26.249148: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
6.54 5 2019-08-23 21:02:26.249541: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c9e57fd190 executing computations on platform Host. Devices: 2019-08-23 21:02:26.249701: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>
6.55 6 2019-08-23 21:02:26.256911: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
6.67 7 2019-08-23 21:02:26.381657: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
6.68 8 2019-08-23 21:02:26.384756: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c9e58b94e0 executing computations on platform CUDA. Devices: 2019-08-23 21:02:26.384791: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla P100-PCIE-16GB, Compute Capability 6.0 2019-08-23 21:02:26.385175: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
6.68 9 2019-08-23 21:02:26.385969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: Tesla P100-PCIE-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.3285 pciBusID: 0000:00:04.0 2019-08-23 21:02:26.386051: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
6.68 10 2019-08-23 21:02:26.390300: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
6.69 11 2019-08-23 21:02:26.392756: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0 2019-08-23 21:02:26.393349: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
6.69 12 2019-08-23 21:02:26.396490: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0 2019-08-23 21:02:26.398342: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
6.70 13 2019-08-23 21:02:26.404279: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7 2019-08-23 21:02:26.404435: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
6.70 14 2019-08-23 21:02:26.405313: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-23 21:02:26.406100: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0 2019-08-23 21:02:26.406173: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
7.23 15 2019-08-23 21:02:26.941824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-08-23 21:02:26.941888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 2019-08-23 21:02:26.941901: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N 2019-08-23 21:02:26.942225: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
7.24 16 2019-08-23 21:02:26.943237: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-23 21:02:26.944098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9768 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)
10.06 17 Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/opt/conda/lib/python3.6/logging/__init__.py", line 1944, in shutdown h.flush() File "/opt/conda/lib/python3.6/site-packages/absl/logging/__init__.py", line 882, in flush self._current_handler.flush() File "/opt/conda/lib/python3.6/site-packages/absl/logging/__init__.py", line 776, in flush self.stream.flush() File "/opt/conda/lib/python3.6/site-packages/ipykernel/iostream.py", line 341, in flush if self.pub_thread.thread.is_alive(): AttributeError: 'NoneType' object has no attribute 'thread'
10.69 18 [NbConvertApp] Writing 24406 bytes to __notebook__.ipynb
11.56 19 [NbConvertApp] Converting notebook __notebook__.ipynb to html
11.97 20 [NbConvertApp] Writing 325062 bytes to __results__.html
Hitting the same Kaggle error here as well (+1).
Logs
Time # Log Message
1.70 1 [NbConvertApp] Converting notebook __notebook__.ipynb to notebook
3.79 2 [NbConvertApp] Executing notebook with kernel: python3
4.99 3 2019-08-27 03:15:56.608591: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
45.40 4 2019-08-27 03:16:37.022190: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
45.41 5 2019-08-27 03:16:37.025627: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c35f313dc0 executing computations on platform Host. Devices: 2019-08-27 03:16:37.025703: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined> 2019-08-27 03:16:37.028552: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
45.52 6 2019-08-27 03:16:37.139609: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
45.53 7 2019-08-27 03:16:37.140625: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55c35f3cfe50 executing computations on platform CUDA. Devices: 2019-08-27 03:16:37.140656: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla P100-PCIE-16GB, Compute Capability 6.0 2019-08-27 03:16:37.141051: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-27 03:16:37.141937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: Tesla P100-PCIE-16GB major: 6 minor: 0 memoryClockRate(GHz): 1.3285 pciBusID: 0000:00:04.0 2019-08-27 03:16:37.142045: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0 2019-08-27 03:16:37.143858: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0 2019-08-27 03:16:37.145679: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0 2019-08-27 03:16:37.146132: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0 2019-08-27 03:16:37.148064: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
45.53 8 2019-08-27 03:16:37.149869: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
45.53 9 2019-08-27 03:16:37.157249: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
45.54 10 2019-08-27 03:16:37.157496: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-27 03:16:37.158599: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-27 03:16:37.159549: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0 2019-08-27 03:16:37.159643: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
46.00 11 2019-08-27 03:16:37.620453: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-08-27 03:16:37.620523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 2019-08-27 03:16:37.620537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
46.00 12 2019-08-27 03:16:37.621046: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-27 03:16:37.622412: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2019-08-27 03:16:37.623581: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 15146 MB memory) -> physical GPU (device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0)
49.87 13 2019-08-27 03:16:41.486388: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
241.52 14 Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/opt/conda/lib/python3.6/logging/__init__.py", line 1944, in shutdown h.flush() File "/opt/conda/lib/python3.6/site-packages/absl/logging/__init__.py", line 882, in flush self._current_handler.flush() File "/opt/conda/lib/python3.6/site-packages/absl/logging/__init__.py", line 776, in flush self.stream.flush() File "/opt/conda/lib/python3.6/site-packages/ipykernel/iostream.py", line 341, in flush if self.pub_thread.thread.is_alive(): AttributeError: 'NoneType' object has no attribute 'thread'
243.20 15 [NbConvertApp] Writing 34252 bytes to __notebook__.ipynb
244.06 16 [NbConvertApp] Converting notebook __notebook__.ipynb to html
244.57 17 [NbConvertApp] Writing 319262 bytes to __results__.html
Same Kaggle notebook error here as well.
Hi, I used to have this issue as well:
jupyter 1.0.0
jupyter-client 5.3.1
jupyter-console 6.0.0
jupyter-core 4.5.0
jupyterlab 1.1.1
jupyterlab-server 1.0.6
ipykernel 5.1.1
ipython 7.8.0
ipython-genutils 0.2.0
ipywidgets 7.5.1
Downgrading ipykernel to 5.1.1 fixed it.
Same issue here. It used to work a few days ago before I ran a conda update --all
I execute notebooks from my own Python script, not with nbconvert, but it's the same library.
jupyter_client 5.3.1 py_0 conda-forge
jupyter_core 4.4.0 py_0 conda-forge
ipykernel 5.1.2 py36h5ca1d4c_0 conda-forge
ipython 7.8.0 py36h5ca1d4c_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.5.1 py_0 conda-forge
Downgrading to 5.1.1 works for me too.
It is unfortunate because the competition I joined requires submitting an offline kernel, so I cannot connect to the internet to downgrade my ipykernel.
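One thing that might work in that offline setting (a sketch only; the dataset path below is made up, and it assumes you can attach a dataset containing the ipykernel 5.1.1 wheel plus its dependencies ahead of time):
# Install a pre-downloaded ipykernel wheel without any network access.
import subprocess, sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--no-index",                                   # never touch the network
    "--find-links", "/kaggle/input/ipykernel-511",  # hypothetical dataset holding the wheel(s)
    "ipykernel==5.1.1",
])
The running kernel would still need a restart for the downgraded version to actually be picked up.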
Sad...when will this be fixed?
Bumping here as I have the same issue.
$ conda list |grep jupyter
jupyter-archive 0.5.5 py_0 conda-forge
jupyter_client 5.3.3 py37_1 conda-forge
jupyter_conda 3.1.1 py_1 conda-forge
jupyter_core 4.6.1 py37_0 conda-forge
jupyterlab 1.2.3 py_0 conda-forge
jupyterlab_server 1.0.6 py_0 conda-forge
$ conda list |grep ipy
ipykernel 5.1.3 py37h5ca1d4c_0 conda-forge
ipython 7.9.0 py37h5ca1d4c_1 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.5.1 py_0 conda-forge
Pinging @blink1073 as I am not sure any of the core devs saw that issue.
Downgrading ipykernel to 5.1.1 fixed it.
Thanks man, you're a hero.
I'm quite confident that this issue is closed by https://github.com/ipython/ipykernel/pull/463, which is part of 5.1.4.
I confirm, no issues with
$ conda list | grep jupyter
jupyter 1.0.0 py_2 conda-forge
jupyter_client 5.3.4 py37_1 conda-forge
jupyter_console 6.1.0 py_0
jupyter_core 4.6.1 py37_0 conda-forge
$ conda list | grep ipy
ipykernel 5.1.4 py37h5ca1d4c_0 conda-forge
ipython 7.12.0 py37h5ca1d4c_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.5.1 py_0 conda-forge
scipy 1.4.1 py37h921218d_0 conda-forge
I haven't seen that issue in a while.
I still saw it a few days ago, before I upgraded to the latest version. Now I don't see it anymore. So it does indeed seem to be fixed in the latest version :+1:
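If it helps anyone else, a small guard along these lines could catch the bad versions in CI before running nbconvert (a sketch; it assumes the packaging library is importable and that 5.1.4 is indeed the first fixed release, as stated above):
# Fail fast if the environment still ships an ipykernel known to hit this bug.
import ipykernel
from packaging.version import Version

if Version(ipykernel.__version__) < Version("5.1.4"):
    raise RuntimeError(
        "ipykernel %s is older than 5.1.4; expect the atexit flush error "
        "when executing notebooks with nbconvert" % ipykernel.__version__
    )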
/celebrate-and-close :)
I have a similar issue when running,
jupyter nbconvert --to html --execute notebooks.ipynb
on the following notebook:
from tqdm.notebook import tqdm
from time import sleep

pbar = tqdm(total=10)
for i in range(5):
    sleep(1)
    pbar.update(1)
pbar.close()
Running the notebook from jupyter notebook does not produce an error. However, with nbconvert I get,
[NbConvertApp] Converting notebook notebooks.ipynb to html
[NbConvertApp] Executing notebook with kernel: python3
Exception ignored in: <function tqdm.__del__ at 0x7f9156085af0>
Traceback (most recent call last):
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/tqdm/std.py", line 1124, in __del__
self.close()
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/tqdm/notebook.py", line 271, in close
self.sp(bar_style='danger')
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/tqdm/notebook.py", line 170, in display
rtext.value = right
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/traitlets/traitlets.py", line 585, in __set__
self.set(obj, value)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/traitlets/traitlets.py", line 574, in set
obj._notify_trait(self.name, old_value, new_value)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/traitlets/traitlets.py", line 1134, in _notify_trait
self.notify_change(Bunch(
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipywidgets/widgets/widget.py", line 605, in notify_change
self.send_state(key=name)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipywidgets/widgets/widget.py", line 489, in send_state
self._send(msg, buffers=buffers)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipywidgets/widgets/widget.py", line 737, in _send
self.comm.send(data=msg, buffers=buffers)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/comm/comm.py", line 122, in send
self._publish_msg('comm_msg',
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/comm/comm.py", line 66, in _publish_msg
self.kernel.session.send(self.kernel.iopub_socket, msg_type,
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/jupyter_client/session.py", line 758, in send
stream.send_multipart(to_send, copy=copy)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/iostream.py", line 264, in send_multipart
return self.io_thread.send_multipart(*args, **kwargs)
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/iostream.py", line 214, in send_multipart
self.schedule(lambda : self._really_send(*args, **kwargs))
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/iostream.py", line 207, in schedule
f()
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/iostream.py", line 214, in <lambda>
self.schedule(lambda : self._really_send(*args, **kwargs))
File "/Users/user1/anaconda3/envs/book/lib/python3.8/site-packages/ipykernel/iostream.py", line 222, in _really_send
self.socket.send_multipart(msg, *args, **kwargs)
AttributeError: 'NoneType' object has no attribute 'send_multipart'
[NbConvertApp] Writing 283717 bytes to notebooks.html
jupyter 1.0.0 py38_7
jupyter_client 6.1.6 py_0
jupyter_console 6.2.0 py_0
jupyter_core 4.6.3 py38_0
ipykernel 5.3.4 py38h5ca1d4c_0
ipython 7.18.1 py38h5ca1d4c_0
ipython_genutils 0.2.0 py38_0
ipywidgets 7.5.1 py_1
scipy 1.5.0 py38hbab996c_0
I have initially raised the issue here: https://github.com/tqdm/tqdm/issues/1092
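For now I'm working around it by using the console tqdm instead of the widget-based one when the notebook is executed headlessly, so nothing tries to send widget updates while the kernel is shutting down. A sketch of the replacement cell:
from time import sleep
from tqdm import tqdm  # plain text progress bar; tqdm.notebook is the widget one from the traceback

with tqdm(total=5) as pbar:  # the context manager guarantees close() runs inside the cell
    for i in range(5):
        sleep(1)
        pbar.update(1)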
I can confirm I'm also seeing the problem with ipykernel 5.3.4. Seemingly it has returned :(
Experiencing this with ipykernel 5.4.3
Also faced this issue here
@bknaepen Similar things happen on my side with a fresh install. The kernel restarts automatically, and:
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File ".../lib/python3.7/logging/__init__.py", line 2036, in shutdown
h.flush()
File ".../lib/python3.7/site-packages/absl/logging/__init__.py", line 866, in flush
self.stream.flush()
File "..../lib/python3.7/site-packages/ipykernel/iostream.py", line 357, in flush
self._flush()
File ".../lib/python3.7/site-packages/ipykernel/iostream.py", line 384, in _flush
parent=self.parent_header, ident=self.topic)
File ".../lib/python3.7/site-packages/jupyter_client/session.py", line 751, in send
stream.send_multipart(to_send, copy=copy)
File ".../lib/python3.7/site-packages/ipykernel/iostream.py", line 214, in send_multipart
self.schedule(lambda : self._really_send(*args, **kwargs))
File ".../lib/python3.7/site-packages/ipykernel/iostream.py", line 207, in schedule
f()
File ".../lib/python3.7/site-packages/ipykernel/iostream.py", line 214, in <lambda>
self.schedule(lambda : self._really_send(*args, **kwargs))
File ".../lib/python3.7/site-packages/ipykernel/iostream.py", line 222, in _really_send
self.socket.send_multipart(msg, *args, **kwargs)
AttributeError: 'NoneType' object has no attribute 'send_multipart'
jupyter-server-1.4.1, jupyterlab-3.0.12, jupyterlab-server-2.3.0, ipykernel 5.3.4
I see a similar problem with nbconvert's execute preprocessor (from nbconvert.preprocessors import ExecutePreprocessor).
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/logging/__init__.py", line 2129, in shutdown
h.flush()
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/logging/__init__.py", line 1063, in flush
self.stream.flush()
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/ipykernel/iostream.py", line 355, in flush
self._flush()
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/ipykernel/iostream.py", line 381, in _flush
self.session.send(self.pub_thread, 'stream', content=content,
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/jupyter_client/session.py", line 753, in send
stream.send_multipart(to_send, copy=copy)
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/ipykernel/iostream.py", line 212, in send_multipart
self.schedule(lambda : self._really_send(*args, **kwargs))
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/ipykernel/iostream.py", line 205, in schedule
f()
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/ipykernel/iostream.py", line 212, in <lambda>
self.schedule(lambda : self._really_send(*args, **kwargs))
File "/home/rnd/miniconda/envs/rnd_venv/lib/python3.9/site-packages/ipykernel/iostream.py", line 220, in _really_send
self.socket.send_multipart(msg, *args, **kwargs)
AttributeError: 'NoneType' object has no attribute 'send_multipart'
Versions:
ipykernel=5.5.3=py39hef51801_0
nbconvert=6.0.7=py39hf3d152e_3
I don't have in-depth knowledge of the code base, but I guess after close is called and the socket is closed, the socket should not be accessed any more (see my patch below).
def close(self):
    if self.closed:
        return
    self.socket.close()
    self.socket = None

@property
def closed(self):
    return self.socket is None

...

def send_multipart(self, *args, **kwargs):
    """send_multipart schedules actual zmq send in my thread.

    If my thread isn't running (e.g. forked process), send immediately.
    """
    self.schedule(lambda: self._really_send(*args, **kwargs))

def _really_send(self, msg, *args, **kwargs):
    """The callback that actually sends messages"""
    mp_mode = self._check_mp_mode()

    if mp_mode != CHILD:
        # we are master, do a regular send
        if not self.closed:  # <<<<< only send if the socket has not been closed yet
            self.socket.send_multipart(msg, *args, **kwargs)
    else:
        # we are a child, pipe to master
        # new context/socket for every pipe-out
        # since forks don't teardown politely, use ctx.term to ensure send has completed
        ctx, pipe_out = self._setup_pipe_out()
        pipe_out.send_multipart([self._pipe_uuid] + msg, *args, **kwargs)
        pipe_out.close()
        ctx.term()
I think this is already fixed in v6.13.0 (thanks to PR #899). Can anyone confirm?