
Process hang while executing logging.warning under specific circumstances

Open constcast opened this issue 9 years ago • 11 comments

I identified a kernel hang when calling the Python logging facility under certain circumstances. The hang only occurs when the function is used in Jupyter, i.e. it does not happen in an IPython command shell.

The hang happens when I import a specific module from this repository: https://github.com/secdev/scapy/ (the issue can be reproduced with the version from pip as well).

from scapy.route6 import *

The hang happens at line 234, when this function is called:

warning("No route found for IPv6 destination %s (no default route?)" % dst)

warning() is a wrapper that calls

logging.getLogger("scapy.runtime").warning(x)

where x is the message. I am not able to reproduce this issue in a smaller test case, i.e. I cannot find other situations where a call to logging.warning() hangs the kernel.
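
For reference, the wrapper looks roughly like this (a sketch based on scapy's error handling module; exact names may differ between scapy versions):

import logging

log_runtime = logging.getLogger("scapy.runtime")

def warning(x):
    # thin wrapper around the stdlib logger; the hang occurs inside the
    # logging call itself, not in this wrapper
    log_runtime.warning(x)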

The reason I am opening this issue against Jupyter is that the Jupyter notebook is the only environment where this results in a hang. The code works without issues in the standard Python 2.7 interpreter and in the IPython command shell.

I experience the issue on Mac OS X with the following Python and package versions:

$ python --version
Python 2.7.10  

ipython and jupyter versions are:

ipykernel==4.3.1
ipython==4.2.0
ipython-genutils==0.1.0
ipywidgets==5.1.2
jupyter==1.0.0
jupyter-client==4.2.2
jupyter-console==4.1.1
jupyter-core==4.1.0

Is there any additional information that I can provide to further debug this problem?

constcast commented May 01 '16 07:05

That's very odd, I don't know why it would hang. If you interrupt it while it's hanging, do you get any traceback?

There was a change to the logging setup in the kernel recently - ipython/ipykernel#127. Can you try installing ipykernel from master and see if that makes a difference?

takluyver commented May 01 '16 10:05

There is no backtrace or any other information when I interrupt the kernel.

I installed the current ipykernel master (commit ffadc6351c0e720b8700108d89daa9d50ba38d0d). There is no difference in behavior compared to the version currently available from pip.

Do you have any ideas what I can do to further debug this issue?

constcast commented May 01 '16 15:05

> Do you have any ideas what I can do to further debug this issue?

Not really, I'm afraid. Double check that it is definitely the log call hanging and not anything else nearby in the code. Dig into the logging module to try to work out exactly what is hanging.
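
If interrupting gives nothing, one way to see where everything is stuck is to arm a watchdog thread in the same cell before triggering the hang and let it dump every thread's stack to the original stderr (a sketch; dump_stacks_after is just an illustrative helper, and the output lands in the terminal where the notebook server was started, not in the notebook itself):

import sys
import threading
import time
import traceback

def dump_stacks_after(seconds=10):
    # after `seconds`, write the stack of every thread to sys.__stderr__,
    # bypassing the ipykernel stream that may itself be blocked
    def _dump():
        time.sleep(seconds)
        for tid, frame in sys._current_frames().items():
            sys.__stderr__.write("\nThread %s:\n" % tid)
            traceback.print_stack(frame, file=sys.__stderr__)
        sys.__stderr__.flush()
    t = threading.Thread(target=_dump)
    t.daemon = True
    t.start()

dump_stacks_after(10)
from scapy.route6 import *  # triggers the hang; stacks are dumped ~10s later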

takluyver commented May 01 '16 16:05

@takluyver do you think this is the same issue, and does it help identify what's going wrong? Thanks!

https://github.com/ipython/ipyparallel/issues/161

slishak commented Jul 04 '16 16:07

On Windows I'd guess that the issue is trying to write to a nonexistent stderr; once some buffer fills up, the next write call will block. We've seen this before on Windows. This issue was reported on a Mac, so I doubt it's the same, but the cause may be similar.

takluyver commented Jul 04 '16 16:07

I have the same issue when trying to import the rdflib module from https://github.com/RDFLib/rdflib.

When importing rdflib from plain Python, it logs this line: INFO:rdflib:RDFLib Version: 4.2.1. When importing from Jupyter, the kernel just hangs there.

UPDATE: I tried uninstalling and reinstalling the following packages from Anaconda, but the bug is still there:

jupyter==1.0.0
jupyter_client==4.4.0
jupyter_console==5.0.0
jupyter-core==4.2.0
ipykernel==4.5.0
ipyparallel==5.2.0
ipython==5.1.0
ipython-genutils==0.1.0
ipywidgets==5.2.2

Currently, I run this code (which suppresses rdflib's log output) before importing the module to work around this bug:

import logging
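# raise rdflib's logger to ERROR so its import-time INFO message is never emitted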
logging.getLogger("rdflib").setLevel(logging.ERROR)

binh-vu commented Jan 05 '17 13:01

I ran into the same problem with one of my own packages last week. Ultimately, this appears to be caused by Python's import lock causing a deadlock when ipykernel.iostream.OutStream.flush is called (by logging.warning or otherwise) during an import. Because of changes to the import logic, the problem may disappear when running on Python >= 3.3 (not tested).

OutStream.flush schedules OutStream._flush in a separate thread (pub_thread), and, via jupyter_client.session.Session.send, Session.msg, Session.msg_header, Session.msg_id and uuid.uuid4, triggers an import in pub_thread which blocks if an import is running in the main thread. At the same time, OutStream.flush waits for pub_thread to return from _flush, resulting in a deadlock.
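
Reduced to its essentials, the pattern looks something like this (a sketch of Python 2's single global import lock, not of the actual ipykernel code path; imp.acquire_lock stands in for "an import is already running in the main thread"):

import imp
import threading

def worker():
    # On Python 2 every import statement acquires the single global import
    # lock (even for modules already in sys.modules), so this blocks until
    # the main thread releases it.
    import uuid
    uuid.uuid4()

imp.acquire_lock()  # simulate "the main thread is in the middle of an import"
try:
    t = threading.Thread(target=worker)
    t.start()
    t.join()  # deadlock: worker waits for the import lock, we wait for worker
finally:
    imp.release_lock()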

Minimal example that triggers the behavior:

# hello.py
import sys
sys.stderr.write('hello')
sys.stderr.flush()
Then, in a notebook cell:

import hello

(never returns)

Execution traces of the main thread and pub_thread. Running IPython 5.1.0 and Jupyter notebook 4.3.1 on Python 2.7.6.

joe1gi commented Jan 08 '17 23:01

That deadlock should be fixed in ipykernel 4.5.2, so upgrading ipykernel should solve it.
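
With pip that is just, e.g.:

pip install --upgrade ipykernel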

minrk commented Jan 09 '17 12:01

Unfortunately, I see the same behavior with ipykernel 4.5.2 (other versions are jupyter_client 4.4.0, jupyter_core 4.2.1, notebook 4.3.1).

Trace of pub_thread

joe1gi commented Jan 10 '17 06:01

I was able to reproduce it with these simple steps:

  1. created a file named e.g. jupyter_logging_test.py with the following contents and put the file in the PYTHONPATH:
import logging
logging.error('foo')
  2. ran the jupyter-console command
  3. ran the import jupyter_logging_test statement in the console
  4. it hangs

Please note that it happens only on import, i.e. it doesn't happen when one runs a Python file as a script.
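
For comparison, a variant of the test module that defers the log call until after the import has finished does not trigger the hang, which matches the import-lock analysis above (a sketch; the main() name is only for illustration):

# jupyter_logging_test_deferred.py
import logging

def main():
    # no import is in progress when this runs, so ipykernel's stream flush
    # cannot deadlock on the import lock
    logging.error('foo')

Importing this module and then calling main() from the console logs the message normally.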

The environment:

ipykernel==4.4.1
ipython==5.0.0
ipython-genutils==0.1.0
ipywidgets==5.2.2

OS X Sierra 10.12.3 (16D32)

ikudriavtsev commented Feb 01 '17 13:02

I haven't verified whether this is still an issue, but I'm moving it to the ipykernel repo, since it looks like that's where the fix would need to go.

ivanov commented Nov 18 '23 22:11